
InPsych 2013 | Vol 35

Cover feature: Learning and Learning Disabilities in Schools

Why can’t Jonny read? Bringing theory into cognitive assessment

Research into cognitive abilities has made significant advances over the last couple of decades, culminating in the rise of the Cattell-Horn-Carroll (CHC) model. CHC is regarded by many leaders in the field, such as Professor Alan Kaufman, as the most empirically well-validated structure of human cognitive abilities currently available and has been likened to the periodic table in chemistry (McGrew, 2009).

Cattell-Horn-Carroll model

CHC is a three-stratum model. Spearman’s g (stratum III), which represents general overall cognitive ability and is most often operationalised as the full-scale IQ, overarches the broad cognitive abilities (stratum II), which are in turn broken down into more specific narrow abilities (stratum I). A detailed description of the history of CHC, along with descriptions of abilities, can be found at www.iqscorner.com/2009/11/cattell-horn-carroll-chc-theory-key.html.

Seven of the 16 CHC broad abilities are of particular relevance to practitioners in school settings. These abilities have been identified as key cognitive indicators of academic achievement (see Figure 1). Research investigating cognitive-achievement relations has further highlighted the role of certain narrow abilities in literacy and numeracy acquisition. Identification of students’ strengths and weaknesses in these key broad CHC abilities (such as fluid reasoning and long-term storage and retrieval) and narrow CHC abilities (such as phonetic coding, associative memory and working memory) can significantly augment information from full-scale IQs, and enables the development and fine-tuning of interventions to address difficulties in academic achievement.


Figure 1. The Cattell-Horn-Carroll (CHC) model showing cognitive abilities of relevance to academic achievement

Cognitive assessment batteries through a CHC lens

The CHC model’s significance for informing interventions has had a considerable impact on cognitive test battery design. While recent versions of several common test batteries have been explicitly developed using the CHC blueprint (e.g., the Stanford-Binet Intelligence Scales [SB5] and Woodcock-Johnson Tests of Cognitive Abilities [WJ III COG]), others have begun to incorporate CHC terminology (e.g., Wechsler Intelligence Scale for Children [WISC-IV]). However, many assessment tools are limited in the range of broad and narrow CHC abilities they cover. For example, the CHC broad ability long-term storage and retrieval, and one of its narrow abilities associative memory, have been identified as integral to literacy acquisition but are not measured by the cognitive assessment batteries most commonly available to school psychologists. This lag has been referred to as the theory-practice gap: a divide between contemporary theoretical understanding of cognitive abilities and current practice in the assessment of learning difficulties. One method developed to bridge this divide while still using the most commonly available test batteries is a process called ‘cross-battery assessment’ (XBA), which enables a comprehensive cognitive assessment grounded in CHC theory.

Cross-battery assessment

Though relatively recent to the field of educational psychology, the XBA approach has long been part of neuropsychology’s assessment processes. The XBA approach provides a systematic means by which many cognitive batteries can be interpreted through the CHC lens, enabling practitioners to capture a more detailed picture of a student’s cognitive profile. Flanagan and colleagues (2013) have devised psychometrically sound guidelines and procedures for the integration of test scores derived from multiple test batteries. The guidelines, supported by well-designed spreadsheet calculators, provide effective resources that eliminate ad hoc approaches to assessment, facilitating convenient step-by-step procedures for implementing an XBA.
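As a toy illustration of the kind of arithmetic such calculators automate: before scores from different batteries can be combined, they must sit on a common metric. The rescaling below (scaled scores with mean 10, SD 3, onto the standard-score metric of mean 100, SD 15) is standard psychometrics; the simple average used to combine the two subtests is an illustrative placeholder, not Flanagan et al.’s actual composite procedure, and the subtest names are used loosely.

```python
def scaled_to_standard(scaled, mean_from=10, sd_from=3, mean_to=100, sd_to=15):
    """Rescale a score onto a common metric via its z-score."""
    z = (scaled - mean_from) / sd_from
    return mean_to + z * sd_to

# WISC-IV subtests report scaled scores (mean 10, SD 3); WJ III COG reports
# standard scores (mean 100, SD 15). Put both on the standard-score metric:
wisc_digit_span = scaled_to_standard(7)   # -> 85.0
wj_numbers_reversed = 82                  # already on the 100/15 metric

# Simple average as a toy broad-ability (Gsm) estimate, NOT the XBA formula:
gsm_estimate = (wisc_digit_span + wj_numbers_reversed) / 2
print(gsm_estimate)  # 83.5
```

The point of the common metric is that a scaled score of 7 and a standard score of 85 describe the same relative standing, so cross-battery comparisons become meaningful.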

The case study below is an XBA application for a child with reading difficulties. It cogently illustrates the comprehensive differential assessment of cognitive strengths and weaknesses that can be obtained, enabling individualised interventions to be developed. The combination of poor auditory processing and weak working memory provides a sound explanation for Jonny’s current difficulties in reading and following instructions. Intervention would therefore target these weaknesses.

In any cognitive assessment for learning difficulties, psychologists will of course also need to take into account non-cognitive factors such as sleep problems or poor family functioning. Those aside, CHC and the XBA approach provide psychologists with a significantly advanced, evidence-based model and methodology of cognitive ability assessment that is well supported by continuing detailed research and development. Many research developments take time to be translated into practice. Encouragingly, a great deal of work has been done constructing readily available spreadsheets that give practitioners the means to understand and implement more sophisticated, well-validated and detailed diagnostics of cognitive functions. With research continuing to improve guidelines for assessing the impact of these cognitive functions on learning, the capacity for practising psychologists to offer more tailored and effective interventions to children with learning difficulties has been significantly enhanced.

Case study: Jonny

Jonny, a 9-year-old boy in Grade 3, is referred for assessment due to reading difficulties and an inability to follow instructions.

Step 1: Select a cognitive battery that best addresses referral concerns

While the WJ III COG may be the most appropriate battery for such a referral (since it measures a number of CHC abilities implicated in literacy acquisition such as phonetic coding and associative memory), additional considerations such as student language proficiency and test availability also need to be taken into account at this step. The WISC-IV is chosen as the primary battery for this case, as most school psychologists around Australia would be expected to have access to it.

Step 2: Identify adequately represented CHC abilities

Inspection of the WISC-IV via the CHC lens reveals that four of the seven broad CHC abilities identified as important for academic achievement are measured adequately by this battery (crystallised knowledge [Gc], visual processing [Gv], short-term memory [Gsm], processing speed [Gs]). Under XBA, ‘adequately’ is defined as providing at least two qualitatively different narrow ability measures of a broad CHC ability. A CHC ability classification guide for numerous cognitive and achievement batteries has been developed by Flanagan et al. (2013).
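The adequacy rule in Step 2 can be sketched mechanically: count the qualitatively different narrow-ability measures a battery offers per broad ability, and keep the broad abilities with at least two. The subtest-to-CHC classifications below are hypothetical placeholders for illustration only, not the actual classification tables in Flanagan et al. (2013).

```python
from collections import defaultdict

# Hypothetical mapping of administered subtests to (broad, narrow) CHC abilities.
subtest_classifications = {
    "Vocabulary":             ("Gc", "lexical knowledge"),
    "Similarities":           ("Gc", "language development"),
    "Block Design":           ("Gv", "visualisation"),
    "Digit Span":             ("Gsm", "memory span"),
    "Letter-Number Sequencing": ("Gsm", "working memory"),
    "Coding":                 ("Gs", "rate of test taking"),
}

def adequately_represented(classifications):
    """Under XBA, a broad ability is 'adequately' represented when the
    battery offers at least two qualitatively different narrow-ability
    measures of it."""
    narrow_by_broad = defaultdict(set)
    for broad, narrow in classifications.values():
        narrow_by_broad[broad].add(narrow)
    return {broad for broad, narrows in narrow_by_broad.items() if len(narrows) >= 2}

print(sorted(adequately_represented(subtest_classifications)))
# ['Gc', 'Gsm'] -- Gv and Gs each have only one narrow measure here
```

With this toy mapping, only Gc and Gsm qualify; the remaining broad abilities would need supplementary subtests, which is exactly the situation Step 3 addresses.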

Step 3: Select tests to measure abilities and processes not measured by core battery

The CHC broad abilities auditory processing [Ga] and long-term storage and retrieval [Glr] have been identified as key predictors of reading but are not specifically measured by the WISC-IV, so the core battery needs to be supplemented to provide a comprehensive profile that includes assessment of these skills. Four WJ III COG subtests are chosen to measure Ga and Glr. While the WISC-IV adequately measures only one narrow ability of fluid reasoning [Gf], induction, research indicates that Gf is of lesser importance to reading achievement at this stage of child development, so the decision is made not to investigate this ability further.

Step 4: Administer core battery and supplementary tests

Jonny’s full scale IQ obtained using the WISC-IV was in the average range, providing limited information as to why he is experiencing reading difficulties and justifying the use of supplementary tests to provide a more comprehensive profile.

Step 5: Use the XBA Data Management and Interpretive Assistant (XBA DMIA v2.0) to assist interpretation

This program, provided by Flanagan et al. (2013), can be used to interpret assessment results through the CHC framework and is a significant tool for bridging the theory-practice gap in cognitive assessment.

Step 6: Follow XBA interpretative guidelines

Interpretation of the XBA identifies that Jonny has a weakness in auditory processing [Ga]. Additionally, the XBA guidelines suggest follow-up of short-term memory [Gsm] is needed, as there appears to be a discrepancy between the narrow abilities of working memory and memory span, with results indicating a significant weakness in the former. Consequently, an additional WJ III COG test measuring the potentially weaker narrow ability of working memory is administered to test this hypothesis, with results supporting the conclusion that Jonny experiences difficulty with working memory.
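The follow-up logic in Step 6 can be sketched as a pair of simple checks on the two Gsm narrow-ability scores. The scores and cut-offs below are hypothetical illustrations (a common convention treats a standard score below 85, more than 1 SD below the mean of 100, as a normative weakness); the actual XBA criteria are those specified in Flanagan et al. (2013).

```python
# Hypothetical thresholds for illustration only.
WEAKNESS_CUTOFF = 85   # more than 1 SD below the mean (100, SD 15)
DISCREPANCY_GAP = 15   # treat a 1 SD gap between narrow abilities as notable

def review_gsm(memory_span, working_memory):
    """Flag whether the Gsm narrow-ability scores warrant follow-up testing."""
    flags = []
    if abs(memory_span - working_memory) >= DISCREPANCY_GAP:
        flags.append("narrow-ability discrepancy: follow up the weaker ability")
    if min(memory_span, working_memory) < WEAKNESS_CUTOFF:
        flags.append("normative weakness in at least one narrow ability")
    return flags

# A Jonny-style profile: average memory span, weak working memory.
print(review_gsm(memory_span=100, working_memory=78))
```

Both flags fire for this profile, which is what triggers the confirmatory WJ III COG working memory test described above.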

The XBA approach to assessment provides a comprehensive profile that identifies problems with auditory processing and short-term memory, in particular working memory, effectively explaining Jonny’s difficulties in reading and following instructions. The assessment enables intervention to be tailored specifically to address these difficulties in Jonny’s reading skills.

The first author can be contacted at kate.jacobs@monash.edu


  • Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2013). Essentials of cross-battery assessment (3rd ed.). Hoboken, NJ: John Wiley.
  • McGrew, K. (2009). Editorial: CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37, 1-10.

Disclaimer: Published in InPsych in December 2013. The APS aims to ensure that information published in InPsych is current and accurate at the time of publication. Changes after publication may affect the accuracy of this information. Readers are responsible for ascertaining the currency and completeness of information they rely on, which is particularly important for government initiatives, legislation or best-practice principles which are open to amendment. The information provided in InPsych does not replace obtaining appropriate professional and/or legal advice.