Reliability and Concurrent Validity of the Jouve-Cerebrals Test of Induction: A Correlational Study with SAT and RIST

 

This study aimed to evaluate the reliability and concurrent validity of the Jouve-Cerebrals Test of Induction (JCTI), a measure of inductive reasoning. A sample of 2,306 participants completed the JCTI, with a subset of 63 students also providing self-reported Scholastic Aptitude Test (SAT) scores and another subset of 23 students completing the Reynolds Intellectual Screening Test (RIST). The JCTI demonstrated high reliability (Cronbach's Alpha = .90) and satisfactory Item Characteristic Curves based on Item Response Theory analysis. Concurrent validity was assessed through correlations with SAT and RIST scores. The JCTI showed strong correlations with SAT Composite (r = .79) and SAT Math reasoning (r = .84) scores, while a weaker correlation was found with SAT Verbal reasoning (r = .38). Both the verbal and nonverbal RIST subtests correlated with the JCTI at approximately .90. Limitations included small sample sizes for the concurrent validity analyses and reliance on self-reported SAT scores. Overall, the JCTI demonstrated high reliability and concurrent validity, supporting its use as a measure of inductive reasoning. Future research should focus on refining the test, investigating relationships with other cognitive ability measures, and exploring potential explanations for the weaker correlation with SAT Verbal reasoning.

Keywords: JCTI, SAT, RIST, inductive reasoning, reliability, concurrent validity

 

Inductive reasoning, a crucial aspect of human intelligence, involves the ability to recognize patterns and extrapolate from specific observations to form general conclusions (Hayes et al., 2010). The assessment of inductive reasoning is essential in various domains, including education and employment, as it relates to problem-solving, creativity, and adaptability (Deary et al., 2007). Consequently, there is a growing interest in developing psychometric instruments that can effectively measure inductive reasoning (Nisbett et al., 2012).

Several psychometric theories have been employed in the development of such instruments, including Classical Test Theory (CTT; Lord & Novick, 1968) and Item Response Theory (IRT; Embretson & Reise, 2000). IRT, in particular, offers several advantages over CTT, such as the ability to model individual item properties and examine item functioning across different groups (Hambleton et al., 1991). The present study utilizes both CTT and IRT to evaluate the reliability and concurrent validity of the Jouve-Cerebrals Test of Induction (JCTI), a measure of inductive reasoning.

The JCTI, developed in 2010 by Jouve as a revision of the Test de Raisonnement Inductif (TRI; Jouve, 2010a), is an online test that measures inductive reasoning through a series of 52 abstract pattern-matching items. The test's development was guided by the Cattell-Horn-Carroll (CHC) theory of cognitive abilities, which emphasizes the distinction between fluid intelligence (Gf) and crystallized intelligence (Gc) (McGrew, 2009). Inductive reasoning is a primary component of Gf and is considered a strong predictor of academic and professional success (Schneider & McGrew, 2018).

Prior research on the JCTI has demonstrated satisfactory psychometric properties, including high internal consistency and significant correlations with other cognitive ability measures (Jouve, 2010b; Jouve, 2010c). However, there is a need for further investigation of the JCTI's reliability and validity using more advanced psychometric techniques, such as IRT, to enhance the understanding of the test's item functioning and its relationships with other measures.

The present study aims to address this gap in the literature by conducting a comprehensive psychometric analysis of the recently revised JCTI. The research hypothesis is that the latest revision of the JCTI is a reliable and valid measure of inductive reasoning. The study will evaluate the reliability of the JCTI using Cronbach's alpha coefficient and IRT, and assess its concurrent validity through correlations with other cognitive ability measures, namely the Scholastic Aptitude Test (SAT) and the Reynolds Intellectual Screening Test (RIST).

The selection of these specific measures is justified based on their relevance to the construct of inductive reasoning and their established psychometric properties. The SAT is a widely used college entrance examination that assesses both verbal and mathematical reasoning abilities (College Board, 2021), while the RIST is a brief screening test that measures general intelligence using nonverbal and verbal subtests from the Reynolds Intellectual Assessment Scales (RIAS) (Reynolds & Kamphaus, 2003).

The present study seeks to contribute to the existing literature on the JCTI's validity by examining its relationship with the SAT and RIST scores. Based on the literature, it is hypothesized that the JCTI will show higher correlations with measures of fluid intelligence (SAT Math reasoning, RIST OIO subtest) than with measures of crystallized intelligence (SAT Verbal reasoning, RIST GWH subtest). Additionally, the study will employ Item Response Theory (IRT) analysis to assess the psychometric properties of the JCTI items and improve its overall quality (Baker, 2001).

By examining the relationships between the JCTI and other intelligence measures, this study aims to enhance our understanding of the JCTI as a measure of fluid intelligence and provide valuable information for researchers and practitioners in the field of psychometrics. Furthermore, this research contributes to the broader discussion of intelligence measurement and the distinction between fluid and crystallized intelligence.

Literature Review

The study of human intelligence has been a central topic in psychology since the early 20th century. Researchers have sought to understand the different facets of intelligence, including the ability to reason inductively. This literature review explores the historical development of psychometric theories of inductive reasoning, from Charles Spearman to John Carroll and beyond, and discusses the interdisciplinary research that has enriched our understanding of this essential cognitive skill.

Spearman's Two-Factor Theory of Intelligence (1904) was among the first attempts to formalize the concept of human intelligence. Spearman proposed that intelligence is composed of a general factor (g-factor) and specific abilities (s-factors). Although Spearman's theory did not explicitly focus on inductive reasoning, it laid the foundation for future investigations into the structure of human intelligence (Spearman, 1904).

Thurstone's Theory of Primary Mental Abilities (1938) expanded on Spearman's work by identifying seven primary abilities, one of which was inductive reasoning. Thurstone argued that each primary ability was relatively independent and that the g-factor was an artifact of these primary abilities (Thurstone, 1938). This theory emphasized the importance of inductive reasoning as a distinct cognitive skill within the broader landscape of human intelligence.

Guilford's Structure of Intellect Model (1959) further refined the classification of mental abilities, proposing a three-dimensional model consisting of operations, content, and products. Guilford underscored the importance of inductive reasoning as an essential component of intellectual functioning (Guilford, 1959). In his model, inductive reasoning represented one of the operations individuals use to process various types of content and produce different intellectual products.

Cattell's Fluid and Crystallized Intelligence Theory (1963) introduced a distinction between fluid intelligence (Gf) and crystallized intelligence (Gc). Fluid intelligence, which includes inductive reasoning, represents the ability to solve novel problems and adapt to new situations. In contrast, crystallized intelligence consists of acquired knowledge and skills (Cattell, 1963). Cattell's theory highlighted the role of inductive reasoning in the broader context of fluid intelligence.

Carroll's Three-Stratum Theory (1993) synthesized the contributions of earlier theories by proposing a hierarchical organization of cognitive abilities. The theory placed inductive reasoning within the broad ability of fluid intelligence (Gf) (Carroll, 1993). Carroll's model has been widely accepted and provided a robust framework for studying inductive reasoning as an essential aspect of human intelligence.

Other influential theories, such as Sternberg's Triarchic Theory of Intelligence (1985) and Gardner's Multiple Intelligences Theory (1983), have also contributed to our understanding of inductive reasoning. These theories further underscore the importance of inductive reasoning as a distinct and essential cognitive skill.

Interdisciplinary research has expanded our knowledge of inductive reasoning, as well. Cognitive neuroscience studies have identified the neural correlates of inductive reasoning, providing insights into the biological underpinnings of this cognitive process (e.g., Goel & Dolan, 2004). Additionally, artificial intelligence research has drawn inspiration from psychometric theories to develop machine learning algorithms that can reason inductively (e.g., Lake et al., 2017).

Method

Research Design

The study employed a correlational research design to examine the relationship between the Jouve-Cerebrals Test of Induction (JCTI) scores and other measures of cognitive ability, specifically the Scholastic Aptitude Test (SAT) and the Reynolds Intellectual Screening Test (RIST). This design was chosen because it allows for the assessment of the strength and direction of the relationships between variables without manipulating any independent variables (Creswell, 2014).

Participants

A total of 2,306 participants were recruited for this study, with ages ranging from 16 to 65 years (M = 35.6, SD = 8.2). Participants were predominantly male (62%) and identified as Caucasian (72%), followed by African American (12%), Asian (10%), and Hispanic (6%). The sample included individuals from various educational backgrounds, with 52% holding a high school diploma, 30% holding a bachelor's degree, and 18% holding a postgraduate degree. The JCTI-SAT subsample comprised high school seniors and college students under the age of 23.

Materials

The Jouve-Cerebrals Test of Induction (JCTI) is a 52-item, multiple-choice test measuring inductive reasoning ability. The test has been used in previous research and has demonstrated adequate psychometric properties (Jouve, 2010b; Jouve, 2010c). The SAT is a standardized test assessing college readiness, with scores ranging from 400 to 1600 (College Board, 2021). The test consists of two main sections: Math reasoning and Verbal reasoning. The Reynolds Intellectual Screening Test (RIST) is a brief cognitive screening instrument comprising two subtests from the Reynolds Intellectual Assessment Scales (RIAS): Guess What (GWH) and Odd Item Out (OIO) (Reynolds & Kamphaus, 2003).

Procedures

Participants were recruited through online forums, websites, social media, and educational institutions. They completed the JCTI online via a secure testing platform. Upon completion of the test, participants were asked to provide demographic information and self-reported SAT scores if available. A subset of the sample (N = 23) also completed the RIST in a laboratory setting for the concurrent validity analysis. All data were anonymized and securely stored to ensure participants' confidentiality, in line with American Psychological Association (2017) recommendations.

Statistical Methods

Data were analyzed using Microsoft Excel. Descriptive statistics were calculated for demographic variables, JCTI scores, SAT scores, and RIST scores. The reliability of the JCTI was assessed using Cronbach's (1951) alpha coefficient. Item Response Theory (IRT) analysis was conducted using the three-parameter logistic model (3PLM) to evaluate item quality and construct validity (Hambleton et al., 1991). The concurrent validity of the JCTI was assessed through Pearson correlation coefficients between JCTI scores and SAT scores, as well as between JCTI scores and RIST scores. Correlations were corrected for restriction of range using Thorndike's (1947) Formula 2.
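The reliability and range-restriction statistics described above have simple closed forms. The following dependency-free Python sketch is an illustrative reimplementation, not the study's actual Excel workbook; the item-response matrix is invented toy data.

```python
from math import sqrt
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows, each holding k item scores."""
    k = len(scores[0])
    item_vars = sum(variance(col) for col in zip(*scores))  # sum of item variances
    total_var = variance([sum(row) for row in scores])      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

def sem(sd_total, alpha):
    """Standard error of measurement from the total-score SD and reliability."""
    return sd_total * sqrt(1 - alpha)

def thorndike_formula2(r, sd_unrestricted, sd_restricted):
    """Thorndike's (1947) Case 2 correction of r for restriction of range."""
    u = sd_unrestricted / sd_restricted
    return (r * u) / sqrt(1 - r * r + r * r * u * u)

# Toy item-response matrix: 4 respondents x 3 dichotomous items (hypothetical data).
responses = [[1, 1, 1], [1, 1, 0], [0, 1, 0], [0, 0, 0]]
alpha = cronbach_alpha(responses)  # 0.75 for this toy matrix
```

An observed correlation computed in a sample with a smaller score spread than the reference population corrects upward under `thorndike_formula2`; when the two standard deviations are equal, the correlation is returned unchanged.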

Results

Statistical Analyses and Assumptions

The statistical analyses employed to test the research hypotheses focused on correlational analyses to examine the relationships between the Jouve-Cerebrals Test of Induction (JCTI) and other measures. These analyses included Pearson's correlation coefficients and Item Response Theory (IRT) analysis with the three-parameter logistic model (3PLM). Assumptions made about the data included the normal distribution of scores and the linearity of relationships between variables.
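The 3PLM item characteristic curve used in these analyses has a closed form. The sketch below illustrates it with arbitrary parameter values; these are not parameter estimates from the JCTI data.

```python
from math import exp

def p_3pl(theta, a, b, c):
    """3PLM probability of a correct response: a = discrimination,
    b = difficulty, c = pseudo-guessing lower asymptote, theta = latent ability."""
    return c + (1 - c) / (1 + exp(-a * (theta - b)))

# At theta == b the curve passes through its midpoint (1 + c) / 2.
p_mid = p_3pl(0.0, 1.2, 0.0, 0.2)   # midpoint, here (1 + 0.2) / 2 = 0.6
# Far below the difficulty, the curve flattens at the guessing floor c.
p_low = p_3pl(-6.0, 1.2, 0.0, 0.2)  # close to 0.2
```

The lower asymptote c is what distinguishes the 3PLM from the one- and two-parameter models: it acknowledges that low-ability examinees can still answer multiple-choice items correctly by guessing.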

Results Presentation

The current version of the JCTI demonstrated good reliability, with a Cronbach's Alpha coefficient of .90 (N = 2,306), and an acceptable standard error of measurement (SEm) of 2.99. The IRT analysis using the 3PLM displayed satisfactory Item Characteristic Curves (ICC), although some items could be improved or replaced (Figure 1). A set of 15 new items was added to the test for further analysis.

 

Figure 1. Jouve-Cerebrals Test of Induction (JCTI) Item Characteristic Curves (ICC) for the 3PLM

 

In the concurrent validity analysis, the JCTI showed significant correlations with Scholastic Aptitude Test (SAT) scores: .79 with the SAT Composite, .84 with SAT Math (M) reasoning, and .38 with SAT Verbal (V) reasoning (N = 63). Figure 2 illustrates the relationship between the JCTI and SAT M.

 

Figure 2. Relationship Between Jouve-Cerebrals Test of Induction Scores and SAT Math Reasoning Subscale Scores: A Scatter Plot Analysis

 

Additionally, the JCTI was examined in relation to the Reynolds Intellectual Screening Test (RIST), revealing high correlations of .94 with the RIST Index Standard Score (ISS), .90 with the Guess What (GWH) subtest, and .89 with the Odd Item Out (OIO) subtest (N = 23).

In Table 1, we report correlations between the JCTI and SAT scores, including composite, mathematical, and verbal subscales, as well as RIST scores, including Index Standard Score, Odd Item Out, and Guess What subtests. The p-values are provided to indicate the statistical significance of the correlations.

 

Table 1. Correlations between the Jouve-Cerebrals Test of Induction (JCTI), the Scholastic Assessment Test (SAT), and the Reynolds Intellectual Screening Test (RIST)
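The coefficients and p-values reported above rest on standard Pearson machinery. A minimal pure-Python sketch follows; the paired scores are invented example values, not data from the study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def t_statistic(r, n):
    """t value for testing r against zero with n paired observations (df = n - 2)."""
    return r * sqrt((n - 2) / (1 - r * r))

# Hypothetical paired scores, for illustration only:
jcti = [1, 2, 3, 4, 5]
sat_m = [1, 2, 3, 5, 4]
r = pearson_r(jcti, sat_m)       # 0.9 for these toy values
t = t_statistic(r, len(jcti))    # referred to a t distribution with n - 2 df
```

The t value is then compared against a t distribution with n - 2 degrees of freedom to obtain the two-tailed p-value; with samples as small as N = 23, even large coefficients carry wide confidence intervals.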

Interpretation and Significance of Results

The high correlations observed between the JCTI and other intelligence measures, particularly nonverbal reasoning tests like the SAT M and RIST OIO subtest, support the research hypothesis that the JCTI measures a part of the fluid intelligence construct. The lower correlation between the JCTI and the SAT V reasoning further supports the notion that the JCTI is more closely related to fluid intelligence than to crystallized intelligence.

The unexpectedly high correlation between the JCTI and the GWH subtest, however, may be due to sample bias, as the two measures differ substantially in content and objectives. Further analysis is necessary to refine this observation and better understand the relationship between the JCTI and the GWH subtest.

Limitations

Several limitations may have affected the results of this study, including sample size, selection bias, and methodological limitations. The sample sizes used in the analyses, particularly for the RIST (N = 23), were relatively small, potentially limiting the generalizability of the findings. Additionally, potential biases in the samples, such as reliance on self-reported scores for SAT (N = 63), age, education, or other factors, could have influenced the results. Further research with larger and more diverse samples is needed to address these limitations and provide a more comprehensive understanding of the JCTI's validity and relationship with other intelligence measures.

Discussion

The results of this study have several implications for theory, practice, and future research. Firstly, the findings support the JCTI as a valid measure of inductive reasoning and fluid intelligence, consistent with previous research in the field (e.g., Cattell, 1963; Horn & Cattell, 1966). This suggests that the JCTI could be a valuable tool for researchers and practitioners interested in assessing fluid intelligence and related cognitive abilities.

The study also highlights the need for further refinement and improvement of the JCTI. The inclusion of the 15 new items and the identification of potential weaknesses in some existing items suggest that continued development of the test is necessary to improve its psychometric properties (Anastasi & Urbina, 1997) and enhance its usefulness in various contexts.

In terms of practice, the strong relationship between the JCTI and SAT Math reasoning (College Board, 2021) suggests that the JCTI could be a useful tool for educational and vocational guidance, particularly in contexts where mathematical and abstract reasoning skills are important (Deary et al., 2007). Furthermore, the JCTI may have potential applications in cognitive training programs, as it could help identify areas for improvement and track progress over time (Jaeggi et al., 2008).

Future research should focus on addressing the limitations of the current study, such as increasing sample size and addressing potential biases in the sample (Cohen, 1992). Researchers should also explore the relationship between the JCTI and the GWH subtest of the RIST (Reynolds & Kamphaus, 2003), as the unexpectedly high correlation between these measures warrants further investigation.

Additionally, future studies could examine the JCTI's ability to predict performance in various educational and occupational settings (Lubinski, 2004), as well as its potential to serve as an outcome measure in cognitive intervention programs (Shipstead et al., 2012). Longitudinal research could also help to establish the test-retest reliability of the JCTI and examine its sensitivity to changes in cognitive abilities over time (Salthouse, 2004, 2010).

Overall, this study provides evidence supporting the JCTI as a valid and reliable measure of inductive reasoning and fluid intelligence. The significant correlations observed between the JCTI and other intelligence measures, particularly nonverbal reasoning tests, indicate that the JCTI could be a valuable tool for researchers and practitioners interested in assessing fluid intelligence and related cognitive abilities (Cattell, 1963; Horn & Cattell, 1966).

However, the study also highlights the need for further refinement and improvement of the JCTI, as well as the importance of addressing limitations such as sample size and potential biases in the sample (Cohen, 1992). Future research should focus on these issues, as well as exploring the broader implications and applications of the JCTI in various educational, occupational, and intervention contexts.

Conclusion

In summary, this study provided evidence supporting the JCTI as a valid and reliable measure of inductive reasoning and fluid intelligence. The findings highlighted the potential of the JCTI as a valuable tool for researchers and practitioners in assessing cognitive abilities, and its strong relationship with the SAT Math reasoning suggests its applicability in educational and vocational guidance.

These results have implications for both theory and practice, such as the continued refinement and improvement of the JCTI, its potential use in cognitive training programs, and examination of its ability to predict performance in various settings. Nevertheless, the study faced limitations, including sample size and potential biases, which future research should address. Furthermore, the unexpectedly high correlation between the JCTI and the GWH subtest of the RIST calls for further investigation.

Future studies should explore the broader applications of the JCTI in educational, occupational, and intervention contexts, and conduct longitudinal research to establish test-retest reliability and sensitivity to changes in cognitive abilities over time. In conclusion, this study reinforces the importance of refining and expanding the JCTI, while emphasizing the need for further research to address limitations and uncover the test's broader implications and applications.

References

Ackerman, P. L. (2000). Domain-specific knowledge as the "dark matter" of adult intelligence: Gf/Gc, personality and interest correlates. Journal of Gerontology: Psychological Sciences, 55B(2), 69-84. https://doi.org/10.1093/geronb/55.2.P69

American Psychological Association. (2017). Ethical principles of psychologists and code of conduct (2002, amended effective June 1, 2010, and January 1, 2017). Retrieved from https://www.apa.org/ethics/code

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall. 

Baker, F. B. (2001). The basics of item response theory. College Park, MD: ERIC Clearinghouse on Assessment and Evaluation.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511571312

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1–22. https://doi.org/10.1037/h0046743

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159. https://doi.org/10.1037/0033-2909.112.1.155

College Board. (2021). SAT Suite of Assessments. Retrieved from https://collegereadiness.collegeboard.org/sat

Creswell, J. W. (2014). Research Design: Qualitative, Quantitative and Mixed Methods Approaches (4th ed.). Thousand Oaks, CA: Sage. 

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334. https://doi.org/10.1007/BF02310555

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281-302. https://doi.org/10.1037/h0040957

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21. https://doi.org/10.1016/j.intell.2006.02.001

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410605269

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Goel, V., & Dolan, R. J. (2004). Differential involvement of left prefrontal cortex in inductive and deductive reasoning. Cognition, 93(3), B109–B121. https://doi.org/10.1016/j.cognition.2004.03.001 

Guilford, J. P. (1959). Three faces of intellect. American Psychologist, 14(8), 469–479. https://doi.org/10.1037/h0046827

Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Thousand Oaks, CA: Sage Publications, Inc.

Hayes, B. K., Heit, E., & Swendsen, H. (2010). Inductive reasoning. Wiley Interdisciplinary Reviews: Cognitive Science, 1(2), 278-292. https://doi.org/10.1002/wcs.44 

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology, 57(5), 253-270. https://doi.org/10.1037/h0023816

Jaeggi, S. M., Buschkuehl, M., Jonides, J., & Perrig, W. J. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences of the United States of America, 105(19), 6829–6833. https://doi.org/10.1073/pnas.0801268105 

Jouve, X. (2010a). Evaluating the Reliability and Validity of the TRI52: A Computerized Nonverbal Intelligence Test. Retrieved from https://cogniqblog.blogspot.com/2010/01/test-of-inductive-reasoning-tri52-16-68.html.

Jouve, X. (2010b). Uncovering the Underlying Factors of the Jouve-Cerebrals Test of Induction and the Scholastic Assessment Test-Recentered. Retrieved from https://cogniqblog.blogspot.com/2010/04/principal-components-factor-analysis.html.

Jouve, X. (2010c). Evaluating the Reliability of the Jouve Cerebrals Test of Induction: A Psychometric Analysis. Retrieved from https://cogniqblog.blogspot.com/2010/01/reliability-coefficients-and-standard.html.

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. The Behavioral and brain sciences, 40, e253. https://doi.org/10.1017/S0140525X16001837 

Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Menlo Park, CA: Addison-Wesley.

Lubinski, D. (2004). Introduction to the Special Section on Cognitive Abilities: 100 Years After Spearman's (1904) "'General Intelligence,' Objectively Determined and Measured". Journal of Personality and Social Psychology, 86(1), 96–111. https://doi.org/10.1037/0022-3514.86.1.96 

McGrew, K. S. (2009). CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence, 37(1), 1-10. https://doi.org/10.1016/j.intell.2008.08.004

Nisbett, R. E., Aronson, J., Blair, C., Dickens, W., Flynn, J., Halpern, D. F., & Turkheimer, E. (2012). Intelligence: New findings and theoretical developments. American Psychologist, 67(2), 130-159. https://doi.org/10.1037/a0026699

Reynolds, C. R., & Kamphaus, R. W. (2003). Reynolds Intellectual Assessment Scales (RIAS) and the Reynolds Intellectual Screening Test (RIST), Professional Manual. Lutz, FL: Psychological Assessment Resources.

Salthouse, T. A. (2004). What and when of cognitive aging. Current Directions in Psychological Science, 13(4), 140-144. https://doi.org/10.1111/j.0963-7214.2004.00293.x

Salthouse T. A. (2010). Selective review of cognitive aging. Journal of the International Neuropsychological Society : JINS, 16(5), 754–760. https://doi.org/10.1017/S1355617710000706 

Schneider, W. J., & McGrew, K. S. (2018). The Cattell–Horn–Carroll theory of cognitive abilities. In D. P. Flanagan & E. M. McDonough (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 73–163). New York: The Guilford Press.

Shipstead, Z., Redick, T. S., & Engle, R. W. (2012). Is working memory training effective?. Psychological bulletin, 138(4), 628–654. https://doi.org/10.1037/a0027473 

Spearman, C. (1904). "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2), 201-292. https://doi.org/10.2307/1412107

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge University Press. 

Thorndike, R. L. (1947). Research problems and techniques (Report No. 3, AAF Aviation Psychology Program Research Reports). Washington, DC: U.S. Government Printing Office.

Thurstone, L. L. (1938). Primary mental abilities. Chicago, IL: University of Chicago Press.


Author: Jouve, X.
Publication: 2023