Psychometric Properties of the Jouve Cerebrals Word Similarities Test: An Evaluation of Vocabulary and Verbal Reasoning Abilities

Abstract

The Jouve Cerebrals Word Similarities (JCWS) test is a self-administered verbal test designed to evaluate vocabulary and reasoning in a verbal context. This paper analyzes the psychometric properties of the JCWS, focusing on the first subtest, which is based on the Cerebrals Contest's Word Similarities (CCWS). The CCWS demonstrated high reliability, with a Cronbach's alpha coefficient of .960, and steep discrimination curves for each item. The validity of the CCWS was assessed through correlations with WAIS scores, which identified it as a measure of verbal-crystallized ability. The JCWS subtests also demonstrated high internal consistency, with a Spearman-Brown prophecy coefficient of .988. The paper concludes that the JCWS is a reliable measure of vocabulary and reasoning in a verbal context. However, limitations of this study include the small samples used to assess internal consistency and concurrent validity for the whole JCWS. Future research should address these limitations and further evaluate the validity of the JCWS.

Keywords: JCWS, verbal ability, reasoning, psychometric properties, reliability, validity

The measurement of psychological constructs through psychometric testing forms the bedrock of psychological assessment. Psychometrics encompasses the quantification of psychological variables such as intelligence, personality, and cognitive abilities (Gregory, 2011; Nunnally & Bernstein, 1994). Since the inception of formal intelligence testing, the field has evolved through methodological advances, yielding increasingly sophisticated instruments (Embretson & Reise, 2000). The current study examines the Jouve Cerebrals Word Similarities (JCWS; Jouve, 2023) test, a self-administered tool devised to assess vocabulary and verbal reasoning abilities, with particular emphasis on its psychometric robustness.

The purpose of this study is to examine the psychometric properties of the JCWS, with a focus on the first subtest, which is based on the Cerebrals Contest's Word Similarities (CCWS). The CCWS was first proposed during the 2010 Contest of the Cerebrals Society and has been shown to be highly reliable, with a Cronbach's alpha coefficient of .960.

In psychometric theory, reliability denotes the stability and consistency of scores across time and different test administrations (Nunnally & Bernstein, 1994). The internal consistency of an instrument, assessed via Cronbach’s alpha, indicates the degree of interrelation among test items. The JCWS, constructed to measure verbal reasoning, demonstrates high internal consistency, with a Spearman-Brown split-half coefficient of .988, suggesting substantial reliability and inter-item correlation (Cronbach, 1951).
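The split-half logic behind such a coefficient can be illustrated with a short sketch: the items are divided into two halves, the half-scores are correlated, and the Spearman-Brown correction projects that correlation to the full test length. The response matrix below is hypothetical, not JCWS data.

```python
# Illustrative split-half reliability with the Spearman-Brown correction.
# The response matrix below is hypothetical; it is NOT JCWS data.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Odd-even split, projected to full length via Spearman-Brown:
    r_full = 2 * r_half / (1 + r_half)."""
    odd = [sum(person[0::2]) for person in item_scores]
    even = [sum(person[1::2]) for person in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

# Four hypothetical examinees answering four dichotomous items:
responses = [[1, 1, 1, 1], [1, 1, 0, 1], [0, 1, 0, 0], [0, 0, 0, 0]]
reliability = split_half_reliability(responses)  # 0.9 for this matrix
```

Note that the correction assumes the two halves are parallel forms; in practice the split (odd-even here) can affect the estimate.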

In addition to reliability, validity, which refers to the degree to which a test measures what it is intended to measure, is an equally important psychometric property. While this study focuses on the reliability of the JCWS, its validity has been partially assessed through correlations with WAIS scores, which identified it as a measure of verbal-crystallized ability (Wechsler, 2008).

In the next section, we will provide a brief literature review of relevant theories and instruments related to the study's focus, as well as a justification for the selection of specific materials or test items.

Literature Review

The assessment of verbal reasoning and vocabulary forms a key domain within intelligence research. Cattell’s (1963) dichotomy between fluid and crystallized intelligence remains foundational in understanding verbal-crystallized ability. While fluid intelligence concerns problem-solving and adaptability, crystallized intelligence encapsulates knowledge-based capacities, including vocabulary and general information (Cattell, 1963). The JCWS, which focuses on verbal-crystallized abilities, is aligned with Cattell’s model, positioning vocabulary knowledge as a central measure of overall cognitive function (Carroll, 1993).

External Measures

The structure of the JCWS builds upon well-established intelligence measures, particularly Thorndike’s CAVD (Completion, Analogies, Vocabulary, Directions), which is known for its focus on verbal reasoning. The CAVD was a key influence in the development of the JCWS, as both instruments assess word similarities using comparable methodologies (Thorndike, 1927; Deary et al., 2007). Additionally, the JCWS mirrors the Wechsler Adult Intelligence Scale (WAIS) in its evaluation of verbal intelligence, employing tasks such as analogy and pattern recognition that engage higher-order cognitive processes (Wechsler, 2008; Kaufman & Lichtenberger, 2006).

The WAIS, used to validate the CCWS, has a long-standing history in psychological testing and has undergone multiple revisions to maintain its accuracy and relevance. One of its primary strengths is its ability to assess verbal intelligence, a crucial aspect of cognitive functioning that is closely linked to academic and occupational outcomes.

The latest edition, the WAIS-IV, includes several subtests aimed at measuring crystallized intelligence. The Vocabulary subtest evaluates an individual’s ability to define and correctly use words, while the Information subtest measures general knowledge and understanding of the world (Kaufman & Lichtenberger, 2006). The Comprehension subtest examines practical reasoning and the application of common sense to solve everyday problems (Wechsler, 2008). The Similarities subtest assesses an individual’s capacity to recognize and analyze relationships between concepts and ideas (Kaufman & Lichtenberger, 2006).

Research has consistently demonstrated the reliability and validity of these WAIS subtests as measures of verbal and crystallized intelligence (Benson et al., 2010; Deary et al., 2007; Goff & Ackerman, 1992; Salthouse & Kersten, 1993). Widely utilized in both clinical practice and research, the WAIS has greatly enhanced our understanding of human intelligence.

Statistical Framework

Item Response Theory (IRT) is a statistical framework for analyzing the properties of test items, which has become increasingly popular for its robustness (Embretson & Reise, 2000). IRT models are particularly useful in analyzing the psychometric properties of tests, such as item difficulty and discrimination, as well as overall test reliability and validity. One of the advantages of IRT is that it can provide a more detailed analysis of individual test items, rather than just analyzing overall test scores.

In this study, IRT was used to analyze the discrimination curves for each item in the CCWS. Discrimination curves indicate how well an item distinguishes between individuals at different levels of ability: steep curves mark items that discriminate effectively, whereas flat curves mark items that do not (Lord, 1980).
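As a point of reference, the two-parameter logistic (2PL) curve behind such plots models the probability of a correct response as a function of ability (theta), item discrimination (a), and item difficulty (b). The parameter values in this sketch are illustrative, not CCWS estimates.

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct response
    at ability theta, given discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A steep item (a = 2.5) versus a flat item (a = 0.5), same difficulty b = 0:
# the steep item separates abilities -1 and +1 far more sharply.
steep_gap = icc_2pl(1.0, 2.5, 0.0) - icc_2pl(-1.0, 2.5, 0.0)  # ~0.85
flat_gap = icc_2pl(1.0, 0.5, 0.0) - icc_2pl(-1.0, 0.5, 0.0)   # ~0.24
```

The gap in response probability across the ability range is what a "steep discrimination curve" conveys visually.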

Test Construction

The JCWS consists of three subtests, each targeting different types of word similarities, offering a thorough assessment of verbal ability. The first subtest, modeled after the CCWS, evaluates basic word similarity by requiring individuals to identify common features between two words. This skill is fundamental to verbal reasoning (Deary et al., 2007). The second subtest focuses on analogical reasoning, where participants must discern relationships between pairs of words and apply these relationships to new word pairs. This measures the ability to reason and form conceptual links (Carroll, 1993; Kaufman & Lichtenberger, 2006). The third subtest assesses sequence-based word similarity, asking participants to recognize patterns in a series of words and apply the same relationship to a new set. This requires higher-order cognitive abilities, including pattern recognition and logical reasoning (Gardner, 1983; Sternberg, 1985).

The selection of these three subtests was intended to capture a wide range of verbal abilities. Previous research on psychometric assessments highlights the importance of evaluating diverse cognitive skills to gain a comprehensive understanding of an individual's intellectual profile (Schmidt & Hunter, 1998). By incorporating assessments of simple word similarity, analogical reasoning, and sequential pattern recognition, the JCWS offers a broad evaluation that can highlight both strengths and weaknesses in verbal reasoning. These abilities have been closely associated with general intelligence, particularly through skills like recognizing abstract concepts, understanding word meanings, and making connections between them (Deary et al., 2007). Carroll (1993) argues that analogical reasoning is a complex cognitive process involving the recognition of commonalities across different concepts, a skill critical for problem-solving. Kaufman and Lichtenberger (2006) view this as a higher-order skill essential for applying conceptual knowledge to novel problems. Gardner (1983) emphasizes the role of pattern recognition and logical reasoning in overall intelligence, while Sternberg (1985) underscores the importance of identifying conceptual relationships and using them in problem-solving. These theoretical perspectives support the JCWS’s utility in evaluating both verbal ability and broader cognitive functioning.

Method

Research Design

This study employed a correlational research design to examine the psychometric properties of the JCWS, with a focus on the first subtest, which is based on the CCWS. Correlational research designs are useful in establishing relationships between variables without manipulating them (Creswell, 2014). In this study, the relationship between the scores on the JCWS and CCWS was analyzed to examine the internal consistency and reliability of the JCWS.

Participants

The participants in this study consisted of two different samples. The first sample, which completed the CCWS, comprised 157 adults aged 18 years and above. The second sample, which completed the JCWS, comprised 24 adults aged 18 years and above. Both samples were recruited through social media, online forums, and other internet media. No specific inclusion or exclusion criteria were applied to the samples. It should be noted that both the CCWS and JCWS are self-administered and untimed tests, and participants completed them in their own homes or preferred locations.

Materials

The CCWS served as the basis for the JCWS; both are self-administered verbal tests designed to evaluate vocabulary and reasoning in a verbal context. The CCWS consists of 50 items, each presenting two words whose similarity the respondent must identify. The JCWS consists of three subtests, the first of which comprises 20 items based on the CCWS. The second subtest assesses similarity between words in the context of an analogy, and the third in the context of a sequence.

Procedures

Participants were provided with a link to an online survey that included the CCWS and JCWS. They were instructed to take as much time as needed to complete the tests. The order of the tests was not counterbalanced. The survey was implemented in HTML and PHP.

Data Analysis

Descriptive statistics, including means, standard deviations, and ranges, were computed for the JCWS and CCWS scores. Internal consistency was assessed using Cronbach's alpha coefficient, with values of 0.80 or higher indicating adequate internal consistency (Nunnally & Bernstein, 1994). To analyze the relationship between the JCWS and CCWS scores, Pearson correlation coefficients were computed. Discrimination curves were analyzed using the graded response model, a type of Item Response Theory (IRT) model that can be used to analyze the properties of test items (Samejima, 1969). All data analyses were conducted using Microsoft Excel and the EIRT add-on (Germain et al., 2007).
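For illustration, Cronbach's alpha can be computed directly from a person-by-item score matrix; the formula is standard (Cronbach, 1951), and the matrix shown here is hypothetical rather than study data.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance),
    where item_scores lists each person's k item scores."""
    k = len(item_scores[0])

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = sum(variance([p[i] for p in item_scores]) for i in range(k))
    total_var = variance([sum(p) for p in item_scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-person, 4-item dichotomous response matrix (not study data):
alpha = cronbach_alpha([[1, 1, 1, 1], [1, 1, 0, 1], [0, 1, 0, 0], [0, 0, 0, 0]])
```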

Ethical Considerations

Informed consent was obtained from all participants prior to their participation in the study. Participants were informed that their participation was voluntary, and they could withdraw from the study at any time without penalty. Participants were also informed that their data would be kept confidential and used only for research purposes. The researchers also ensured that the study adhered to the ethical guidelines set forth by the American Psychological Association (APA, 2017).

Results

Psychometric Properties of CCWS

The psychometric properties of the CCWS were analyzed to evaluate its internal consistency and reliability as a measure of verbal ability. The Cronbach's alpha coefficient for the CCWS was .96, indicating high internal consistency. This suggests that the CCWS items measure a common construct and that the test is a reliable measure of verbal ability.

An Item Response Theory (IRT) analysis was also conducted to further explore the properties of the CCWS. The IRT 2PLM (van der Linden & Hambleton, 1997) showed that each item had steep discrimination curves (cf. Figure 1), which suggests that the CCWS items were highly effective in distinguishing between participants with different levels of verbal ability. The steep discrimination curves also indicate that the CCWS is able to effectively differentiate between participants with high and low levels of verbal ability.

Figure 1. Cerebrals Contest Word Similarities Item Characteristic Curves, 2PLM (Bayes Modal Estimator, N = 157)

To evaluate the validity of the CCWS as a measure of verbal ability, correlations were computed between the CCWS and various WAIS subtests. Data from a small sample of 17 participants, who self-reported their WAIS scores, were analyzed. The CCWS showed a strong correlation with the WAIS Vocabulary subtest (r = .71), indicating that it may be a reliable measure of vocabulary knowledge. Similarly, a high correlation with the WAIS Information subtest (r = .88) supports its potential as an indicator of general knowledge. Additionally, the correlation between the CCWS and the WAIS Verbal IQ (VIQ) was also strong (r = .79; see Figure 2). However, the correlation with the WAIS Similarities subtest was moderate (r = .58), suggesting that the CCWS may be less effective in assessing abstract reasoning. Given the other correlations, a stronger relationship with the Similarities subtest would have been expected based on content validity. It is important to acknowledge the limitations posed by the small sample size and reliance on self-reported scores, which may affect the generalizability of these results.
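The coefficients reported above are ordinary Pearson product-moment correlations. A minimal sketch of the computation follows, using hypothetical paired scores rather than the study's WAIS data.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired scores (test total vs. criterion), not the study's data:
test_scores = [10, 14, 18, 22, 26]
criterion = [95, 100, 108, 115, 121]
r = pearson_r(test_scores, criterion)  # close to +1 for this near-linear pair
```

With N = 17, as in the validity sample, such coefficients carry wide confidence intervals, which is why the text urges caution.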

Figure 2. Scatter Plot, Cerebrals Contest Word Similarities vs. WAIS VIQ (N = 17), r = .79 (p < .05)

Psychometric Properties of JCWS

The internal consistency and reliability of the JCWS were evaluated using the Spearman-Brown prophecy coefficient, which yielded a value of .988. This high coefficient indicates strong internal consistency and suggests that the JCWS is a reliable tool for assessing vocabulary and verbal reasoning abilities.
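The Spearman-Brown prophecy formula underlying such coefficients predicts reliability when a test's length is multiplied by a factor n (the familiar split-half correction is the n = 2 case). The values in this sketch are illustrative only, not JCWS results.

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when test length is multiplied by length_factor n:
    r_new = n * r / (1 + (n - 1) * r)."""
    n = length_factor
    return n * reliability / (1 + (n - 1) * reliability)

# Doubling a test whose half-length reliability is .80 predicts 8/9 (about .889):
doubled = spearman_brown(0.80, 2)
# Halving a long, highly reliable test still predicts high reliability:
halved = spearman_brown(0.988, 0.5)
```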

Intercorrelations between the subtests were also analyzed to further explore internal consistency. The correlation between WS1 and WS2 was .947, while WS1 and WS3 showed a correlation of .918. The correlation between WS2 and WS3 was similarly high, at .947. These strong intercorrelations support the overall reliability of the JCWS as a consistent measure of verbal reasoning and vocabulary skills.

However, the small sample size of 24 participants presents a limitation, as it may affect the generalizability of the findings and reduce the statistical power of the analyses. Therefore, these results should be interpreted with caution, and further research with larger and more diverse samples is necessary to fully establish the psychometric robustness of the JCWS.

Discussion

This study set out to examine the psychometric properties of the Jouve Cerebrals Word Similarities (JCWS) test, designed to measure vocabulary and verbal reasoning abilities. The findings indicate that the JCWS demonstrates strong reliability as a measure of verbal ability, with high internal consistency, as reflected by the Spearman-Brown prophecy coefficient of .988 (van der Linden & Hambleton, 1997).

Furthermore, the high intercorrelations between subtests underscore the JCWS’s consistency as a tool for assessing vocabulary and reasoning within a verbal context (Aiken, 1999). This reliability aligns with established psychometric standards, further reinforcing its robustness (Nunnally & Bernstein, 1994).

In light of the study’s hypotheses and prior research, these results lend support to the JCWS’s validity and utility as an assessment of verbal ability (Wechsler, 2008). The test’s high internal consistency suggests that it can be effectively employed in various settings, including academic and clinical environments (Deary et al., 2010).

Additionally, the observed correlations between the CCWS and the WAIS Vocabulary and Information subtests bolster the validity of key aspects of the JCWS as a measure of verbal-crystallized intelligence.

The findings indicate that the JCWS could be a valuable tool for assessing verbal ability in various contexts, especially in situations requiring a self-administered measure of verbal skills (Deary et al., 2010).

However, this study's limitations, including the small sample size and the focus on internal consistency and reliability, must be acknowledged. Further research with larger samples and additional validity measures is needed to fully assess the JCWS’s psychometric properties (Nunnally & Bernstein, 1994).

Although the study demonstrates that the JCWS is a reliable measure with strong internal consistency, further validation is necessary to confirm its effectiveness in different settings. The JCWS shows potential as a useful tool for verbal assessment, complementing existing measures in academic and clinical environments.

Future studies should focus on exploring the external and criterion-related validity of the JCWS using larger and more diverse samples. Additionally, examining test-retest reliability and sensitivity to change over time would enhance understanding of its utility. Research could also investigate potential adaptations of the JCWS for different populations or contexts (Tabachnick & Fidell, 2013; McCoach et al., 2013).

Conclusion

This study examined the psychometric properties of the Jouve Cerebrals Word Similarities (JCWS) test, designed to measure vocabulary and verbal reasoning. The findings indicate that the JCWS is a reliable instrument, demonstrating high internal consistency. Additionally, the strong correlations between key components of the JCWS and the WAIS Vocabulary and Information subtests further support its validity as a measure of verbal ability.

These results have important theoretical and practical implications, suggesting that the JCWS could be a valuable tool for assessing verbal ability across various contexts. However, further research with larger and more diverse populations is required to comprehensively evaluate its psychometric properties, particularly its external and criterion-related validity.

While the findings are promising, this study's limitations must be noted, particularly the small sample size and the focus solely on internal consistency and reliability. Future studies should consider adapting the JCWS for broader applications and exploring its potential use with different populations or in varied contexts.

References

Aiken, L. R. (1999). Psychological testing and assessment (10th ed.). Englewood Cliffs, NJ: Prentice Hall.

American Psychological Association. (2017). Ethical principles of psychologists and code of conduct (2002, amended effective June 1, 2010, and January 1, 2017). Washington, DC: Author.

Benson, N. F., Hulac, D. M., & Kranzler, J. H. (2010). Independent examination of the Wechsler Adult Intelligence Scale–Fourth Edition (WAIS–IV): What does the WAIS–IV measure? Psychological Assessment, 22(1), 121-130. https://doi.org/10.1037/a0017767

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511571312

Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: A critical experiment. Journal of Educational Psychology, 54(1), 1-22. https://doi.org/10.1037/h0046743

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, CA: Sage Publications.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334. https://doi.org/10.1007/BF02310555

Deary, I. J., Johnson, W., & Houlihan, L. M. (2009). Genetic foundations of human intelligence. Human Genetics, 126(1), 215-232. https://doi.org/10.1007/s00439-009-0655-4

Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21. https://doi.org/10.1016/j.intell.2006.02.001

Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781410605269

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Goff, M., & Ackerman, P. L. (1992). Personality-intelligence relations: Assessment of typical intellectual engagement. Journal of Educational Psychology, 84(4), 537-552.

Gregory, R. J. (2011). Psychological testing: History, principles, and applications (7th ed.). New York: Pearson.

Hambleton, R. K., & Swaminathan, H. (1985). Item response theory principles and applications. Boston, MA: Kluwer-Nijhoff Publishing.

Jouve, X. (2023). Jouve-Cerebrals Word Similarities (JCWS). Cogn-IQ Cognitive Assessments. https://pubscience.org/ps-1mVDO-c3d0e1-eFWj

Kaufman, A. S., & Lichtenberger, E. O. (2006). Assessing adolescent and adult intelligence (3rd ed.). Hoboken, NJ: John Wiley & Sons.

Lichtenberger, E. O., Kaufman, A. S., & Kaufman, N. L. (2012). Essentials of WAIS-IV assessment (2nd ed.). Hoboken, NJ: John Wiley & Sons.

Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Erlbaum. https://doi.org/10.4324/9780203056615

McCoach, D. B., Gable, R. K., & Madura, J. P. (2013). Instrument development in the affective domain: School and corporate applications (3rd ed.). Springer Science & Business Media. https://doi.org/10.1007/978-1-4614-7135-6

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.

Salthouse, T. A., & Kersten, A. W. (1993). Decomposing adult age differences in symbol arithmetic. Memory & Cognition, 21(5), 699-710. https://doi.org/10.3758/BF03197200

Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores (Psychometrika Monograph Supplement No. 17). Psychometric Society.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274. https://doi.org/10.1037/0033-2909.124.2.262

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge University Press.

Sternberg, R. J. (2008). WICS: A model of educational leadership. The Educational Forum, 68(2), 108-114. https://doi.org/10.1080/00131720408984617

Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Boston, MA: Pearson.

Thorndike, E. L. (1927). The measurement of intelligence. New York: Teachers College, Columbia University.

van der Linden, W. J., & Hambleton, R. K. (1997). Handbook of modern item response theory. New York: Springer.

Wechsler, D. (2008). Wechsler Adult Intelligence Scale (4th ed.). San Antonio, TX: Pearson.

Author: Jouve, X.
Publication: 2023