Word Similarities - Updated Norms as of 13 January 2024

This table, updated on 13 January 2024, presents the equivalencies between raw scores and standard scores for adults aged 30–39. These equivalencies are derived through Z-score equating between the Jouve Cerebrals Word Similarities (JCWS) and the I am A Word (IAW) Verbal Ability Index (VAI). Z-score equating is a statistical method used to place scores from different tests onto a common scale (Kolen & Brennan, 2014). By transforming raw scores into Z-scores, each expressing how many standard deviations a score lies from the mean of its distribution, the method allows scores from different tests to be compared directly.
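As a minimal illustration, the transformation can be sketched in a few lines of Python. The JCWS linking-sample mean and standard deviation are not reported in this document, so jcws_mean and jcws_sd below are placeholders to be supplied from the linking data; the function name is ours, not part of either instrument.

```python
def z_equate(raw_jcws, jcws_mean, jcws_sd, target_mean=100.0, target_sd=15.0):
    """Map a JCWS raw score onto the IAW VAI standard-score scale
    (M = 100, SD = 15) via Z-score equating.

    jcws_mean and jcws_sd are placeholders: the linking sample's
    statistics are not reported in this document.
    """
    z = (raw_jcws - jcws_mean) / jcws_sd  # SDs from the JCWS mean
    return target_mean + z * target_sd    # same Z, re-expressed on the target scale
```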

The high correlation of .90 (N = 25) between the JCWS and the IAW VAI indicates a very strong linear relationship between the two measures of verbal ability, which makes Z-score equating a suitable method for establishing score equivalencies. The method nevertheless has limitations that must be acknowledged (Dorans, Pommerich, & Holland, 2007).

First, Z-score equating assumes that scores on both tests are normally distributed. If this assumption is not met, the equating might not accurately reflect the relationship between scores. In the context of the JCWS and IAW VAI, a skewed score distribution on either test could lead to inaccuracies in the equated scores.
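If the raw score distributions are available, this assumption can be screened before equating. The sketch below shows one conventional approach (it assumes SciPy is installed); it is an illustrative check, not part of the equating procedure itself.

```python
from scipy import stats

def normality_screen(scores, alpha=0.05):
    """Screen a score distribution for non-normality using the
    Shapiro-Wilk test and sample skewness."""
    statistic, p_value = stats.shapiro(scores)
    return {
        "shapiro_W": statistic,
        "p_value": p_value,
        "skewness": stats.skew(scores),   # markedly nonzero suggests skew
        "normality_rejected": p_value < alpha,
    }
```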

Another limitation concerns the sample on which the correlation is based. While a correlation of .90 is strong, it was estimated from a relatively small sample (N = 25), which raises questions about how well the equating generalizes to the full population of test-takers (Thorndike & Thorndike-Christ, 2010).

Additionally, Z-score equating does not account for potential differences in difficulty level or content between the JCWS and the IAW VAI. If one test is inherently more difficult or covers different aspects of verbal ability, equated scores might not fully capture these nuances.

Despite these limitations, Z-score equating provides a valuable tool for comparing scores across different tests, especially when direct comparisons are necessary, as in the case of JCWS and IAW VAI.

Standard scores quantify an individual's cognitive abilities relative to a normative population. Under the Stanford–Binet Fifth Edition (SB5) classification (Roid, 2003), the scale has a mean of 100 and a standard deviation of 15, and scores are described as follows:

140 and above: Very gifted or highly advanced
130–139: Gifted or very advanced
120–129: Superior
110–119: High average
90–109: Average
80–89: Low average
70–79: Borderline impaired or delayed
55–69: Mildly impaired or delayed
40–54: Moderately impaired or delayed
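For readers applying these bands programmatically, the classification reduces to a simple floor-based lookup. A sketch follows; the helper is our own, not part of the SB5 materials.

```python
# SB5 qualitative bands (Roid, 2003) as (minimum standard score, label).
SB5_BANDS = [
    (140, "Very gifted or highly advanced"),
    (130, "Gifted or very advanced"),
    (120, "Superior"),
    (110, "High average"),
    (90, "Average"),
    (80, "Low average"),
    (70, "Borderline impaired or delayed"),
    (55, "Mildly impaired or delayed"),
    (40, "Moderately impaired or delayed"),
]

def sb5_label(standard_score):
    """Return the SB5 qualitative description for a standard score."""
    for floor, label in SB5_BANDS:
        if standard_score >= floor:
            return label
    return "Below scale range"  # under 40: outside the listed bands
```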

Table 1
Standard Score to Total Raw Score Equivalencies (M = 100, SD = 15)

Standard Score (IAW VAI)    JCWS Raw Score    Qualitative Description
140+                        101+              Very gifted or highly advanced
130–139                     74–99             Gifted or very advanced
120–129                     49–71             Superior
110–119                     24–46             High average
100–109                     1–22              Average

Note. Standard scores are derived using Z-score equating and represent the individual's cognitive abilities relative to the normative population. These scores are based on the available test data and may be updated with further data collection and analysis.
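Table 1 itself can likewise be read as a floor-based lookup from JCWS raw scores to IAW VAI bands. The helper below is ours; note that raw scores falling in the table's gaps (e.g., 100, or 72–73) are mapped to the lower band under this reading, which is an assumption rather than something the table specifies.

```python
# Table 1 rows as (minimum JCWS raw score, IAW VAI standard-score band).
TABLE1 = [
    (101, "140+"),
    (74, "130-139"),
    (49, "120-129"),
    (24, "110-119"),
    (1, "100-109"),
]

def vai_band(raw_jcws):
    """Return the IAW VAI standard-score band for a JCWS raw score.

    Raw scores in the table's gaps (e.g., 72-73) fall to the lower
    band here, an assumption, since Table 1 leaves them unassigned.
    """
    for raw_floor, band in TABLE1:
        if raw_jcws >= raw_floor:
            return band
    return None  # below the tabulated range

# Example: vai_band(60) -> "120-129", the "Superior" band.
```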

In addition to the Z-score correspondence, the internal consistency reliability of the JCWS, as measured by Guttman's Lambda-6, is .98 (N = 61). Reliability of this level is crucial for sound score interpretation (Allen & Yen, 2002).

Guttman's Lambda-6 is a measure of reliability that focuses on the internal consistency of a test (Guttman, 1945). Like Cronbach's Alpha, it assesses how consistently the test items measure the construct the test is intended to measure (Cronbach, 1951). Lambda-6 is based on the squared multiple correlation of each item with the remaining items, estimating how much of each item's variance is predictable from the rest of the test. A high value such as .98 indicates that the items are highly consistent with one another in assessing verbal ability, suggesting that they reliably measure the same underlying construct.
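The item-level response data behind the reported .98 are not included here, but the statistic itself can be computed as follows; this is a generic sketch assuming a complete numeric item-score matrix with no missing responses.

```python
import numpy as np

def guttman_lambda6(scores):
    """Guttman's Lambda-6 for an (n_respondents, n_items) score matrix.

    lambda6 = 1 - sum_j var_j * (1 - SMC_j) / var_total, where SMC_j
    is the squared multiple correlation of item j with the other items.
    """
    scores = np.asarray(scores, dtype=float)
    cov = np.cov(scores, rowvar=False)            # item covariance matrix
    r_inv = np.linalg.inv(np.corrcoef(scores, rowvar=False))
    smc = 1.0 - 1.0 / np.diag(r_inv)              # squared multiple correlations
    item_var = np.diag(cov)
    total_var = cov.sum()                         # variance of the total score
    return 1.0 - np.sum(item_var * (1.0 - smc)) / total_var
```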

The relevance of this high reliability score in the context of the JCWS can be understood in several ways:

1) Confidence in Score Interpretation: The high Lambda-6 value indicates that the JCWS items consistently measure verbal ability, so individual scores can be interpreted with confidence as dependable estimates of that ability (Nunnally & Bernstein, 1994).

2) Stability of the Test Across Different Samples: The high Lambda-6 score suggests that the JCWS is likely to show consistent performance across different samples within the target demographic group (adults aged 30–39). This indicates the test's generalizability and reliability across a broader population (Anastasi & Urbina, 1997).

3) Suitability for High-Stakes Decisions: The high reliability as measured by Lambda-6 is particularly important for high-stakes decision-making, such as educational placement or employment. The strong internal consistency of the JCWS assures that the test results are dependable for making critical decisions (Messick, 1989).

4) Comparison with Other Measures: The reliability of the JCWS, as assessed by Guttman's Lambda-6, supports the validity of the Z-score equating process with the IAW VAI. The high internal consistency adds to the confidence in comparing and interpreting scores between these two measures, enhancing the utility of the score equivalencies presented in the table.

References

Allen, M. J., & Yen, W. M. (2002). Introduction to measurement theory. Waveland Press.

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Prentice Hall.

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555

Dorans, N. J., Pommerich, M., & Holland, P. W. (2007). Linking and aligning scores and scales. Springer. https://doi.org/10.1007/978-0-387-49771-6

Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10(4), 255–282. https://doi.org/10.1007/BF02288892

Kolen, M. J., & Brennan, R. L. (2014). Test equating, scaling, and linking: Methods and practices (3rd ed.). Springer. https://doi.org/10.1007/978-1-4939-0317-7

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed.). American Council on Education and Macmillan.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.

Roid, G. H. (2003). Stanford-Binet Intelligence Scales, Fifth Edition. Riverside Publishing.

Thorndike, R. M., & Thorndike-Christ, T. (2010). Measurement and evaluation in psychology and education (8th ed.). Pearson.