Both TestAS formats were designed to provide the best possible reflection of academic performance. The development and compilation of the tasks are based on scientifically proven principles and empirical testing. To ensure that both versions of the test are valid, reliable, objective and fair, statistical analyses and comparisons of TestAS scores with academic performance are carried out regularly.
The structure of TestAS, with its core and subject-specific modules, is based on the differentiation between so-called fluid and crystallised intellectual abilities.
The core module tests logical reasoning as well as analytical and abstract thinking. These abilities are relatively stable, which means that they cannot be changed to a great extent through training (Wilhelm & Kyllonen, 2021).
The Subject Modules require an understanding of problems typical of the specific field of study as well as the application and transfer of solution principles. They therefore relate to abilities that a person has acquired through long-term education and experience (including the ability to work with subject-specific forms of presentation such as texts, tables, figures, formulas, etc.), in addition to logical reasoning. Embedding the questions in thematic contexts increases the informative value of the test takers' answers with regard to their later performance in their studies.
Most of the questions are answered in a multiple-choice format with exactly one correct solution. For secure, standardised and objective evaluation, this is the format of choice. It has also been shown to be suitable for measuring crystallised abilities, i.e. abilities that have been acquired through educational learning processes (Goecke, Staab, Schittenhelm & Wilhelm, 2022).
The predictive power of standardised aptitude tests is continuously reviewed in individual studies as well as in meta-analyses. Meta-analyses aggregate the correlations between test results and later academic success reported across many individual studies and therefore rest on a very large combined sample. For German-speaking countries, a strong correlation between test results and subsequent university grades, and thus a high prognostic validity of the tests, has been established (Schult, Hofmann & Stegt, 2019). In addition, the study showed that university aptitude tests have so-called incremental validity over the school-leaving grade (in this case, the German Abitur). This means that the prognosis is most accurate when the school grade and the test result are considered together (ibid.).
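The idea of incremental validity can be illustrated with a small statistical sketch: fit one regression predicting university grades from the school-leaving grade alone, a second one from school grade plus test score, and compare the variance explained. The sketch below uses synthetic data and made-up effect sizes purely for illustration; it does not reproduce the actual meta-analytic figures from Schult, Hofmann & Stegt (2019).

```python
import numpy as np

def r_squared(X, y):
    """Fit ordinary least squares and return the coefficient of determination R^2."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 500
# Hypothetical, standardised predictors: school-leaving grade and a
# correlated aptitude-test score (coefficients are illustrative only).
school_grade = rng.normal(0, 1, n)
test_score = 0.4 * school_grade + rng.normal(0, 1, n)
# Simulated university grade depends on both predictors plus noise.
uni_grade = 0.5 * school_grade + 0.3 * test_score + rng.normal(0, 1, n)

r2_school = r_squared(school_grade.reshape(-1, 1), uni_grade)
r2_both = r_squared(np.column_stack([school_grade, test_score]), uni_grade)
# Incremental validity: the combined model explains more variance
# than the school grade alone.
print(r2_both > r2_school)
```

The difference `r2_both - r2_school` is the share of variance in later grades that the test explains over and above the school grade, which is what "incremental validity" quantifies.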
Goecke, B., Staab, M., Schittenhelm, C., & Wilhelm, O. (2022). Stop Worrying about Multiple-Choice: Fact Knowledge Does Not Change with Response Format. Journal of Intelligence, 10(4), 102.
Schult, J., Hofmann, A., & Stegt, S. J. (2019). Leisten fachspezifische Studierfähigkeitstests im deutschsprachigen Raum eine valide Studienerfolgsprognose?: Ein metaanalytisches Update. Zeitschrift für Entwicklungspsychologie und Pädagogische Psychologie, 51(1), 16–30.
Wilhelm, O., & Kyllonen, P. (2021). To predict the future, consider the past: Revisiting Carroll (1993) as a guide to the future of intelligence research. Intelligence, 89.
g.a.s.t. conducts research, both in its own projects and in cooperation with scientists at universities, on the assessment and testing of linguistic and cognitive competencies in the higher education context, as well as on digital learning.