Our Assessment Beliefs

The ASAP assessments will be designed as learning tools that target specific skill clusters aligned to adult education curriculum standards (e.g., CCRSAE, NRS) and workplace skills (e.g., the Occupational Information Network, O*NET), providing actionable diagnostic information that teachers, adult education program staff, employers, and students can use to make instructional decisions.

The assessments will leverage 21st-century technology to:

  • deliver the assessments online across multiple devices,
  • use adaptive testing technology to minimize testing time and maximize measurement precision,
  • focus on problem-solving skills using technology-based assessment-for-learning principles,
  • gather “response process data” to validate the cognitive processes learners use in responding to items,
  • provide universal supports and accommodations to learners, and
  • facilitate rapid reporting of results.
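To illustrate how adaptive testing shortens tests while preserving precision, the sketch below shows one common approach: under a two-parameter logistic (2PL) IRT model, each step administers the unused item that is most informative at the current ability estimate, then re-estimates ability by maximum likelihood. This is a minimal illustrative sketch, not ASAP's actual engine; the function names (`run_cat`, `respond`), the grid-search estimator, and the item bank are all assumptions for demonstration.

```python
import numpy as np

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information an item provides at ability level theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def run_cat(item_bank, respond, n_items=10):
    """Minimal adaptive loop: administer the most informative unused
    item, then re-estimate ability by grid-search maximum likelihood."""
    theta = 0.0                          # provisional ability estimate
    grid = np.linspace(-4.0, 4.0, 161)   # candidate theta values
    used, responses = [], []
    for _ in range(n_items):
        # Pick the unadministered item with maximum information at theta.
        info = [item_information(theta, a, b) if i not in used else -np.inf
                for i, (a, b) in enumerate(item_bank)]
        item = int(np.argmax(info))
        used.append(item)
        responses.append(respond(item))  # 1 = correct, 0 = incorrect
        # Grid-search MLE of theta given all responses so far.
        log_lik = np.zeros_like(grid)
        for i, r in zip(used, responses):
            a, b = item_bank[i]
            p = p_correct(grid, a, b)
            log_lik += r * np.log(p) + (1 - r) * np.log(1.0 - p)
        theta = float(grid[np.argmax(log_lik)])
    return theta
```

Because each item is selected where it is most informative, a shorter adaptive test can match the precision of a longer fixed form; a production engine would also add item-exposure control and a stopping rule based on the standard error of the ability estimate.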

In addition, we will gather data on learners’ digital literacy to understand the degree to which such skills impact learning and assessment. Based on this research, ASAP assessments will be delivered in a manner accessible to all learners, regardless of prior technology experience. We have assembled a team with the needed expertise to address these goals (see Our Team). By the project’s completion, our team will have produced research findings that inform practice or policy and disseminated that information to stakeholders across the nation, helping them develop, adopt, or deploy technology-supported interventions for adult learners as informed by the needs assessment and subsequent phases of the project.

ASAP believes in accessible, justice-oriented, anti-racist, and culturally sustaining assessment practices.

Currently, an assumptive bias appears to underlie traditional assessment development and validation approaches: a failure to recognize the ways in which oppressive systems such as racism may shape the test design and development process. Acknowledging the negative effects that traditional development and validation approaches may have on diverse populations, ASAP believes in accessible, justice-oriented, anti-racist, and culturally sustaining assessment practices. In alignment with these beliefs, ASAP has put forward the following anti-racist statement as an alternative assessment design, development, and validation approach that seeks to disrupt the promotion and enactment of white supremacist hegemonic assessment validation practices.

These practices involve ensuring stakeholders from diverse socioeconomic, cultural, linguistic, racial, and ethnic groups are represented in our test design and development activities. To this end:

  • We strive to build on all adult learners’* community and cultural wealth to develop assessments that more closely connect to these students’ lives and provide adult learners with more meaningful and diverse opportunities to demonstrate competence and skill development.
  • We aim to include all adult learners in our design thinking process as we seek to design assessments that are flexible and customizable to serve multiple populations’ diverse needs. These needs include assisting learners at various levels to learn and acquire competencies at their own level in culturally sustaining ways that do not erase diverse cultures’ ways of knowing and learning.

Moreover, to challenge ourselves to continuously confront the economic, structural, and historical roots of inequity, we work in close collaboration with underserved communities (adult learners, their instructors, employers, and employees) and the leaders serving them to develop assessments that disrupt traditional assessments’ white-centric development approaches. As an alternative, we offer a justice-oriented, antiracist assessment approach that better integrates adult learners’ needs, cultures, and racial/ethnic backgrounds to create an empowering assessment development process that places minoritized learners and their ways of learning and being at the forefront.

*We define adult learners as adults aged 18 and older who are currently enrolled in academic, workforce, or technical training programs within the United States.

Randall, J. (2021). “Color-neutral” is not a thing: Redefining construct definition and representation through a justice-oriented critical antiracist lens. Educational Measurement: Issues and Practice, 40(4), 82-90. https://doi.org/10.1111/emip.12429

Randall, J., Poe, M., & Slomp, D. (2021). Ain’t oughta be in the dictionary: Getting to justice by dismantling anti-black literacy assessment practices. Journal of Adolescent & Adult Literacy, 64, 594-599. https://doi.org/10.1002/jaal.1142

Randall, J., Slomp, D., Poe, M., & Oliveri, M. E. (2022). Disrupting white supremacy in assessment: Toward a justice-oriented, antiracist validity framework. Educational Assessment, 27(2), 170-178. https://doi.org/10.1080/10627197.2022.2042682

We believe:

  1. The local assessment situation (context) matters. We believe local environments have a substantial influence on the stakeholders (adult learners, instructors, employees, employers, and policymakers) impacted by tests and their scores[i][ii]. Accordingly, we invite stakeholders to actively collaborate throughout all aspects of the assessment design process.
  2. Assessments (e.g., complex digital assessments) are best built through multidisciplinary collaboration involving a wide variety of stakeholders and experts (from computer scientists to policy analysts) to maximize the desired, intended consequences from the use of assessments and minimize undesirable, unintended effects[iii].
  3. Assessment processes should be anti-racist, in which we actively disrupt racist beliefs around learning and actively confront the economic, structural, and historical roots of inequality[iv]. We also believe the current and historical role of race and racism should be acknowledged and interrogated in our pedagogical and assessment practices[v].
  4. Sources of construct underrepresentation and construct-irrelevant variance that may differentially disadvantage test-taker groups should be identified and all test development, quality control, validation, and test revision processes should seek to minimize such sources. To this end, we use:
    • Universal Design to maximize test accessibility and enhance fairness for all test takers, regardless of gender, age, language background, socioeconomic status, or disability[vi].
    • Understandardization[vii] to make the assessment process as flexible as possible and allow test takers to access their funds of knowledge throughout the testing experience.
  5. Testing consequences should be anticipated and evaluated. We believe the potential and actual outcomes, both intended and unintended, of using tests in particular ways, in certain contexts, and with certain populations should be examined. We also believe test developers need to include validity evidence based on testing consequences when validating the use of educational assessments, should strive to prevent and minimize negative consequences, and should use structured (anticipatory) frameworks, such as:
    • Integrated Design and Appraisal Framework (IDAF[viii]), expanded evidence-centered design[ix], and equity-centered design[x] to help identify patterns of interactions among stakeholders and various forms of consequences, both intermediate and long term[xi].
  6. Assessment systems should include actionable information to support learning and instruction, and should leverage technology-based assessments to provide personalized solutions that put the learner at the center of assessment design. The axiom ‘what gets tested gets taught’ has been proven right many times; we propose extending it with ‘and what is useful for the learner gets learned.’

[i] Gee, J. P. (2020). What is a human?: Language, mind, and culture. Cham, Switzerland: Palgrave Macmillan.

[ii] Mislevy, R. J. (2018). Sociocognitive Foundations of Educational Measurement. London: Routledge.

[iii] Oliveri, M.E., Slomp, D., Elliot, N., Rupp, A., Mislevy, R., Vezzu, M., Tackitt, A., Nastal, J., Phelps, J., & Osborn, M. (2021). Introduction: Meeting the challenges of workplace English communication in the 21st century. The Journal of Writing Analytics, 5, 1–33. DOI: https://doi.org/10.37514/JWA-J.2021.5.1.01

[iv] McGregor, J. (1993). The effect of role playing and anti-racist teaching on student racial prejudice: A meta-analysis of research. The Journal of Educational Research, 86(4), 215–226. DOI: https://doi.org/10.1080/00220671.1993.9941833

[v] Randall, J, Slomp, D., Poe, M., & Oliveri, M.E. (2022). Disrupting White Supremacy in Assessment: Toward a Justice-Oriented, Antiracist Validity Framework. Educational Assessment, 27(2), 170-178. DOI: https://doi.org/10.1080/10627197.2022.2042682

[vi] American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, D.C.: American Educational Research Association. Retrieved from https://www.apa.org/science/programs/testing/standards

[vii] Sireci, S. G. (2020). Standardization and UNDERSTANDardization in educational assessment. Educational Measurement: Issues and Practice, 39(3), 100-105. DOI: https://doi.org/10.1111/emip.12377

[viii] Slomp, D. (2016). An integrated design and appraisal framework for ethical writing assessment. Journal of Writing Assessment, 9(1). Retrieved from http://journalofwritingassessment.org/article.php?article=91

[ix] Arieli-Attali, M., Ward, S., Thomas, J., Deonovic, B., & von Davier, A.A. (2019). The expanded evidence-centered design (e-ECD) for learning and assessment systems: A framework for incorporating learning goals and processes within assessment design. Frontiers in Psychology, 10, 1–17. DOI: https://doi.org/10.3389/fpsyg.2019.00853

[x] Oliveri, M. E., Slomp, D. H., Rupp, A. A., & Mislevy, R. J. (2021). Principled development of workplace English communication Part 2: Expanded evidence-centered design and theory of action frameworks. The Journal of Writing Analytics, 5. DOI: https://doi.org/10.37514/JWA-J.2021.5.1.03