CRASE+®
ACT’s Automated Essay Scoring Engine
Automated essay scoring uses computer models to reliably emulate how trained human raters score responses to writing assessments.
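One way to make "reliably emulate" concrete is the agreement statistic used throughout the automated scoring literature: quadratic weighted kappa, which measures how closely an engine's scores track human scores (see Williamson, Xi, and Breyer, 2012, below). The Python sketch that follows is a generic, self-contained illustration of that statistic; it is not CRASE+ code, and the sample scores are hypothetical.

from collections import Counter

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    # Quadratic weighted kappa: 1 minus the ratio of observed to
    # chance-expected disagreement, with disagreements penalized by
    # the squared distance between the two scores.
    n_ratings = max_score - min_score + 1
    n = len(human)
    observed = Counter(zip(human, machine))   # joint human/machine score counts
    hist_h = Counter(human)                   # marginal human score counts
    hist_m = Counter(machine)                 # marginal machine score counts
    numerator = 0.0
    denominator = 0.0
    for i in range(min_score, max_score + 1):
        for j in range(min_score, max_score + 1):
            weight = (i - j) ** 2 / (n_ratings - 1) ** 2
            observed_p = observed.get((i, j), 0) / n
            expected_p = (hist_h.get(i, 0) / n) * (hist_m.get(j, 0) / n)
            numerator += weight * observed_p
            denominator += weight * expected_p
    return 1.0 - numerator / denominator

# Hypothetical scores on a 1-5 rubric for ten responses.
human_scores = [3, 4, 2, 5, 3, 4, 1, 2, 4, 3]
machine_scores = [3, 4, 3, 5, 3, 4, 1, 2, 4, 2]
print(round(quadratic_weighted_kappa(human_scores, machine_scores, 1, 5), 3))

A kappa of 1.0 indicates perfect agreement, while values near 0 indicate chance-level agreement. In practice, an engine's agreement with human raters is typically compared against the agreement between two independent human raters.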
Selected References about Automated Scoring and CRASE+
The CRASE+ team at ACT applies current best practices to produce automated scoring models. The following references shape those practices.
The Standards for Educational and Psychological Testing, by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education (2014)
Guidelines for Technology-Based Assessment, by the International Test Commission and the Association of Test Publishers (2022)
Establishing Standards of Best Practice in Automated Scoring, by Scott Wood, Erin Yao, Lisa Haisfield, and Susan Lottridge (2021)
Public Perception and Communication around Automated Essay Scoring, from Handbook of Automated Scoring: Theory into Practice, by Scott Wood (2020)
Best Practices for Constructed-Response Scoring, by ETS (2021)
A Framework for Evaluation and Use of Automated Scoring, from Educational Measurement: Issues and Practice, by David M. Williamson, Xiaoming Xi, and F. Jay Breyer (2012)
Selected CRASE+ References
The following references illustrate how the CRASE+ engine has been applied to writing assessments.
CRASE Essay Scoring Model Performance Based on Proof-of-Concept and Operational Engine Trainings, by Scott Wood (2023)
Anchoring Validity Evidence for Automated Essay Scoring, from the Journal of Educational Measurement, by Mark D. Shermis (2022)
Communicating to the Public About Machine Scoring: What Works, What Doesn’t, by Mark D. Shermis and Susan Lottridge (2019)
Establishing a Crosswalk between the Common European Framework for Languages (CEFR) and Writing Domains Scored by Automated Essay Scoring, from Applied Measurement in Education, by Mark D. Shermis (2018)
The Impact of Anonymization for Automated Essay Scoring, from the Journal of Educational Measurement, by Mark D. Shermis, Sue Lottridge, and Elijah Mayfield (2015)
NAPLAN Online Automated Scoring Research Program: Research Report, by Goran Lazendic, Julie-Anne Justus, and Stanley Rabinowitz (2018)
An Evaluation of Automated Scoring of NAPLAN Persuasive Writing, by the ACARA NASOP Research Team (2015)
Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring, from Handbook of Automated Essay Evaluation: Current Applications and New Directions, by Susan Lottridge, E. Matthew Schulz, and Howard Mitzel (2013)
Contrasting State-of-the-Art Automated Scoring of Essays, from Handbook of Automated Essay Evaluation: Current Applications and New Directions, by Mark D. Shermis and Ben Hamner (2013)
To learn more, contact ACT at crase@act.org.