The new issue of “Language Assessment Quarterly” is the best thing I’ve read this year. I’m certain that anyone who is interested in the stuff I write about in this space will enjoy it a lot. Best of all… the articles are all free for a few months, so click the “download” button while you can.
I’ll link here to the introductory editorial by Xiaoming Xi, which is absolutely fantastic work (in later posts I’ll link to some specific articles from the issue). Xi’s editorial is a detailed survey of the topics explored in the issue, including:
- The use of automated scoring in assessment. Xi argues that there is a need for more validity research in this area given the increased use of AI scoring in a variety of contexts. She also touches on when it is appropriate to use a “black box approach” to AI scoring. I’ve written here about my concern that advocates of AI scoring sometimes use a “just trust us, bro” approach instead of publishing detailed documentation of how the scores are actually generated. Is this appropriate?
- Automated feedback. This is a key issue for people interested in test prep. As I’ve mentioned many times, test prep is a more self-guided journey than ever before. Thanks to new tools that make use of AI, students are increasingly able to prep for tests on their own instead of relying on costly tutoring. Xi lays out some key validity issues that need to be considered before individuals rely on automated feedback. My pals at ETS will like this part, as they have recently begun providing automated feedback to all test-takers, rather than only the select few who sign up for specific products.
- Remote proctoring technology. This topic keeps me up at night. Really. Remote proctoring can be very good. It can be very bad. It can also be frigging horrific. That isn’t really the topic of the article, but Xi notes that there are issues surrounding remote proctoring that “may challenge the fairness and justice of tests.” She also raises potential validity issues surrounding the use of remote proctoring. It isn’t mentioned in the article, but at least one major test has a much higher average score when taken at home. Is that something that is worth studying? Should all test-makers release that information?
There is much more, but I will leave it at that. Let me know what you think.