Today I want to write a few words about an interesting new text from ETS, published in December 2019. “Automated Speaking Assessment” is the first book-length study of SpeechRater, the organization’s automated speaking assessment technology. That makes it an extremely valuable resource for those of us who are interested in the TOEFL and how our students are assessed. There is little here that will make someone a better TOEFL teacher, but many readers will appreciate how it demystifies the changes to the TOEFL speaking section that were implemented in August of 2019 (that is, when SpeechRater was put into use on the test).
I highly recommend that TOEFL teachers dive into chapter five of the book, which discusses the scoring models used in the development of SpeechRater. Check out chapter four as well, which discusses how recorded input from students is converted into something that can actually be graded.
Chapters six, seven and eight will be the most useful for teachers. These discuss, in turn: features measuring fluency and pronunciation, features measuring vocabulary and grammar, and features measuring content and discourse coherence. Experienced teachers will recognize that these three categories are quite similar to the published scoring rubrics for the TOEFL speaking section.
In chapter six, readers will learn how SpeechRater measures a student’s fluency by counting silences and disfluencies. They will also learn how it handles speed, chunking and self-corrections. These are things that could actually influence how teachers prepare students for this section of the test, though I suspect that most teachers don’t need a book to tell them that silences in the middle of an answer are a bad idea. There is also a detailed depiction of how the technology judges pronunciation, though that section was a bit too academic for me to grasp.
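To make the idea of pause-based fluency features concrete, here is a minimal sketch of the general technique. This is my own illustration, not code from the book or from ETS: the function name, the 0.5-second threshold, and the assumption that a speech recognizer supplies word-level timestamps are all mine.

```python
def fluency_features(words, pause_threshold=0.5):
    """Toy pause-based fluency features (illustrative only, not SpeechRater).

    words: list of (word, start_sec, end_sec) tuples, as a hypothetical
    ASR system might produce for a recorded answer.
    """
    if not words:
        return {"num_long_pauses": 0, "words_per_second": 0.0}

    # Count gaps between consecutive words that exceed the threshold,
    # i.e. the "silences" a rater (human or machine) would notice.
    long_pauses = 0
    for (_, _, prev_end), (_, start, _) in zip(words, words[1:]):
        if start - prev_end >= pause_threshold:
            long_pauses += 1

    # A crude speaking-rate measure: words per second of total response time.
    total_time = words[-1][2] - words[0][1]
    rate = len(words) / total_time if total_time > 0 else 0.0
    return {"num_long_pauses": long_pauses, "words_per_second": rate}
```

A response with one noticeable mid-answer silence, for example, would show up directly in `num_long_pauses`, which is roughly the kind of signal the chapter describes.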
Chapter seven discusses the grammar and vocabulary features that SpeechRater checks for. Helpfully, the book simply presents them in a list, and a diligent teacher might turn that list into a checklist to provide to students. Finally, chapter eight discusses how the software assesses topic development in student answers.
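To give a flavor of what automated vocabulary features can look like, here is a toy sketch of two common lexical measures (type-token ratio and average word length). These particular measures and the function name are my own illustration; I am not claiming they are the features the book lists.

```python
def vocab_features(transcript):
    """Two simple lexical measures (illustrative only, not SpeechRater).

    transcript: a plain-text transcript of a spoken answer.
    """
    tokens = transcript.lower().split()
    if not tokens:
        return {"type_token_ratio": 0.0, "avg_word_length": 0.0}

    # Type-token ratio: distinct words divided by total words, a rough
    # proxy for how varied the speaker's vocabulary is.
    ttr = len(set(tokens)) / len(tokens)

    # Average word length in characters, a crude proxy for word sophistication.
    avg_len = sum(len(t) for t in tokens) / len(tokens)
    return {"type_token_ratio": ttr, "avg_word_length": avg_len}
```

An answer that repeats the same few words would score a low type-token ratio, which is the sort of thing a checklist built from the book's feature list might flag for students.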
Sadly, this book was finished just before ETS started using automated speaking scoring on a high-stakes assessment. Chapter nine discusses how the technology is used to grade TOEFL practice tests (low-stakes testing), but nothing is mentioned about its use on the actual TOEFL. I would really love to hear more about that, particularly its ongoing relationship with the human raters who grade the same responses.