You can take the new Pearson English Express Test for free by using the coupon code EXPRESS100. The code should work for the next 11 days.
Some folks have reported being charged $39 (each) to send additional TOEFL score reports. It currently costs only $29 when I try to send scores from my account. I don’t know if ETS is doing an A/B test, or if someone in Gurgaon pressed the wrong button.
What does it cost you to send scores? Let me know in the comments.
I finally got around to trying the new room scan for the Duolingo English Test. The room scan is mostly painless. It resembles that of the British Council’s EnglishScore test: the test taker just has to slowly spin around their room while holding the secondary camera (their phone). There is also a second scan, which hasn’t been discussed much in this space. To complete it, the test taker first points the phone at their keyboard, then slowly moves it towards the space behind their computer. It is a little clunkier than the main room scan, and the test taker might need a couple of attempts before doing it properly.
Some have argued that requiring test takers to have a compatible smartphone is burdensome. This is true, to a certain extent. That said, many test takers will find that the phone spares them the burden of carrying a laptop around their apartment to complete the room scan. Remember that most test takers don’t have a cool-guy MacBook like the people who design and market tests. Instead, they have a barbell of a Windows machine with a bunch of peripherals connected via USB and a battery that has to be plugged in all the time because it maxes out at a 5% charge. Doing a room scan with such a machine really sucks. Ask me how I know.
Here’s my latest Substack. It contains all the testing news that was fit to print between October 1 and 21. According to my Substack account, 51% of subscribers opened this issue and 15% of those openers clicked on a link somewhere in the newsletter (so roughly 8% of all subscribers clicked through). That’s pretty cool.
New from Studies in Language Assessment is an article by Daniel Isbell, Dustin Crowther, Jieun Kim and Yoonseo Kim which looks at the speaking tasks included in the British Council’s Aptis test. Specifically, it compares official Aptis scores to assessments of intelligibility and comprehensibility from laypeople recruited from the “Prolific” online research participant pool. Scored responses from 50 test takers were selected, and the lay listeners rated the comprehensibility of each response on a 1–9 scale. For intelligibility, the listeners were asked to transcribe the recordings; those transcriptions were compared to criterion transcripts created by the research team. Among other things, the authors note that “to the ears of layperson listeners… speakers that earned higher Aptis scores were more intelligible and easier to understand.”
I would love to see more of this kind of research. I suppose a similar study could be done of test-taker responses to just about any English test.
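Incidentally, the intelligibility measure is simple enough to sketch. Here’s a minimal illustration in Python of the general approach – it assumes intelligibility is operationalized as the share of criterion-transcript words a listener correctly reproduced, which may well differ from the paper’s exact matching rules:

```python
from collections import Counter

def intelligibility(criterion: str, transcription: str) -> float:
    """Share of words in the criterion transcript that also appear in a
    listener's transcription. A simplified bag-of-words match; the
    study's actual rules (word order, spelling tolerance) may differ."""
    criterion_words = Counter(criterion.lower().split())
    heard_words = Counter(transcription.lower().split())
    matched = sum(min(count, heard_words[word])
                  for word, count in criterion_words.items())
    total = sum(criterion_words.values())
    return matched / total if total else 0.0

# A listener who misses one word in six scores about 0.83:
print(intelligibility("the lecture was about climate change",
                      "the lecture was about climate chains"))
```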
Some really great research this month from Alina Reid in support of the new ISE Digital Test from Trinity College London. The research examines how the test’s new integrated reading/writing task connects to real-life academic writing.
This is important in a world where testing companies are quick to claim that their tests are “fit for purpose” without really supporting that claim. Indeed, the author references studies which seem to question the validity of both the IELTS and TOEFL independent writing tasks.
In the case of Trinity’s test, 64% of surveyed EAP instructors agreed that the task in question resembled the assignments they give in their classes. Which is good, but not great. A greater share – 86% of respondents – agreed that the task engages skills and strategies similar to those used in their classes. That’s a bit better.
Said one respondent:
“…if students are able to write reasonably well [at] this level, they could be equipped with many of the skills and strategies required to write lengthier assignments. You will have concrete evidence as a teacher of the capability of the student in terms of understanding texts, choosing relevant information from sources, synthesizing, and paraphrasing by answering a question like this.”
But another:
“This task does not reflect the real needs of university students. University students need to engage with extensive reading and evaluate the suitability of texts. They would never have to write something so short in forty minutes. Instead, they would have weeks to work on a more extensive research task which would result in a longer piece of academic writing. Shorter texts are more simple to organize and may not emphasize the importance of coherence.”
My own perspective is closer to the latter comment. Tests can be great measures of one’s ability to use the English language. But I’m wary of claims that any two- or three-hour test can indicate a student’s ability to do meaningful academic work. Especially when the test maker just asks us to take their word for it.
I took the Gateway English Test from English3 a few days ago. I had a lot of fun with it. I’m not a psychometrician, so I can’t evaluate the validity of the items, but I was happy to see that the English3 team put some thought into designing interesting tasks. There are some negative points, but I’ll save those for the end of this post, lest anyone get the wrong idea about me.
A few notes:
- This is a 90-minute test with a $99 price tag. Proctoring is asynchronous, which I guess puts it in the “contemporary affordable” quadrant with DET and PEXT. Results come in five days or you can pay extra to get them more quickly.
- The test includes meaningfully integrated tasks! They include listening to a lecture and answering a question about it orally, listening to a conversation and giving an opinion about it in writing, and listening to a Zoom call and summarizing each speaker’s points in writing. These are fun.
- Content is mostly “academic” with some “campus life” stuff. I didn’t spot any non-campus “daily life” content. Reading and listening passages resemble what you might find on other tests.
- Since this is a 90-minute test, there is still time for a complete essay – indeed, there is quite a lot of written and spoken production overall. Going with 90 minutes is a tough call in a world where 60-minute tests seem to be the future, but that extra half hour gives designers the freedom to include plenty of speaking and writing, if that’s what they value.
- The test uses the same on-screen note-taking widget as the ITEP. I like that.
- The test starts with five unscored speaking questions. Responses are shared with score users.
- The list of accepting schools seems to skew toward faith-based institutions, which is really interesting. While taking the practice test I sort of sensed content that might appeal to the CLT folks, but didn’t pick up on any of that in the actual test.
- One almost senses that if the TOEFL team had had a bit more time to think about their relaunch, they might have come up with a product sort of like this one. This is a fun, non-threatening test that includes an extended writing task, a ton of speaking, meaningfully integrated tasks and a splash of “campus life” stuff.
Meanwhile, some of the not-good stuff:
- Security seems dated. This is an asynchronously proctored test that utilizes neither a secure browser nor a secondary camera. I think institutions expect a bit more in 2025.
- Scoring is wonky. The maximum score is supposed to be 600 points, but I scored 605 points, and my listening score was 680/600. Whoops. (See the sketch after this list for one way that can happen.)
- Instead of linking my scores to the CEFR, my score report just contains the letters “CEFR.” Hmmm.
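For what it’s worth, a score that overshoots its ceiling usually points to a scaling step that was never clamped. Here’s a purely hypothetical sketch – none of these numbers or names come from English3 – of how an unclamped linear raw-to-scaled conversion can spit out 680/600:

```python
def scale(raw: int, raw_max: int, scaled_max: int = 600) -> int:
    """Hypothetical linear raw-to-scaled conversion. If raw_max is
    calibrated for a shorter form than the one actually delivered,
    nothing stops the result from exceeding scaled_max."""
    return round(raw * scaled_max / raw_max)

# A 34-item section scaled against a 30-item conversion table:
print(scale(raw=34, raw_max=30))             # 680 -- past the ceiling
print(min(scale(raw=34, raw_max=30), 600))   # clamping would fix it
```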
Not a ton to report this month, as much of my reading time was spent on fiction, which I don’t usually highlight here. But I did check out a few relevant items.
First up, I read Norbert Elliot’s “On a Scale: A Social History of Writing Assessment in America.” This might be the best work on the subject, though it is admittedly a niche one. I learned a lot about why ETS raters grade TOEFL essays holistically – basically, it is pretty hard to get two raters to score an essay the same way when they are assigning separate scores to various categories. That answers some long-standing questions I’ve had about the IELTS test.
Next, I read the April 4, 2024 issue of the London Review of Books. I quite liked an article called “Zzzzzzz” about why we sleep. The topic of sleep has appeared on the TOEFL quite a few times. I think I’ll even add a question about sleep to the book I’m working on right now.
Finally, I read the latest dispatch from the Luddite Club. Now, this is only available in print form, and I don’t want to break that sacred trust by sharing the contents here, but if you happen to be a Luddite (or just enjoy receiving traditional post), I do recommend joining their mailing list.
Also: I have decided to supplement this column with something called “You Should Read More Ephemera,” which will encourage everyone to read more of the odds and ends of the sort that will appear on the new TOEFL starting in January. I’ll do that by cutting out and photographing some of the various bits and bobs of English writing that cross my path. First up is a little travel guide to the mountain I look at from my window every morning, which came from a copy of “Stars and Stripes Korea” that I picked up the last time I visited Incheon Airport. Enjoy:
The IELTS partners have published a new guide to assessment literacy and picking English tests. It’s pretty good.
They suggest asking five questions:
- Is there research to support test validity?
- How are speaking and writing assessed?
- How do your tasks align with academic demands?
- What security measures are in place?
- Is it possible to review how test taker performance is assessed against specific criteria?
These are all excellent questions. Score users should certainly ask them. One might also read Goodine’s Guidelines.
From a business perspective, these questions seem to highlight how IELTS Official continues to find itself in a tight spot. On one hand, it faces competition from more contemporary tests like PTE Academic and the Duolingo English Test, which are successfully making the argument that shorter items (what IELTS calls “limited-response items”) have a role to play in snapshotting an applicant’s language proficiency (even if only in combination with longer items). On the other hand, it also faces competition from fairly traditional tests like LanguageCert which, as recent events have suggested, may be supported by much more stringent security measures than the IELTS is.
You can read a transcript of IDP Education Ltd’s Annual General Meeting right here. It’s mostly quite dull, but it is interesting to see how much IDP’s future depends on governments becoming more open to immigration. That’s a dicey proposition in a world where voters around the globe are becoming more nativist and more nationalist with every passing day.
While I was searching the warrens of the College Board’s website, I stumbled upon something I didn’t know I needed: this technical manual for the AP International English Language Test! Developed for the College Board by ETS and offered from 1997 to 2002, the APIEL was used by some schools in the United States alongside TOEFL and IELTS to confirm English fluency when admitting international students.
Testing enthusiasts might find the content of the manual somewhat interesting. This was a three-hour test (plus time for instructions), and if you squint at it long enough you can see how it may have been influenced by research being done in support of the TOEFL iBT. For instance, this was the first ETS test with both a speaking section and a writing section. Meanwhile, in true AP fashion, the reading and listening items were sourced from real-world publications rather than the somewhat stilted constructions we see in tests today. I even spotted an uncredited excerpt from “The Spy Who Came in From the Cold.”
Perhaps some readers of this space contributed to the development of the test and have some memories to share.
This test actually came up in a conversation I had at DETcon 2024; another attendee reminisced about how ETS held similar events in the 90s to promote the product. Recall that back then ETS ran most of the College Board’s testing programs.
This seems to have been the College Board’s only foray into the world of high-stakes English testing (aside from their early management of the TOEFL). I’ve often wondered if they have ever considered getting back into this market. There is still some potential for profit, I think. Just off the top of my head I can count TEN firms trying to make money on high-stakes testing for college admissions in the UK, but only four trying to profit from the much larger US market.
Pearson’s nine-month trading update describes the PTE as having “strong performance” in Q3. Dunno what that actually means, but there ya go.
City & Guilds Institute has sold its assessment business to PeopleCert (they own LanguageCert) for an undisclosed amount. According to FE Week, CEO Kirstie Donnelly and almost all of City & Guilds’ 1,400 staff will move over to the PeopleCert organization. They will leave behind a very wealthy charity which will, one assumes, be seeking a new role to play in the world. PeopleCert has a press release out.
Recall that PeopleCert had earlier acquired City & Guilds’ English assessment business, from which it formed the LanguageCert group of tests.
I’ve written quite a few times about non-profit assessment organizations selling their assets to buyers from the for-profit world. There will be more sales to come, I’m quite certain.