Duolingo has published a new “Demographic and Score Properties” document, tracking Duolingo English Test takers for the year ending June 2025. 

Notably, China now accounts for the largest percentage of test takers, at 12.88% of all test takers (down from 15.88% last year).  India now accounts for the second largest percentage, at 12.43% (down from 18.91% last year).

Here are the top ten countries, with last year’s percentage in parentheses.

  • China: 12.88% (15.88)
  • India: 12.43% (18.91)
  • Canada: 5.49% (4.35)
  • USA: 4.23% (3.34)
  • Brazil: 3.49% (3.42)
  • Korea, South: 3.39% (3.51)
  • Mexico: 3.36% (3.12)
  • France: 2.95% (2.09)
  • Nepal: 2.71% (new to list)
  • Pakistan: 2.40% (1.94)

Last year, China and India accounted for about 35% of all test takers.  This year they account for about 25% of all test takers.

About 49% of test takers took the test to facilitate undergraduate studies, while about 35% took it to facilitate graduate studies.

Anyway. The document contains quite a lot of additional data.  Do check it out.  You can read about last year’s document over here.

Here’s a post from the official Duolingo English Test blog about the DET’s multistage interactive writing task.  It summarizes a long article in Language Testing.  Twenty-five page explorations of individual tasks give me great joy.

The article explains that this task is an attempt to address the fact that “most large-scale writing assessments do not reflect real-world and classroom-based composing, where writers develop their ideas over multiple drafts.”

In the DET’s task, test takers are first given an initial prompt on a specific topic.  Next, AI is used to identify which pre-identified themes the test taker’s response has addressed.  Finally, a follow-up prompt is presented which asks the test taker to address one of the themes they have not touched on. It’s slightly more nuanced than my summary suggests, but I think you get the point.
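As a toy illustration of that flow (not Duolingo's actual pipeline, which uses an AI model rather than keyword matching, and whose themes and prompts are not public), the logic might be sketched like this, with every theme, trigger word, and prompt below invented for the example:

```python
# Toy sketch of a multistage interactive writing flow.
# Assumptions: the themes, trigger words, and prompts here are invented;
# the real DET uses an AI model, not naive keyword matching.

PROMPT = "Describe how your city has changed in the last decade."

# Hypothetical pre-identified themes, each with illustrative trigger words.
THEMES = {
    "transportation": {"bus", "subway", "traffic", "bike"},
    "housing": {"apartment", "rent", "housing", "construction"},
    "environment": {"park", "pollution", "green", "air"},
}

def themes_addressed(response):
    """Return the themes whose trigger words appear in the response."""
    words = set(response.lower().split())
    return {theme for theme, triggers in THEMES.items() if words & triggers}

def follow_up_prompt(response):
    """Build a follow-up prompt about an unaddressed theme, if any remain."""
    missing = sorted(set(THEMES) - themes_addressed(response))
    if not missing:
        return None
    return f"Now write about {missing[0]} in your city."

draft = "New subway lines cut traffic, and rent went up near every station."
print(themes_addressed(draft))   # transportation and housing are covered
print(follow_up_prompt(draft))   # asks about the remaining theme, environment
```

The point of the design survives even this crude version: the second prompt is keyed to what the first response left out, so the test taker gets a chance to show range they didn't volunteer.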

When I appeared on the TEFL Development Hub live stream a few days ago, I talked about how some test takers prefer human interlocutors because they can potentially tease out their English skills with follow-up questions, making it possible for them to get full credit for their fluency in the language.  Here we have a style of item that could perhaps replicate the same effect in writing assessment.  The authors note that “[w]ith the advancement in AI technology, Interactive Writing using prompting that is specifically keyed to the content of an initial response offers greater opportunities for measuring test takers’ full writing abilities…”

Some readers might be interested in an article by Maggie McGehee and Daniel Isbell which will appear in a future installment of “Studies in Language Testing.”  It examines the relationship between English test (DET, TOEFL, IELTS) scores and academic outcomes at the University of Hawai‘i at Mānoa (UHM). 

According to the article, “mean GPAs and proportions of student withdrawal/probation were similar for the DET and other tests (or no test), and no differences among them were statistically significant.”

I was happy to see the authors note that “[f]or students admitted unconditionally with higher ELP scores, there were no statistically significant correlations between ELP scores and first year GPA.”

As has been discussed in this space, there are many things which impact student outcomes.  The authors note the existence of students with really high ELP scores, but surprisingly low GPAs.  Go figure.

Interestingly, though, it is mentioned that “[f]or those admitted conditionally, with ELP scores reflecting a lower range of English proficiency, stronger, positive, and statistically significant correlations emerged for students submitting IELTS or TOEFL scores, with low or negative (but not statistically significant) correlations for DET takers.”

It is pointed out a few times that the sample size was pretty small, so further study is needed.

There is also some good stuff in here about the business of English testing, but I’ll touch on that in another post.  Check out Table 1 for a preview, though.  You’ll spot which tests are frequently used… and which are not.

Duolingo’s Q2 results for 2025 were published a few days ago. The Duolingo English Test had revenues of $10,088,000 for the quarter. Since the test costs $70, we might estimate that it was taken about 144,114 times in the quarter. But this is a very rough estimate because not everyone pays $70. Some pay extra for faster results, some pay less by purchasing a bundle or buying through a partner organization. Others pay nothing thanks to the access program (which gave away about 25,000 free tests in 2024).

In Q2 of 2024 the test had revenues of $10,968,000 at $65 a pop, representing something in the ballpark of 168,738 total tests.

This suggests about a 15% decline in volume from last year.
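The back-of-the-envelope method here is just revenue divided by list price; a sketch, assuming (unrealistically, for the reasons above) that every test was sold at list price:

```python
# Rough volume estimate: quarterly DET revenue divided by list price.
# Real volumes differ: some test takers pay more for faster results,
# some pay less via bundles or partners, and some pay nothing at all.

def estimated_tests(revenue_usd, list_price_usd):
    """Estimate test volume as revenue / list price, rounded to whole tests."""
    return round(revenue_usd / list_price_usd)

q2_2025 = estimated_tests(10_088_000, 70)   # roughly 144,114 tests
q2_2024 = estimated_tests(10_968_000, 65)   # roughly 168,738 tests

decline = (q2_2024 - q2_2025) / q2_2024
print(f"{q2_2025:,} vs {q2_2024:,}: {decline:.0%} decline")
```

Treat the output as an order-of-magnitude figure, not a count; the pricing mix alone could move it by thousands of tests in either direction.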

Here’s a nice post on the Duolingo English Test blog about graduate admissions at the University of Missouri.  It points out that students admitted with TOEFL, IELTS and DET scores all perform about the same when it comes to overall GPA.  In 2023/24 DET admits earned the highest overall GPA.  And in 2024/25 IELTS admits earned that honor.

That’s mildly interesting and unsurprising.  Academic success depends on a lot of factors.

More interesting are the total students admitted using each test in the period covered (23/24 and 24/25). Specifically:

  • DET: 124 students
  • TOEFL: 122 students
  • IELTS: 106 students

These figures speak to the urgency with which ETS is overhauling the TOEFL, I think.

Conventional wisdom has it that the TOEFL is trailing the DET badly when it comes to undergraduate admissions, but that TOEFL is still well ahead when it comes to graduate admissions.  This is just one school (and DET’s acceptance for graduate programs remains spotty)… but maybe conventional wisdom requires a slight adjustment.

I read a report today from a test taker who has alleged that the proctor of her at-home test used her personal information to find her Instagram account. He then, allegedly, sent inappropriate DMs.  This sort of allegation is more common than you might think.

I’ve never liked live human proctoring, but I have begrudgingly tolerated it. In cases where it absolutely must be maintained I’ve advocated for an in-house system.  One of the many benefits of this approach is that it gives test makers more control over who can access the personal information of test takers.

That said, if your formula for “in-house” is (proprietary tech) + (a whole bunch of dudes brought in by an outsourcing company) you might as well just stick with one of the big proctoring firms. They’ll probably do a better job.

In any case, the days of live proctoring (in-house or otherwise) are likely numbered. It seems obvious to me that asynchronous proctoring (AI-based security during the test combined with a human review after the test) is the future of at-home testing.  This is Duolingo’s current approach.  ITEP uses it too.  It appears that Pearson will follow suit when they launch their new “Pearson English Express” Test later this year.  I assume that many other test makers are currently trying to figure out how to implement their own async systems.  A decade from now, I don’t think any at-home English tests will utilize live proctors.

How much security is being provided by someone who is simultaneously watching a dozen tests?  Could the same level of security be provided by someone who checks after the test?

Some test makers might protest that their customers prefer to have help with the check-in process.  That may be true if the check-in process is clunky and cumbersome.  But those rough edges can be ironed out.

I’ve taken quite a lot of at-home tests over the years.  I’m always happy when I can take a test without a live proctor, as the process is much more comfortable than doing so with one.  Indeed, I took the ITEP today and the process was pure bliss.

Test makers who eat their own dogfood and, uh, other people’s dogfood know this already.

The incident I mentioned above is bad enough, but a few other stories I’ve heard over the past few years come to mind.  Like:

  1. Proctors insisting that test takers click the “cancel test” button after they have finished the test, causing the results to be cancelled.
  2. Proctors working from public transit.
  3. Proctors extending a room scan to the hallways outside of a test taker’s flat.  And even insisting on a peek inside their building’s elevator.
  4. Proctors mistakenly allowing rules to be broken, resulting in score cancellations.
  5. Proctors forgetting to turn their microphones off. To hilarious effect.

I could continue, but I think you get my point.

The long, long, long awaited Cambridge study about test use in the UK is now available.

For all the heavy lifting the preliminary results have been doing in IELTS marketing over the past year, there is surprisingly little in here about the value (or lack thereof) of specific new tests. That’s not meant to be a criticism, of course. The authors of the study seem to have had a higher purpose.

That’s not to say it is totally bereft of that sort of thing. The study contains statements like this:

“There is a notable divergence in the perceived value of various tests among different groups within institutions. While tests like Cambridge Qualifications are praised for their ability to prepare students for academic study, others, such as the Oxford International Education Group’s (OIEG) ELLT and Duolingo, are viewed with scepticism. Specific concerns were raised about the validity, security, and overall suitability of these newer, more efficient or less established tests. One survey respondent expressed dissatisfaction with ‘the recent decision to accept OEIG’s online ELLT for the China market only (in order to boost recruitment)’ due to its lack of credibility and associated security concerns.”

And this:

“For instance, one respondent noted that ‘students who came with the Duolingo award were not in practice equipped to deal with HE life and study’, echoing concerns found in studies about the adequacy of such tests.”

And this:

“One of the most consistent findings is that IELTS is widely regarded as the international standard or ‘common currency’.”

One imagines that the IELTS partners will continue to lean on this research study when crafting marketing materials in the years ahead. Score users might be wise to keep in mind that the criticisms mentioned in the study are anecdotal and not presently supported by comparative data about actual student outcomes. Often, the statements seem to be based on the perspectives of very small numbers of individuals.

Apart from the above, most of the study highlights concerns about English fluency on campus (quite separate from the use of particular tests) and provides recommendations for how to properly assess the worth of new tests.

I would be remiss if I didn’t link to Nicholas Cuthbert’s video from DETcon London, since it generated quite a lot of spirited discussion.  Notably, Nicholas describes Duolingo as “ahead of the game.”  And also says:  “Duolingo are winning.”

Hyperbole?  Perhaps.  Duolingo is certainly on a winning trajectory in terms of acceptance in key receiving markets, market share, brand awareness, test taker engagement, use of technology, and baffling social media campaigns.  But it is important to note that IELTS still does more test administrations than everyone else combined.  There are still people paying $530 to take an IELTS test.  IELTS will be the market leader for many years to come.  Accordingly, there is still plenty of time for the IELTS partnership to develop a “next-gen” IELTS that eats Duolingo’s lunch.

Heck, Pearson is hoping to do just that in about four months.

Recent messaging from the British Council and Cambridge University Press & Assessment suggests that the IELTS partners plan to double down on their more traditional approach to assessment.  It seems they don’t plan to change the way they assess students, but will instead encourage score users to more carefully consider which tests they choose to accept.

Regardless, I’m convinced that Cambridge has got top people working on… something.  Check out yesterday’s article in TESOL Quarterly, for instance.

A well-informed industry watcher recently expressed some incredulity that LLMs haven’t totally disrupted the high-stakes language testing sector.  He was shocked that people still pay hundreds of dollars to take a test.  My response was that this stuff takes time.  Everyone knows that university governance is a slow process.  Immigration regulations are even slower.  But things might finally be coming to a head – coincidentally both ETS and Pearson announced the existence of their “next-gen” (my term) tests at NAFSA a few weeks ago.  Things are moving a tiny bit faster now.

By the way, you must get to one of these DETcon events if you get the chance.  They are a charming combination of research presentations, community building and Duolingo’s trademark irreverence.  I understand that at the most recent Pittsburgh-based event, Duolingo CEO Luis von Ahn was subjected to an unannounced Yinzer Test.  Not sure what that is, but I suspect it is similar to a Voight-Kampff Test. In any case, the results have not been shared publicly.

Dan Isbell has written a guest post about washback in English test preparation for the Duolingo English Test blog.  It discusses preparation for the DET in particular, and for English tests in general.  Isbell divides test prep into three types:  activities that improve your English in general, activities that help you perform better on a particular test (test familiarization), and activities that help you game a particular test (templates and guessing strategies, for instance).

I understand that a full report on this topic is forthcoming.  I will add a link when it is available.

It’s an interesting thing to explore.  Test preparation always includes at least some good washback.  No matter what test they are preparing for, most test takers complete at least a few practice tests.  As a result, they will spend time consuming stuff in English and producing stuff in English.  This is good. But does it have a really meaningful impact on their fluency in the language?  I don’t know.

The TOEFL iBT contains two 800-word articles which are excerpted from actual textbooks.  Students here in Korea take their preparation pretty seriously and might complete 20 or 30 practice tests before test day (or between several test days).  That means they spend a lot of time reading some pretty dense material in English.  Does that improve their fluency?  Of course.  Does it improve their fluency a lot?  I don’t know.

My test prep niche is writing.*  It gives me great joy to know that my students walk away from their lessons with a noticeably stronger command of English grammar and language use conventions.  But, needless to say, there are faster and more economical ways to learn about sentence fragments and collocations.

Does all of this test prep mean that students spend less time on more useful and effective language acquisition approaches?  Maybe.

Is it the job of a test maker to give a darn?  Or is their only job to accurately measure language fluency? I don’t know.

A few stray thoughts come to mind:

  1. I’m interested to know how the age of a test impacts the way that students prepare for it.  As a test ages, people working in test prep become more and more familiar with its design and can use that knowledge to develop increasingly granular type 2 (test familiarization) strategies.  Elderly readers might recall that in the early years of the TOEFL iBT we had just one official book (badly written) and a handful of books from third party publishers (even worse) to go by.  We didn’t know very much about the design specifications of test items, nor about how speaking and writing items were scored.  Things are obviously much different now.  We know so much nowadays that it might be malpractice not to spend quite a lot of time on test familiarization strategies.  Should tests be meaningfully refreshed on a regular basis to mitigate the impact of this factor?
  2. I love reading about the early history of the Princeton Review.  That firm emerged in the early 1980s when the SAT was long in the tooth and probably at its peak terribleness.  Princeton Review taught students how to eliminate answer choices without actually reading questions.  They also taught students how to recognize unscored sections so they could enjoy a refreshing nap part way through the test.
  3. In 2019 Malcolm Gladwell and his assistant both took the LSAT for an episode of his “Revisionist History” podcast.  They got coaching from the one and only John Katzman beforehand.  The point of the episode is that time management (type 2) is the most important thing when it comes to getting a good score on this test. It made LSAT tutors really cranky.

*See also:  “The Whale,” 2022.

Here’s a new video that summarizes all of the coming adjustments to the Duolingo English Test. It describes the “interactive speaking” question I wrote about on Monday and also notes that:

  1. A “listen and complete” task will be added to the beginning of the “interactive listening” task.
  2. The “read aloud” and “listen then speak” tasks will be removed.
  3. Minimum speaking and writing times will be removed from some of the longer speaking and writing tasks. Test takers can move along whenever they feel they’ve completed the tasks.

UPDATE:  More complete descriptions have been added to the DET Help Center.