It appears that the Fulbright Scholarship program for Pakistan has stopped accepting TOEFL scores. Starting with the 2024 cycle, shortlisted candidates will need to submit a Duolingo English Test score.  This is a big deal, as Pakistan has the largest Fulbright program in terms of funding (according to the Fulbright Pakistan office).

More details here. The official requirements are listed here. Let me know if you spot changes to Fulbright scholarships in other countries.

A note for my friends at ETS: Someone needs to go to New Jersey so they can press the button to send PDF score reports that were supposed to be sent during the holidays. We’ve talked about this holiday-related bug before. 😀

If ya don’t fix it, it will be worse over the Christmas and New Year holidays!  If you are one of the few people who have been at ETS for longer than 1.5 years, try to remember what happened during Christmas 2021 and Christmas 2022!

So a real grab bag in the “You Should Read More” column this  month.  That means it was a good reading month for me… but maybe not a great month for you if you are looking for stuff perfectly suited for TOEFL prep.  In any case, let’s get right to it…

  • First up, a reminder of the two book reviews I wrote this month. Check out my review of the new edition of TOEFL Essential Words.  The book remains a great resource for TOEFL prep, though the new edition has a bunch of errors in its description of the shorter TOEFL test.  Whoops.  Also, it seems to only be available as an ebook right now.  Next up, I reviewed IELTS 17.  Obviously the IELTS is a totally different test, but the articles used in the reading section are great practice if you want to read academic content.
  • Next, I read the September/October 2023 issue of Analog Science Fiction & Fact.  As always, you won’t be able to read any of its content unless you have a subscription, but I will mention that the issue’s “Guest Editorial” by Bryan Thomas Schmidt and Brian Gifford about sacrificing privacy to increase safety inspired the creation of a specific TOEFL academic discussion question for a client.   And a poem called “Object Permanence” by Marissa Lingen inspired the creation of a speaking question about, uh, object permanence. 
  • Later, I finally pulled Stanley Kaplan’s autobiography Test Pilot off my shelf.  If you are into the history of standardized testing in the USA and/or the history of preparation for standardized testing, this one is worth finding.  Here’s what I wrote on Goodreads: “A very short book, but interesting if you are into the history of standardized testing in the USA. You’ll read about Kaplan’s founding, its tussles with ETS and the Princeton Review, and about the sale of the company to the fine folks over at the Washington Post. I wish Kaplan had written more about his interactions with ETS regarding the SAT, as that is still pretty relevant to today’s world.”
  • Following along (but still behind) with the Norton Library Podcast, I read Oedipus the King.  I don’t recommend it, but I mention it here because I enjoy posting updates about this read-along.
  • Finally, I read the September 2023 issue of History Today.  I liked “Jane Eyre Goes to the Theatre,” about an unauthorized theatrical production that launched shortly after the famous novel was published.  Back in the day, it seems, anyone could do anything they wanted with someone else’s intellectual property. Also worth checking out is “Signs of the Zodiac: The Dendera Dating Controversy,” about the discovery of the Dendera Zodiac in Egypt and its arrival in Paris.

That’s all for this month, but check back in about 30 days for fresh recommendations.  Keep studying.

I was doing some IELTS tutoring earlier this week and I figured it would be fun to write a “review” of one of the numbered IELTS practice test books.  This is, I guess, a review of “IELTS 17” but it could be used as a review of any of the books… they are all pretty much the same (but new editions more closely match the current style of the test).

Any review must begin by thanking Cambridge for cranking out one of these books every year. Thanks to these books, people preparing for the IELTS have a ton of material to work with. The books keep pace with changes to the test, even though those changes are pretty minor.  As of the writing of this review, there are 18 such books.

Each book contains:

  1. A short introduction that describes the format of the test and how it is scored.
  2. Four practice tests, with audio provided via QR codes.
  3. Transcripts of the audio portions.
  4. Answer keys.
  5. Sample answer sheets.
  6. Sample essays.

There is also a single use code that will grant you access to a “resource bank” online that mostly duplicates the stuff available via the QR codes.

Speaking of the QR codes, it pleases me greatly that Cambridge provides access to the necessary audio without a limited-use code. That means that library patrons and second-hand shoppers can use the books. That compares favorably to the most recent official TOEFL prep material. Those books are useless for library patrons as the audio files can only be downloaded four times.

My only quibble is that the books are pretty expensive considering their slim size.

A few notes for teachers and students:

  1. There are 18 editions of this book as of the writing of this review. Each edition has different tests.
  2. Editions 13 and above are generally considered to be the most accurate books, as they match slight changes to the end of the listening section.
  3. That said, editions 6-12 are pretty darn close to the real test.
  4. Editions 1-5 should be avoided as they are quite out of date.

In case ya missed it, here is an email I got from ETS about their GRE sale for Black Friday:

Through Nov. 27, get the following deals:

Note that to take advantage of both the test and prep discounts, you’ll need to make two separate purchases, as our system allows for only one promo code per transaction.  

Note that this deal is only good for registrations in Canada and the USA.  The code will stop working at the end of November 27, but it might be worth trying on November 28 given time zone issues.

The 990 form filed by ETS for the year ending September 2022 is now available via Propublica and the IRS.  Some highlights follow:

  • Revenue was 1.1 billion dollars (down about 26 million from the year before)
  • Salaries accounted for 346 million of spending (down about 28 million)
  • Total expenses were 1.1 billion dollars (about the same)
  • Revenue less expenses was a cool 5 million on the year (down about 42 million)
  • Total assets owned by ETS are valued at about 1.7 billion dollars (down a whopping 257 million)

The top earner for the year was former president Walter Macdonald, who was paid about 1.3 million dollars.

Speaking of top earners, to my eye only one of the thirteen most compensated employees for the year is still with ETS today.  The only one remaining seems to be Ralph Taylor Smith, who manages ETS’s private equity holdings.  Out the door are (as far as I can tell): the President, the Chief Information Officer, the COO Global Education, the Chief Financial Officer, the General Counsel, the Senior VP of Research and Development, the Chief Marketing Officer, the VP Operations, the Treasurer, the Chair of Policy Evaluation and Research, the COO (two different guys), and the (combination) Deputy Counsel & Chief Diversity Officer.  Along with others.  That is quite a change in leadership.

ProctorU was paid 26.3 million dollars for proctoring services.

$125,000 was spent on lobbying. That’s more than usual.

The ETS hotel brought in revenue of 2.6 million dollars.  Surrounded by 370 acres of peaceful woodlands, it’s a perfect place for meetings, weddings and other special events.

The ETS hotel had expenses of 2.8 million dollars.  One popular YouTuber described it as “sort of like a glammed up university residence.  I guess.”

Fun stuff like Kira Talent and Vericant are on the books now.

Why should one care?  One probably shouldn’t care.  But ETS is a tax-exempt organization that still has an outsized influence on the lives of millions of young people in America and around the world. The point is not to gawk at the large numbers but just to share things that smarter people than me ought to be writing about.

The eighth edition of “TOEFL Essential Words” by Steven J. Matthiesen was published a few days ago.  So far it is only available as an ebook, but I’ve got my fingers crossed that a printed version will be provided soon.  Note that previous editions of the book were published as “Essential Words for the TOEFL.”

This remains one of my favorite TOEFL books. While it focuses on just a small slice of one’s preparation for the TOEFL, it handles that slice very well.

So what does it contain?

After providing a brief overview of the TOEFL Test, a detailed overview of the TOEFL reading section, and a few notes about “improving your TOEFL Vocabulary,” the book gets to what people really want – words.  Thirty lessons worth of words, to be exact.

Each lesson consists of:

  • About 17 words
  • Dictionary-style definitions of each word
  • A synonym quiz
  • 10 TOEFL vocabulary questions featuring the words
  • An answer key

This is great.  You can use the above to learn about 500 words that might appear in the reading section of the TOEFL.  This makes the book a valuable part of a healthy TOEFL study plan.

A decent (but not perfect) reading practice test is provided at the very end of the book.  It consists of three articles with 13 questions (all types, not just vocabulary) for each.

Curious about the “difficulty level” of the words?  Here is a list of five words chosen via a random number generator:

  • Elicit
  • Partisan
  • Aggravating
  • Exceptional
  • Selective

Note that the words seem to be pretty much the same as those contained in the seventh edition of the book.  I spent a decent amount of time checking the editions side by side, but didn’t notice any differences.  I am sure some edits were made in the preparation of this edition, but I didn’t spot any.  That is a bit of a letdown, as every previous edition of this book contained a decent amount of revisions.

That brings me to the bad part of this review. As most readers know, the TOEFL iBT Test was shortened this year. Chapter 1 of this book was revised to reflect these changes… but the revision was done poorly. The chapter incorrectly states the amount of time given to complete the reading section, the number of listening passages, and the amount of time given to complete the writing section.  It also incorrectly states the amount of time given to prepare for the speaking tasks.  Since this appears to be the only stuff actually revised in this edition, I’m a bit disappointed. This doesn’t take away from the value of the actual content people will study, so it isn’t a big deal… but someone should have done better.

It is worth mentioning that the book also attempts to explain the specifics of the TOEFL ITP, which is a whole different test that I suspect most readers will have no interest in.  For the sake of coherence, that content should probably be shuffled off to a separate chapter, where it can be easily ignored.

It is with great sadness that we note the passing of the Miller Analogies Test. Introduced in 1926, it will be administered today for the final time, if anyone has bothered to sign up. Created by W. S. Miller of the University of Minnesota to assess applicants to graduate schools, it was offered (for that purpose) by a series of owners, including the Psychological Corporation, Harcourt and (finally) Pearson.

It’s a weird little test. People used to have such faith in analogies as a tool to predict graduate school performance. And the MAT was nothing but analogies. 

On the MAT you might get a question like this:

Plane : Air :: Car : (a. motorcycle, b. engine, c. land, d. atmosphere)

Or something like this:

Seek : Find :: (a. locate, b. book, c. retrieve, d. listen) : Hear

Those are decent measures of one’s intelligence.  But you might also get something like this:

Salt : Hypertension :: Sugar : (a. cholesterol, b. carbohydrates, c. hyperthyroidism, d. diabetes)

Funny, right?

Or you might get this:

Frost : Poetry :: Miller : (a. grain, b. drama, c. literature, d. bard)

Does one’s knowledge of the Western canon predict one’s success at graduate school?  More on that later.

These are all taken from the Official MAT Study guide.  Here’s my favorite:

Napoleon : Pergola :: (a. baker, b. general, c. lumber, d. trellis) : Carpenter

Give yourself a moment to think it out.  I’ll put the answer at the end of this post.  

The best discussion of this test might be in the pages of Barron’s “How to Prepare for the Miller Analogies Test” by the esteemed Robert J. Sternberg.  He quotes the test’s technical manual as saying that “the test items require the recognition of relationships rather than the display of enormous erudition.”  Alas, he suggests… that might be an overstatement.  He says:

“The test measures the extent to which an individual has become acculturated to the concepts of Western (and particularly white middle class American) civilization”

This seems like a test specially designed to amuse Stephen Fry.

Is it a valid predictor of success in graduate school?  Says Sternberg: “the MAT generally affords a low level of predictive accuracy to those who use it.”  But also:  “the MAT is about as good a predictor of graduate school performances as any other test around.”

I chuckled.  One of the knocks against the SAT has always been that you can replace it with just about anything that is mentally challenging and can be studied for and the scores will be just about as useful. Sternberg’s observation (written before I was even born)  hints at that reality.

Here’s the answer to the above question:  This analogy makes no sense if you think of Napoleon as the French general and emperor. However, a napoleon is also a pastry. Therefore, a napoleon (the pastry) is made by a baker (option a), just as a wooden pergola (a trelliswork arbor or patio covering) is built by a carpenter.

Here’s one more:

Sinanthropus : Pithecanthropus :: (a. Peking, b. Hong Kong, c. Cairo, d. Kabul) : Java

Think you can figure it out?  The answer is in the study guide.

I emailed Pearson about getting a copy of the MAT Technical Manual before they all get pulped but, sadly, they didn’t write back.

The US State Department just released its “Open Doors” data for 2022/23. A few highlights:

  1. There are now 1.05 million international students in the USA (a jump of 11.5% from last year). That’s the highest number since before the pandemic and almost on par with the highest number ever (1.09 million in 2018/19).
  2. International students make up 5.6% of the total US enrollment, which is the highest ever.
  3. Total US enrollment is just 18.9 million, the lowest number since 2007/2008.
  4. The total number of students coming from India jumped 35%. The total number of graduate students coming from India jumped 63%. No wonder testing companies are focusing on this market.
  5. The number of students coming from China dropped just a shade (-0.2%).
  6. The top countries of origin for students are China (289K), India (268K), South Korea (43K), Canada (27K) and Vietnam (21K).
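For what it’s worth, the enrollment-share figure in point 2 checks out against the headline numbers. A quick back-of-the-envelope calculation, using only the figures quoted in this post:

```python
# Cross-check of the Open Doors highlights above, using only figures from this post.
international = 1_050_000      # international students in the USA, 2022/23
total_enrollment = 18_900_000  # total US enrollment
print(f"{international / total_enrollment:.1%}")  # prints 5.6%
```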

Yes.  Unofficial TOEFL scores sometimes change when official scores are reported.

I’ve had some reports from TOEFL test-takers that their “official” reading and listening scores are different from the “unofficial” scores seen at the end of their test.  I don’t know why this occurs, but it does. Perhaps score equating (for new test forms) is being done some days later due to the removal of unscored R and L questions.

If you have experienced this, please leave a comment below.  If you haven’t experienced this, leave a comment as well!

The folks behind IELTS recently published a white paper encouraging institutions to think carefully about the language tests they accept. The paper seems, in part, like an effort to push back at the use of AI and automated scoring in language tests.

It says:

“In order to effectively address each element of the socio-cognitive framework and to ensure all aspects of a language skill are elicited, it is vital to move beyond simple multiple-choice or fill-in-the-blank questions to incorporate more diverse tasks that activate the skills and abilities expected of students by higher education institutions.”

Regarding the use of AI and “algorithmic scoring” in language testing, the authors note:

“…unlike algorithmic scoring… the IELTS Speaking test cannot be ‘hacked’ using gaming techniques that can trick mechanical evaluators into mistakenly evaluating speech as high quality when it is not.”

It notes that algorithmic scoring “requires students to generate predictable patterns of speech in response to fixed tasks,” unlike the IELTS speaking section which “gives the student the best opportunity to be assessed on their communicative proficiency.”

Of writing assessment, the paper notes:

“Given the nature of writing and its importance to learning new knowledge and communicating ideas, there are few shortcuts that can provide the same level of evaluation as an expert trained in writing assessment.”

The paper includes a side-by-side comparison of the two IELTS writing tasks and five “algorithmic scoring” tasks. Weirdly, the authors couldn’t name the test containing those five tasks.

Also included is an infographic about claimed shortcomings of AI-generated reading tasks and a note about the challenge of assessing reading skills “in a truncated period of time.”

The paper has some stuff about listening, but I think you get the point. Beyond singing the praises of IELTS, it really seems like BC and IDP are pushing back at recent trends in the testing industry. And at their competitors.

The closing remarks (which are highly recommended reading) include this:

“While there may be assessments on the market that promise quicker results, more entertaining formats, or easier pathways, the question institutions and students alike must ask is: at what cost?”

There are also a few words about “inherent duty.”

I’m not informed enough to know if the above criticisms are valid, but it is good when testing companies justify their existence and their products. It is also good for tests to be quite different from each other. The last thing we need is a blob of samey tests used for all possible purposes.

Will this make a difference? Well, I haven’t seen any evidence that institutions actually read this sort of stuff. University leaders seem to pay scant attention to the details of the tests they accept – it’s hard enough to get them to adjust scores to match new concordance tables or to stop “accepting” tests that ceased to exist years ago. But things could change.

Duolingo’s Q3 numbers report revenue of 10.6 million dollars from the Duolingo English Test for the quarter. That’s an increase of 30% over Q3 of last year. Note that the price of the test increased by 20% between those two quarters.

At $59 a piece, we can assume that the test was taken about 179,000 times in the quarter. That said, the real number is probably somewhat higher since the company sometimes offers discounts and freebies. Meanwhile, I estimate that the test was taken about 167,000 times in Q3 of last year.

This list shows growth in revenue (from the test) since the company went public:

Q3 2023 – 10,600,000
Q2 2023 – 9,800,000
Q1 2023 – 9,970,000*
Q4 2022 – 8,410,000
Q3 2022 – 8,192,000
Q2 2022 – 8,036,000
Q1 2022 – 8,080,000
Q4 2021 – 8,095,000
Q3 2021 – 6,695,000
Q2 2021 – 4,833,000
Q1 2021 – 5,035,000
Q4 2020 – 4,197,000
Q3 2020 – 5,607,000
Q2 2020 – 4,598,000
Q1 2020 – 753,000

*Q1 of 2023 seems to be the high point for the number of tests taken, at about 203,000 ($49 each).
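For transparency, here is the back-of-the-envelope arithmetic behind those volume estimates. The prices ($59 now, $49 a year ago) come from the post itself; treat the results as rough lower bounds, since discounts and freebies mean some tests generate less than full price:

```python
# Rough DET volume estimate: reported quarterly revenue divided by list price.
# This understates the true count, as discounted and free tests pull the
# average revenue per test below the list price.
def estimated_tests(revenue_usd: int, price_usd: int) -> int:
    return round(revenue_usd / price_usd)

print(estimated_tests(10_600_000, 59))  # Q3 2023: ~179,661
print(estimated_tests(8_192_000, 49))   # Q3 2022: ~167,184
```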

The overall numbers coming out of Duolingo as a whole were pretty rosy and shares are up up and away.

As promised, here are a few notes about the Versant by Pearson English Certificate.  By way of a disclaimer, some folks at Pearson gave me a voucher so I could take the test for free.

Likes:

  1. As I said in an earlier post, my favorite part is that the practice test accurately simulates the test-day experience, including the same UI and security checks.  Click through to my profile for 500 words about that.
  2. The UI is, generally, pretty decent.  I like the absence of tense beeps and tones and I appreciate that the user has some control over the flow of the test via buttons that move things along as needed.  There is a bouncing “spectrogram” (probably not the right word) that indicates audio is being detected by the test.
  3. The proctoring is asynchronous.  I know this generates a lot of dialog whenever I bring it up, but I think the whole high-stakes English testing industry will go this route in the future.  It is probably for the best.
  4. There are some really challenging questions here.  The reading section required me to make some tricky inferences.  The “integrated speaking” question that requires test-takers to listen to (and later summarize) a conversation between three speakers is really tricky to do without note-taking.  I’d love to see this sort of thing on other tests.
  5. It uses the “sign in with Google” service.  Every test maker should provide this option. The cost of implementation will be recouped by reduced customer support costs. I promise.
  6. Test-takers get a Credly badge they can easily share on social media.  Other test makers should provide something like this, if only for the free advertising.

Dislikes:

  1. Some of the test security prompts were clunky.  I received a prompt indicating that I had two microphones on my system, and was told that was not allowed.  It did not indicate which microphones it had detected, though.  I flicked off my Bluetooth headset but there was no confirmation that I had solved the issue.  I just proceeded and hoped for the best.  More detailed feedback would be welcome.  The need for this is especially urgent on tests without a live human proctor.
  2. I received a score of “below level” in writing on the practice test.  I think this is likely due to the vagaries of wholly AI scoring rather than my poor writing skills.
  3. Like on most modern tests, the score report is something of a black box.  I got an 83 in speaking and an 83 in writing.  Where did those numbers come from?  How were the various tasks weighted and considered?  That isn’t really indicated. Same for R and L, of course. I still prefer the older approach used by IELTS and TOEFL, where the test-taker can look at a given section score and broadly figure out where the number came from, to the newer approach favored by the DET and PTE, which creates a score using a formula that isn’t public knowledge.  Again, though, I suspect that in a few years’ time all of the major testing firms will use the newer approach.