ETS has announced the new format for the TOEFL iBT. Below is a detailed rundown of what the test will contain starting January 21, 2026. Notably, the test will no longer contain integrated questions, nor will it contain an essay task. As has been noted, this format is extremely similar to the existing TOEFL Essentials Test. For all the details, start reading here.

Update:  If you are a publisher or test prep company and need some help adjusting to the changes, feel free to reach out:  mgoodine@gmail.com


Reading Tasks (up to 27 minutes)

  1. Complete the Words. This is a “fill in the missing letters” task, like the one on the Duolingo English Test. Test takers see a paragraph from an academic article in which the second and third sentences contain words whose second half is missing. They must deduce the missing letters.
  2. Read in Daily Life. Test takers read non-academic texts between 15 and 150 words like an email, a text message chain, a memo, a poster, a menu, an invoice, etc. Then they answer multiple-choice questions about them.
  3. Read an Academic Text. This is a roughly 200-word academic text followed by five multiple-choice questions.

The test will include multiple of each task, for 35 to 48 questions in total. Just note that each task contains multiple questions (for example, each “Complete the Words” task contains ten questions).

Listening Tasks (up to 27 minutes)

  1. Listen and Choose a Response. Test takers hear a single sentence and choose the correct response from among four choices. This is a test of pragmatics, as the correct response is often indirect rather than a literal answer.
  2. Listen to a Conversation. Test takers hear a short conversation (ten turns in the sample) and answer multiple-choice questions about it. Topics include everyday life and campus life situations.
  3. Listen to an Announcement. Test takers listen to a campus or classroom announcement and answer multiple-choice questions about it.
  4. Listen to an Academic Talk. Test takers listen to a short lecture (100 to 250 words) and answer multiple-choice questions about it.

The test will include multiple of each task. There will be 35 to 45 questions in total.

Writing Tasks (23 minutes)

  1. Build a Sentence. Test takers unscramble mixed-up sentences. The sentences are part of an exchange between students.
  2. Write an email. Test takers have seven minutes to write an email regarding a specific scenario.
  3. Writing for an Academic Discussion. This task is the same as on the current TOEFL: test takers have ten minutes to write a post on a class discussion board.


Speaking Tasks (8 minutes)

  1. Listen and Repeat. Test takers listen and repeat seven sentences about a campus or daily life topic.
  2. Take an Interview. Test takers are asked four questions about a given topic and have 45 seconds to answer each one. No preparation time is provided. The questions get progressively harder.

The whole test will take up to 85 minutes to complete.

Note that the reading and listening sections will each be split into two modules, and both sections will be adaptive. The first reading module and the first listening module will be the same difficulty for everyone. Based on their performance in those modules, test takers will get a second module that is “hard” or “easy.”

The “hard” modules will emphasize academic content, while the “easy” modules will emphasize daily life content.

The writing and speaking sections are not adaptive.

ETS is being a little cagey with phrasing, but it appears possible that the revised test will be wholly scored by AI (which has been trained on human ratings). They note:

“The Speaking and Writing responses will be scored by the ETS proprietary AI scoring engine according to the criteria outlined in the scoring guides. These engines integrate the most advanced natural language processing (NLP) techniques, combining cutting edge research with extensive operational expertise for enhanced performance.”

And:

“Human rating remains a critical component of the overall scoring process of TOEFL’s Writing and Speaking tasks because the automated scoring engines are trained on human ratings. Human ratings not only set the standard for machine learning but also provide oversight to ensure the accuracy and reliability of our scoring.”
