Meanwhile, here is the updated version of my video guide to the second TOEFL speaking question:
It contains a new question, which I created myself. You can check out the website version of the guide over here.
Category: Speaking
Hey, I’ve been uploading a bunch of stuff to the YouTube channel without really mentioning it here. One of the more popular videos is the 2021 version of my guide to the independent speaking task. Check it out!
If you are taking the TOEFL Home Edition, make sure to check your microphone. Don’t just use the ProctorU website test, but actually make a recording and listen to it.
I often get sample answers from students that sound horrible. They sound like they were recorded on one of Thomas Edison’s wax cylinder phonographs. I can barely understand what they are saying. The worst part is that the TOEFL raters will face the same challenge! This could affect your score… or result in a score hold.
Internal microphones (like in your laptop) are often terrible. If yours is bad, consider getting an external microphone.
Just remember that you cannot use a headset microphone. Nothing can cover your ears during the test. Therefore, you should use either an internal laptop microphone or one that sits on your desk.
I’m not a microphone expert, but I really like the Samson SAGOMIC Go Mic. It is pretty cheap, and I use it regularly in my life. It makes clear recordings.
My favorite “expensive” microphone is the Blue Yeti Nano.
The other day, someone asked:
I’ve got twelve months to prepare for the TOEFL, and I need 100 points. What should I do?
The good news for that student is that they have time to really improve their English fluency instead of just learning TOEFL tricks and strategies. I know it sounds crazy, but the best way to increase your TOEFL score is to become more fluent in English.
Here’s how I responded:
And, yes, along the way you should devote some time to becoming familiar with the test. Read the Official Guide cover to cover (a few times). Read some of the guides on this website and watch some YouTube videos. Review sample writing and speaking responses. Just don’t get bogged down in “strategies” if the test is still a year away.
Today I want to talk a little bit about increasing your TOEFL speaking score by giving persuasive rather than descriptive responses in TOEFL speaking question one.
Descriptive responses merely describe something, while persuasive responses try to persuade the grader that your argument is a good one.
Note that since you have so little time to speak in this response (just 45 seconds!), the difference between a persuasive answer and a descriptive answer is quite small. But I think there is a real difference.
Here’s what I mean.
Imagine you’ve been asked if you prefer taking online classes or in-person classes and you’ve picked online classes. This supporting reason is descriptive:
“First, we can take online classes at any time. I am a mom and the best time for me to study is at night, and in-person classes are usually during the day. Moreover, I can take a class at night while watching my kids.”
This is descriptive, as I’m merely describing some of the features of online classes. The grader might be left wondering: so what? Why are these good things?
In comparison, here is a persuasive reason:
“First, we can take online classes at any time. I am a mom and the best time for me to study is at night, and in-person classes are usually during the day. Moreover, I can take a class at night while watching my kids. This flexibility allows busy parents to improve their lives by getting university degrees.”
That is a bit more persuasive. It describes what an online class is, but also mentions why those features matter. Hopefully I’ve persuaded the grader that the things I’ve mentioned are important. As you can see, it is possible to turn a descriptive reason into a persuasive reason just by adding a universal long-term benefit, as I did here.
This is part of what the speaking rubric means when it talks about a “clear progression of ideas,” I believe.
I think there are a few things to mention about this strategy:
Here’s a mildly interesting article about student responses to speaking question three. The authors have charted out the structure of two sample questions provided by ETS, and tracked how many of the main ideas students of various levels included in their answers (again, provided by ETS).
There is some good stuff in here for TOEFL teachers, particularly in how the authors map out the progression of “idea units” in the source materials. They identified how test-takers of various levels represented these idea units in their answers, particularly how many of them they included. Fluent speakers (or, I guess, proficient test-takers) represented more of the idea units, and also presented them in about the same order as in the sources.
Something I found quite striking is that one of the question sets studied was much easier than the other, something the report’s authors themselves describe. I am left wondering how ETS deals with this sort of thing. The rubric doesn’t really have room to adjust for question difficulty changing week by week.
There is also a podcast interview with one of the authors.
Update from April 2021: The app was removed from the Play Store. I don’t know what’s up with that.
Earlier this month, ETS quietly released a new language learning app to the Google Play Store and the App Store. It’s called ELAI. It seems to use their “SpeechRater” technology to grade sample speaking responses recorded using the app. This makes it a very valuable tool for TOEFL prep, since student answers on the TOEFL test are partially graded by that particular technology.
Of course the app isn’t specifically designed for TOEFL prep, so it won’t give you actual TOEFL scores, but it will give you feedback based on word repetition, vocabulary level, pauses and filler words. It will also tell you your words per minute.
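To give a sense of the kind of feedback the app reports, here is a rough sketch of my own (not ETS code, and certainly not how ELAI actually works internally) showing how metrics like words per minute, filler-word counts, and repeated words can be computed from a transcript. The filler list is a hypothetical example:

```python
# My own illustration of ELAI-style feedback metrics: words per minute,
# filler-word count, and frequently repeated words. Not ETS code.
from collections import Counter

FILLERS = {"um", "uh", "like"}  # hypothetical filler-word list

def speaking_metrics(transcript: str, seconds: float) -> dict:
    """Compute simple delivery metrics from a transcript and its duration."""
    words = transcript.lower().split()
    counts = Counter(words)
    return {
        "words_per_minute": round(len(words) / (seconds / 60), 1),
        "filler_count": sum(counts[f] for f in FILLERS),
        "repeated_words": [w for w, c in counts.items() if c >= 3],
    }

answer = "I think um online classes are great because um classes are flexible"
print(speaking_metrics(answer, seconds=6.0))
```

A 45-second independent-task answer typically lands somewhere around 100–150 words, so a words-per-minute figure like the one above is easy to sanity-check against your own recordings.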
There are some sample questions that look like TOEFL questions and some that don’t. You decide how long you want to speak in your answer, so you can easily stop after 45 seconds to simulate the test. I suppose you could ignore the given questions entirely, record an answer to a question you’ve gotten elsewhere, and still get valuable feedback.
Note, though, that this seems to be in a sort of beta test. This means it isn’t available in all countries and it isn’t available for all devices. Don’t complain if you can’t download it.
Here are the links:
If you are able to try it out, leave a comment down below.
Note: This website is not endorsed by ETS.
If you are going to take the at-home version make sure to TEST YOUR MICROPHONE. And I don’t mean just using the ProctorU website. I mean making a whole lot of test recordings. And actually listening to them carefully.
I can’t prove it, but I think a lot of students are getting low speaking scores (and cancelled scores) because of bad microphones.
Moreover, I can state that about 50% of the recordings that students make at home and send to me for evaluation sound like garbage, as if they were made on one of Thomas Edison’s wax cylinders.
Today I want to write a few words about an interesting new (December 2019) text from ETS. “Automated Speaking Assessment” is the first book-length study of SpeechRater, which is the organization’s automated speaking assessment technology. That makes it an extremely valuable resource for those of us who are interested in the TOEFL and how our students are assessed. There is little in here that will make someone a better TOEFL teacher, but many readers will appreciate how it demystifies the changes to the TOEFL speaking section that were implemented in August of 2019 (that is, when the SpeechRater was put into use on the test).
I highly recommend that TOEFL teachers dive into chapter five of the book, which discusses the scoring models used in the development of SpeechRater. Check out chapter four as well, which discusses how recorded input from students is converted into something that can actually be graded.
Chapters six, seven and eight will be the most useful for teachers. These discuss, in turn: features measuring fluency and pronunciation, features measuring vocabulary and grammar, and features measuring content and discourse coherence. Experienced teachers will recognize that these three categories are quite similar to the published scoring rubrics for the TOEFL speaking section.
In chapter six readers will learn how the SpeechRater measures a student’s fluency by counting silences and disfluencies. They will also learn how it handles speed, chunking and self-corrections. These are things that could influence how teachers prepare students for this section of the test, though I suspect that most teachers don’t need a book to tell them that silences in the middle of an answer are a bad idea. There is also a detailed depiction of how the technology judges pronunciation, though that section was a bit too academic for me to grasp.
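To make the counting idea concrete, here is a toy sketch of my own (the real SpeechRater pipeline is far more sophisticated and works on actual speech recognition output, not raw amplitudes): flag any long run of low-amplitude samples as a “silence.” The threshold and run-length values are arbitrary assumptions for illustration:

```python
# Toy illustration of silence counting (my own sketch, not SpeechRater):
# a "silence" is a run of at least `min_run` consecutive samples whose
# amplitude stays below `threshold`.

def count_silences(samples, threshold=0.02, min_run=5):
    """Count low-amplitude runs of at least `min_run` samples."""
    silences = 0
    run = 0
    for s in samples:
        if abs(s) < threshold:
            run += 1
        else:
            if run >= min_run:
                silences += 1
            run = 0  # speech resumed; reset the quiet-run counter
    if run >= min_run:  # handle a silence at the very end
        silences += 1
    return silences

# two long quiet stretches separated by a burst of speech
audio = [0.0] * 6 + [0.5, 0.4, 0.6] + [0.01] * 8 + [0.3]
print(count_silences(audio))  # → 2
```

Even this toy version makes the pedagogical point of the chapter obvious: every mid-answer pause you eliminate is one fewer silence for the counter to find.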
Chapter seven discusses the grammar and vocabulary features that SpeechRater checks for. Impressively, the book simply lists them all in one place. A diligent teacher might turn that into a checklist to give to students. Finally, chapter eight discusses how the software assesses topic development in student answers.
Sadly, this book was finished just before ETS started using automated speaking scoring on high-stakes assessment. Chapter nine discusses how the technology is used to grade TOEFL practice tests (low-stakes testing), but nothing is mentioned about its use on the actual TOEFL. I would really love to hear more about that, particularly its ongoing relationship with the human raters who grade the same responses.
This week I was lucky enough to again have an opportunity to attend a workshop hosted by ETS for TOEFL teachers. Here is a quick summary of some of the questions that were asked by attendees of the workshop. Note that the answers are not direct quotes, unless indicated.
Q: Are scores adjusted statistically for difficulty each time the test is given?
A: Yes. This means that there is no direct conversion from raw to scaled scores in the reading and listening section. The conversion depends on the performance of all students that week.
Q: Do all the individual reading and listening questions have equal weight?
A: Yes.
Q: When will new editions of the Official Guide and Official iBT Test books be published?
A: There is no timeline.
Q: Are accents from outside of North America now used when the question directions are given on the test?
A: Yes.
Q: How are the scores from the human raters and the SpeechRater combined?
A: “Human scores and machine scores are optimally weighted to produce raw scores.” This means ETS isn’t really going to answer this question.
Q: Can the human rater override the SpeechRater if they disagree with its score?
A: Yes.
Q: How many different human raters will judge a single student’s speaking section?
A: Each question will be judged by a different human.
Q: Will students get a penalty for using the same templates as many other students?
A: Templates “are not a problem at all.”
Q: Why were the question-specific levels removed from the score reports?
A: That information was deemed unnecessary.
Q: Is there a “maximum” word count in the writing section?
A: No.
Q: Is it always okay to pick more than one choice in multiple choice writing prompts?
A: Yes.
At the 2019 TOEFL iBT Seminar in Seoul on September 5, ETS announced details of the new “Enhanced Speaking Scoring” for the TOEFL, which has actually been in place since August 1, 2019.
In the past, speaking responses were graded by two human graders. Now, however, speaking responses are graded by one human grader along with the SpeechRater software. This software is a sort of AI that can evaluate human speech, and has been used by ETS for various tasks since about 2008. Most notably, it provided score estimates for the “TOEFL Practice Online” tests they sell to students.
According to ETS:
“From August 1, 2019, all TOEFL iBT Speaking responses are rated by both a human rater and the SpeechRater scoring engine.”
They also note:
“Human raters evaluate content, meaning, and language in a holistic manner. Automated scoring by the SpeechRater service evaluates linguistic features in an analytic manner.”
To elaborate (and this is not a quote), ETS indicated that the human scorer will check for meaning, content and language use, while the SpeechRater will check pronunciation, accent and intonation.
It is presently unknown how the human and computer scores will be combined into a single overall score, but looking at the speaking rubric could provide a few hints. Note that in the past the human raters would assess three categories of equal weight: delivery, language use, and topic development. If the above information is accurate, the SpeechRater now assesses delivery, while the human assesses language use and topic development. It is possible, then, that the SpeechRater provides 1/3 of the score, and that the human rater provides the other 2/3.
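If that 1/3–2/3 split is right (and, to be clear, this is my speculation, not anything ETS has confirmed), combining the two scores would look something like this simple weighted average:

```python
# Speculative sketch of how the two scores *might* be combined, assuming
# the SpeechRater covers delivery (1/3 weight) and the human rater covers
# language use and topic development (2/3 weight). ETS has not confirmed
# these weights.

def combined_raw_score(speechrater: float, human: float) -> float:
    """Each input is a 0-4 rubric score; returns a weighted 0-4 raw score."""
    return (1 / 3) * speechrater + (2 / 3) * human

# e.g. a machine delivery score of 3.0 with a human score of 4.0
print(round(combined_raw_score(speechrater=3.0, human=4.0), 2))  # → 3.67
```

The four raw question scores would then still need to be averaged and converted to the familiar 0–30 scaled section score, a step this sketch leaves out.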
I will provide more information as I get it. In the meantime, check out the following video for more news and speculation.
There are six things you can do right away to improve your TOEFL speaking score:
You are probably reading this blog post because you sent me a message asking “how can I increase my TOEFL score?” That is a hard question to answer if I haven’t ever heard you speak, but I will talk about each of the above strategies one at a time.
It is important to know that ETS designs the four speaking questions the same way every week. There are really just a few minor variations that you might face. Learning about these designs is the first thing you need to do as you prepare for the TOEFL, as it will make your job on test day a lot easier.
Do this by checking out my playlist on the 2019 version of the TOEFL speaking section. Studying these videos might improve your performance in the “topic development” section of the scoring rubric (see below).
Templates can be a controversial topic in the TOEFL world, but if you are struggling to put together your answers they can really help you. You can find some templates for each of the questions on my site. Note that if you have a good teacher you might not need any templates.
You should understand that each of your answers will be given a score in three categories of equal value. Read about them by consulting the TOEFL Speaking Rubric.
For a more detailed look at how you will be graded, watch the following video.
Update: Since August 1, 2019, the SpeechRater software has been used to judge the delivery of student answers. You can read about this right here.
You absolutely need to practice with some accurate speaking questions. Answer as many as you can, and record your answers so you can review them. Here’s what I recommend:
I don’t recommend using:
Delivery counts for one third of your score so you should try to improve your accent, pronunciation and intonation as much as possible.
Sadly, this is hard to do on your own. A teacher can help (see below), but you might also benefit from activities like repetition, shadowing and chorusing. A fun resource for this is PlayPhrase.me. That site should be easy enough to figure out – just click on the play button and repeat the same phrase until you run out of clips. I believe that repeating the same phrase a few dozen times is a good way to reduce the presence of your native accent and to improve your overall pronunciation. This might improve your performance in the “delivery” section of the rubric.
If you want to get some free feedback on your delivery, I recommend joining the 30 Day Speaking Challenge from Huggins International.
If you really want to improve your score, you should hire a tutor to work with you one on one. They will be able to help you improve your score in all three sections of the rubric. I recommend the following experts:
Mention that you were referred by Michael at “TOEFL Resources” for preferential treatment (maybe).