There is an excellent new article on the IELTS website which notes that “[r]esearch shows that the average minimum entry score is an IELTS band 6.6 across countries, below our expert’s recommended levels.” It says “[w]hen minimum English language scores are set too low, the consequences can be far-reaching.”

We sometimes hear university staff complain that new international students don’t have the linguistic competency needed to excel in their new environment. But what do they expect when their institution requires only an IELTS score of 6.5? Or even a score of 7.5?

The people setting cut scores at universities aren’t wholly to blame when those scores are poorly set. It’s a tough thing to do, and they may not have enough information to work with. The aforementioned article links to the IELTS writing band descriptors. But while those are detailed, their usefulness is somewhat limited. For instance, here’s what they say about the grammar in a band seven essay:

“A variety of complex structures is used with some flexibility and accuracy. Grammar and punctuation are generally well controlled, and error-free sentences are frequent. A few errors in grammar may persist, but these do not impede communication.”

It’s a start, I guess.

If readers of the article dig two links deeper, they’ll find some sample essays with band scores. But there are only a dozen of them. And, curiously, they are all from the paper edition of the test and barely legible.

As scrutiny of tests increases, it is more imperative than ever that test makers provide access to whatever it is that test takers produce during the test. Technology has caught up with the needs of decision makers, and it isn’t terribly hard to let university staff read all of the IELTS essays produced by all of their applicants. They probably won’t use this information to make individual admissions decisions, but it will give them crucial information about how to set the best possible cut scores. They’ll finally know, perhaps, what an applicant who has submitted a score of 6.5 is capable of.

A handful of ancient samples from the test maker can only go so far. Scores can sometimes drift, inter-rater agreement isn’t always fantastic, the scoring criteria interact in curious ways that only become apparent when you’ve got a bunch of responses to look at… and there is always the niggling fear that the samples have been graded by item writers or test developers rather than actual raters. So… the more the better. And instead of talking about providing “more” essays, we should probably be talking about providing all the essays.

And we haven’t even talked about the speaking section.

It is worth mentioning that the TOEFL and Duolingo tests have provided sample responses along with every score report for ages. Other tests have as well.
