As promised, here are a few notes about the Versant by Pearson English Certificate.  By way of a disclaimer, some folks at Pearson gave me a voucher so I could take the test for free.

Likes:

  1. As I said in an earlier post, my favorite part is that the practice test accurately simulates the test-day experience, including the same UI and security checks.  Click through to my profile for 500 words about that.
  2. The UI is, generally, pretty decent.  I like the absence of tense beeps and tones, and I appreciate that the user has some control over the flow of the test via buttons that move things along as needed.  There is a bouncing “spectrogram” (probably not the right word) that indicates the test is picking up audio.
  3. The proctoring is asynchronous.  I know this generates a lot of discussion whenever I bring it up, but I think the whole high-stakes English testing industry will go this route in the future.  It is probably for the best.
  4. There are some genuinely challenging questions here.  The reading section required me to make some tricky inferences.  The “integrated speaking” question, which requires test-takers to listen to (and later summarize) a conversation between three speakers, is hard to do well without note-taking.  I’d love to see this sort of thing on other tests.
  5. It uses the “sign in with Google” service.  Every test maker should provide this option; the cost of implementing it will be recouped through reduced customer support costs.  I promise.
  6. Test-takers get a Credly badge they can easily share on social media.  Other test makers should provide something like this, if only for the free advertising.

Dislikes:

  1. Some of the test security prompts were clunky.  I received a prompt indicating that I had two microphones on my system and that this was not allowed, but it did not indicate which microphones it had detected.  I switched off my Bluetooth headset, but there was no confirmation that I had solved the issue; I just proceeded and hoped for the best.  More detailed feedback would be welcome here, and the need for it is especially urgent on tests without a live human proctor.
  2. I received a score of “below level” in writing on the practice test.  I think this is likely due to the vagaries of fully automated AI scoring rather than to any shortcomings in my writing.
  3. As with most modern tests, the score report is something of a black box.  I got an 83 in speaking and an 83 in writing.  Where did those numbers come from?  How were the various tasks weighted and considered?  That isn’t really indicated.  The same goes for reading and listening, of course.  I still prefer the older approach used by IELTS and TOEFL, where the test-taker can look at a given section score and broadly figure out where the number came from, to the newer approach favored by the DET and PTE, which produces a score using a formula that isn’t public knowledge.  Again, though, I suspect that in a few years’ time all of the major testing firms will use the newer approach.