Computer-based testing (CBT) is an efficient way to provide a secure, consistent environment for certification and licensure while significantly enhancing the candidate experience. It is common for testing volumes to increase after a full conversion from pen and paper to CBT, often as a result of the availability of a greater number of testing locations and more flexible scheduling.

CBT comes with a host of benefits, so it’s no wonder that many test providers and awarding bodies are already using it or looking to move to it in the future. In this post, we attempt to answer your most burning questions about computer-based testing.

Does computer-based testing affect students’ performance?

The simple answer? No.

Recent studies have revealed that computer familiarity and attitudes towards computers had no significant influence on students’ performance when a test was computerised. Additionally, participants in the studies indicated that they preferred the test features presented on a computer to those of pen-and-paper assessments.

Can only multiple choice questions be used to assess students?

Multiple-choice questions are currently the most commonly used question type in CBT, but this is changing. Multimedia responses, long-form answers and coursework can also be marked via CBT. It all depends on the type of computer-based testing software you choose and how quickly results need to be evaluated.

How are the questions presented in CBT?

The types of questions included in paper-based exams and CBT will usually be similar, but the way that each question is presented, and the way the answer is recorded, often differs between the two methods. In CBT, candidates record their answers on the same screen as the question is shown; for paper-based multiple-choice questions, answers are recorded in a separate candidate answer booklet.

In CBT you are presented with one question at a time, whereas in a paper-based exam you can see all of the questions at once. Research tells us that the way we read information presented on a computer is different to how we read on paper: on screen, our eyes tend to jump around rather than read systematically as we would a printed page, so having only one question on the screen at a time helps candidates focus on the task at hand.

Who marks computer-based tests?

This depends on the question types being marked. Standard multiple-choice questions with clear right and wrong answers can be marked automatically by a marking system, and results can be calculated immediately. For unstructured questions, e.g. in high-stakes educational or professional exams, human examiners are used, aided by automated processes such as the tallying of marks across a script.
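
Because each multiple-choice item has a single pre-defined correct option, this kind of automatic marking reduces to comparing responses against an answer key and summing the matches. The short Python sketch below illustrates the idea; the function name, question IDs and scoring rules are illustrative assumptions rather than details of any real CBT marking system.

```python
# A minimal sketch of automatic multiple-choice marking, assuming a simple
# answer-key lookup; the question IDs, options and candidate data below are
# illustrative and not taken from any particular CBT platform.

def mark_multiple_choice(answer_key: dict, responses: dict) -> dict:
    """Score one candidate's responses against the answer key."""
    per_question = {}
    for question_id, correct_option in answer_key.items():
        chosen = responses.get(question_id)      # None if left unanswered
        per_question[question_id] = (chosen == correct_option)
    return {
        "per_question": per_question,
        "total": sum(per_question.values()),
        "out_of": len(answer_key),
    }

if __name__ == "__main__":
    key = {"Q1": "B", "Q2": "D", "Q3": "A"}
    candidate = {"Q1": "B", "Q2": "C"}            # Q3 left unanswered
    print(mark_multiple_choice(key, candidate))
    # {'per_question': {'Q1': True, 'Q2': False, 'Q3': False}, 'total': 1, 'out_of': 3}
```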

Automated technologies are increasingly useful in monitoring and interpreting data, enabling awarding bodies to moderate exam scripts across a pool of examiners or to identify trends that can inform improvements to exams.
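
As a rough illustration of the kind of check such monitoring might involve (not the moderation process of any particular awarding body), the sketch below flags examiners whose average awarded mark drifts well away from the rest of the pool; the examiner IDs, marks and tolerance are assumed values.

```python
# An illustrative sketch of automated monitoring across a pool of examiners:
# flag anyone whose mean awarded mark sits far from the median examiner mean.
# The tolerance and the data are assumptions for illustration only.
from statistics import mean, median

def flag_outlier_examiners(marks_by_examiner: dict, tolerance: float = 10.0) -> list:
    """Return examiner IDs whose mean awarded mark deviates from the
    median examiner mean by more than `tolerance` marks."""
    examiner_means = {ex: mean(marks) for ex, marks in marks_by_examiner.items()}
    centre = median(examiner_means.values())
    return [ex for ex, m in examiner_means.items() if abs(m - centre) > tolerance]

if __name__ == "__main__":
    marks = {
        "EX01": [62, 58, 65, 60],
        "EX02": [61, 63, 59, 64],
        "EX03": [80, 84, 79, 83],   # noticeably more generous marker
    }
    print(flag_outlier_examiners(marks))   # ['EX03']
```

A real moderation workflow would be considerably richer than this, but the principle of surfacing statistical outliers for human review is the same.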

Is it possible to retain assessment quality when migrating to computer-based testing?

One of the biggest challenges faced by awarding bodies transitioning to CBT is ensuring that the integrity of the exams does not decrease, and that standards remain at least as high as when the tests were administered on paper.

Many awarding bodies transitioning from print to digital exams choose to migrate their existing paper exam specification in the first instance, providing the opportunity to later consider how digital could further enhance their testing programme.

Download this case study to discover how the Association of Chartered Certified Accountants (ACCA), with the help of RM Results, moved its exams from a paper-based delivery system to a computer-based format without losing the integrity of its assessments.