
Perspective Chapter: Paradigm Shift on Student Assessment Due to COVID-19 Pandemic at Malaysian Medical Schools

Written By

Siti Khadijah Adam

Submitted: 12 December 2022 Reviewed: 15 December 2022 Published: 11 January 2023

DOI: 10.5772/intechopen.109555

From the Edited Volume

Higher Education - Reflections From the Field - Volume 4

Edited by Lee Waller and Sharon Kay Waller


Abstract

The COVID-19 pandemic has caused disruption to normal face-to-face teaching and learning activities and assessments in medical schools globally. One of the challenges that many medical schools faced was conducting a valid, reliable, secure, and fair online assessment. This chapter introduces the principles of assessment in medical education and the transition to online examinations at several medical schools in Malaysia during the pandemic. Post-pandemic, the new normal for medical education implies using technologies for online learning and conducting assessments remotely, to enhance flexibility, efficiency and cost-effectiveness. Several strategies to ensure the proper organisation of online assessment in medical programmes without compromising its validity and security are described in detail.

Keywords

  • COVID-19
  • pandemic
  • medical school
  • student assessment
  • online assessment
  • Malaysia

1. Introduction

The COVID-19 pandemic that struck in 2019 had varying effects on different sectors and industries. It affected educational systems worldwide, resulting in the almost complete closure of higher education institutions. Students and teachers were compelled to adjust immediately and switch to online teaching and learning activities. Universities also had to allow some flexibility in conducting examinations to eliminate in-person physical interaction.

Two years after the outbreak, all educational institutions are open again. However, there is no denying that not everything is going back to how it used to be. The ‘new normal’ has pushed us towards digital education, involving a hybrid model that combines face-to-face and virtual activities. Adapting to new technologies seems obligatory and has become part of the daily routine for educational systems globally. We can observe that online learning is on the rise and that assessments can be conducted remotely.

With regard to medical education, e-learning helps students adjust and adapt to an online medical environment. Yet it limits students’ interpersonal contact with patients and their opportunities for clinical practice and professional development. Nevertheless, the pandemic has brought the new insight that medical teaching and learning, as well as student assessment, can be conducted virtually. At Universiti Putra Malaysia, conducting online assessments during the pandemic made us realise the necessity of continuing to maximise the use of technology. The transition from the traditional method of assessment and the paradigmatic shifts are discussed further below.


2. Student assessment in medical education

Assessment is the process of documenting the level of a learner’s knowledge, skills and attitude, and its purpose is to make judgements and decisions about a student’s learning against a certain standard or benchmark [1]. Assessment can be classified as formative or summative. Formative assessment, also known as ‘assessment for learning’, is an ongoing process that aims to monitor students’ learning. It is usually low stakes and conducted informally in class. This assessment is a powerful diagnostic tool that lets students pinpoint which areas they have mastered and which remain weak, so that they can concentrate their efforts on the weak areas moving forward. Constructive feedback on students’ strengths and weaknesses is the cornerstone of formative assessment, shaping and improving future learning. In many cases, educators modify their instructional materials and clarify content to ensure students achieve the expected learning outcomes. Examples of formative assessment include short quizzes during class, direct observation of procedural skills (DOPS) and the mini-clinical evaluation exercise (mini-CEX).

Summative assessment, on the other hand, known as ‘assessment of learning’, takes place at the end of a course of study and is usually high stakes. Its purpose is to provide an accurate pass-or-fail decision about students and a final measure of student performance. In health professions education, summative assessment determines whether students have met the minimum standards for progression, graduation and licensure, assuring that the public is protected from incompetent practitioners. Concurrently, medical educators may obtain feedback on the appropriateness of learning outcomes and the effectiveness of instruction based on post-assessment analysis [2]. Examples of summative assessments include those that occur at the end of a course, semester or year, or before newly graduated doctors can begin to practise medicine professionally.

Medical students must acquire and demonstrate various domains of competency throughout the training. However, there is no single method of assessment that can adequately evaluate their performance across all domains. Each assessment method has its own advantages and disadvantages. Therefore, a variety of assessment methods are required to ensure that students achieve all required competencies before graduation.

More than 30 years ago, psychologist George Miller proposed a hierarchical framework for assessing clinical competence [3]. It is a valuable model showing the levels of knowledge and skills assessed in medical education. The iconic Miller’s pyramid distinguishes between the assessment of cognition and of behaviour in practice (Figure 1). The base of the pyramid is knowledge (‘knows’), followed by the application of knowledge (‘knows how’). Acquiring medical knowledge is the essential precursor to clinical problem-solving. The ‘knows’ level can be assessed by written assessments such as multiple-choice questions (MCQs), while ‘knows how’ adds a level of complexity to the cognitive scheme: students need to apply their knowledge, manipulate the information and demonstrate an understanding of the relationship between concepts and applications [1]. Appropriate assessment methods include higher-order MCQs, essays and viva or oral exams.

Figure 1.

Miller’s pyramid [3].

The third level of the pyramid moves to performance assessment and represents clinical skills competency, usually assessed in a controlled environment (‘shows how’). These assessments are typically simulated and standardised. The objective structured clinical examination (OSCE) is one example, in which students may demonstrate clinical skills such as communicating with or performing a physical examination on a simulated patient. Finally, the top of the pyramid is clinical performance, assessed by direct observation in authentic clinical settings (‘does’). Examples include workplace-based assessments such as the mini-CEX or DOPS, where students demonstrate clinical performance with actual patients by integrating their knowledge, skills and abilities in the real-world clinical setting.

Miller’s pyramid is frequently used with other taxonomy frameworks such as Bloom’s revised taxonomy. Bloom’s taxonomy encompasses six levels of the cognitive domain, from the lowest level, remembering information, up through successively more complex higher-order levels, culminating in creating [4]. The taxonomy is useful when deciding on expected cognitive outcomes and constructing written assessment items.

In selecting appropriate assessment methods for a programme, the purpose of the assessment should be considered. Is it for formative or summative purposes? As stated above, different levels of clinical competence are required. Do the chosen assessment methods cover all levels of clinical competence in Miller’s pyramid? Are they adequate?

In 1996, van der Vleuten proposed a conceptual model for defining the utility of an assessment method [5]. The model involves five criteria: reliability (does it measure consistently?), validity (does it measure what it is purported to?), educational impact (how does it affect teaching and learning?), acceptability (is it acceptable to relevant stakeholders?) and cost (is it practical and feasible?). Using this model, the utility of an assessment method can be derived by conceptually multiplying the weights of all the criteria.

Assessment utility = reliability × validity × educational impact × acceptability × cost (E1)

It is important to note that this is not a mathematical formula, but a notional one. The weight of each criterion depends on the purpose of the assessment. For formative purposes, more weight is given to educational impact while for summative purposes, more weight is given to reliability [6].
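To make the notional multiplication concrete, the following is a minimal Python sketch of the model; all numeric weights are invented for illustration, since the model itself assigns no actual values. One property the multiplicative form makes obvious is that utility collapses to zero if any single criterion is absent.

```python
# A minimal sketch of van der Vleuten's notional utility model.
# All numeric weights below are invented for illustration; the model
# is conceptual and does not prescribe real values.

def assessment_utility(reliability, validity, educational_impact,
                       acceptability, cost):
    """Multiply the five criteria to obtain a notional utility score."""
    return reliability * validity * educational_impact * acceptability * cost

# For formative purposes, educational impact is weighted more heavily;
# for summative purposes, reliability dominates.
formative = assessment_utility(reliability=0.6, validity=0.8,
                               educational_impact=0.9, acceptability=0.8,
                               cost=0.7)
summative = assessment_utility(reliability=0.9, validity=0.8,
                               educational_impact=0.6, acceptability=0.8,
                               cost=0.7)
print(f"Formative utility: {formative:.3f}")
print(f"Summative utility: {summative:.3f}")
```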

Later, Norcini et al. published a consensus statement identifying seven criteria of a good assessment. Five were derived from van der Vleuten’s model, while the other two are equivalence (does it produce similar results in different groups?) and catalytic effect (does it create, enhance and support education?) [7]. It is therefore essential to evaluate these criteria when selecting appropriate and suitable assessment methods or tools for the programme.

2.1 Assessment of knowledge acquisition and application

Written assessments are widely used in medical education to assess knowledge acquisition, comprehension of basic principles and clinical reasoning. Although these skills sit at the base of Miller’s pyramid (‘knows’ and ‘knows how’), they form the foundational set of skills that students must master before achieving clinical competence. Written assessments are inexpensive, convenient and produce reliable scores. The types commonly used for medical students are covered in the next sections.

2.1.1 Multiple choice questions (MCQs)

The MCQ is arguably the most popular assessment method globally because of its validity, reliability and practicality. A-type MCQs require examinees to select the one best answer from several options; they are also known as single-best answer questions (SBAQs) or one-best answer (OBA) questions. Each question consists of a stem, which can be a clinical or non-clinical vignette, a lead-in statement and three or more answer options.

The R-type MCQs, also called extended matching items (EMIs) or extended matching questions (EMQs), are an extended version of the A-type format. A set of EMIs has a theme, a list of options (from seven to 20), a lead-in statement and a minimum of two items or vignettes. All items should be relevant to the theme, and for each item examinees choose the correct answer from the list of options. Both A-type and R-type MCQs can be used to assess the theory and application of knowledge, critical thinking and problem-solving skills.

Multiple true-false (MTF) questions are becoming less popular among medical schools. They are normally used to test factual recall. However, this format can cover more of a topic’s breadth, which makes it suitable for formative assessment. Each item consists of a stem followed by five related statements, and for each statement examinees select either true or false. In a pen-and-paper examination, optical mark recognition (OMR) sheets are used by the examinees to mark their answers. The sheets are analysed by an OMR machine and the scores can be obtained instantly. Certain OMR machines can also concurrently evaluate the quality of the questions based on the examinees’ responses and scores.
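As a rough illustration of the automated, statement-by-statement scoring an OMR machine performs for MTF items, here is a minimal Python sketch; the answer key and student responses are invented for the example.

```python
# A minimal sketch of automated MTF scoring, in the spirit of what an
# OMR machine does. The answer key and responses are invented.

answer_key = {
    "Q1": [True, False, True, True, False],   # five statements per stem
}
student_responses = {
    "Q1": [True, False, False, True, False],
}

def score_mtf(key, responses):
    """Award one mark for each correctly judged statement."""
    total = 0
    for item, correct in key.items():
        answered = responses.get(item, [])
        total += sum(1 for c, a in zip(correct, answered) if c == a)
    return total

print(score_mtf(answer_key, student_responses))  # 4 of 5 statements correct
```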

2.1.2 Short answer questions (SAQs) and essay questions

This type of assessment consists of open-ended questions that require examinees to write brief answers (SAQs) or long answers (essays). They can assess either lower-order or higher-order thinking; usually, examinees are assessed on the application of knowledge (‘knows how’) and clinical reasoning. The disadvantage of both methods is that they must be marked manually by examiners, which can be resource intensive with a large number of examinees per cohort, particularly for essay questions. In certain cases, the answer scripts are marked by more than one examiner based on the answer scheme. Although this reduces each examiner’s workload, having multiple examiners per question may affect the reliability of the scores.

2.2 Assessment of clinical performance

2.2.1 Objective structured clinical examination (OSCE)

An OSCE consists of several structured stations in a circuit through which an examinee moves in sequence. The number of stations and the duration of each station can vary based on the complexity of the skills being assessed [8]. Each station allows examinees to demonstrate a specific clinical skill in a standardised medical scenario. It is usually conducted as a summative assessment. The OSCE is widely implemented because of its high validity and reliability across different cases and skills, and a large number of students can be assessed in the same way using multiple concurrent circuits. Standardised or simulated patients are commonly used so that examinees can interact with them to perform history taking, physical examination, counselling and other tasks. An examiner at each station observes and scores the examinees against a pre-determined checklist.

2.2.2 Long case and short case examinations

The long case is a traditional clinical examination that assesses student competence at the ‘shows how’ level of Miller’s pyramid. It requires a student to spend approximately an hour with a patient, taking a history and carrying out a physical examination, unobserved. The student then summarises the findings to one to three examiners and answers several questions, and the examiners score the student using unstructured marking criteria. Despite many concerns regarding its reliability [9], the long case remains popular due to its authenticity and its ability to assess the clinical approach holistically. To increase the validity and reliability of long cases, several modifications have been implemented, such as observing students while they interact with a patient, using a structured marking scheme and increasing the number of cases [10].

The short case, on the other hand, requires a student to spend about 5–10 minutes with a patient, examining the patient and detecting signs under observation. The student then needs to formulate a clinical or differential diagnosis. Similar to the long case, the student is scored against unstructured marking criteria. In many medical schools, the introduction of the OSCE has replaced long-case and short-case examinations, especially for high-stakes examinations.

2.2.3 Workplace-based assessment (WBA)

WBA encompasses a group of assessment methods that evaluate students’ performance in an actual clinical setting. Examples include the mini-clinical evaluation exercise (mini-CEX), direct observation of procedural skills (DOPS) and case-based discussion (CBD). These methods have high authenticity and sit at the tip of Miller’s pyramid (‘does’). They are usually conducted as formative assessments, with the main aim of aiding learning through feedback.

The mini-CEX expects a student to conduct a focused clinical task, such as history taking or physical examination, with an actual patient within a short, specified time. The performance is graded using a structured evaluation form and constructive feedback is provided. This assessment occurs on multiple occasions in daily practice, with different assessors and in different settings. DOPS is a variation on the mini-CEX that focuses mainly on procedural skills. It is specifically designed to evaluate practical skills, for example in surgical, medical or general practice, against pre-determined criteria, followed by a face-to-face feedback session.

CBD, on the other hand, is a focused discussion driven by an existing case the student has encountered. The discussion centres on what was done, why it was done and how any investigations and interventions were carried out. After the discussion, the assessor scores the quality of the performance and provides constructive feedback.


3. Impact of the pandemic on student assessment at Universiti Putra Malaysia

The Malaysian government imposed a Movement Control Order (MCO) in March 2020 as a result of the unprecedented situation. Consequently, the planned academic schedule and all teaching and learning activities in Malaysian higher education institutions were severely affected. All face-to-face activities were suspended and conducted remotely instead. Medical schools were given the flexibility to amend their teaching and learning activities and assessments based on guidelines published by the Malaysian Medical Council (MMC) and the Malaysian Qualifications Agency (MQA). This was to ensure that all assessments remained valid, reliable and fair without compromising the programme’s educational objectives.

Universiti Putra Malaysia (UPM) had its first online examination with a small number of preclinical students sitting a remedial examination [11]. This became the starting point for the faculty to make further improvements for subsequent examinations with larger cohorts. Owing to limitations of the in-house learning management system (LMS), we used commercially available platforms to deliver the theory and practical examination questions. A mock examination was conducted prior to the actual examination to provide hands-on experience for both faculty members and students, and amendments and improvements were made based on its findings and on feedback from faculty and students.

On the day of the examination, a video conferencing platform was used to proctor the students and to conduct the OSCE. The OSCE blueprint had been revised beforehand, as only history taking, data interpretation and communication skills could be assessed remotely. To facilitate coordination and ensure strict adherence to COVID-19 standard operating procedures (SOPs), all examiners and simulated patients (SPs) convened at the faculty, seated by station and at a suitable physical distance from one another. Through the video conferencing platform, the examiners could observe the students communicating with the SPs remotely (Figure 2). The online OSCE was made possible by moving the students through multiple breakout rooms that mimicked the physical stations of an OSCE circuit; the examiners and SPs remained in the same breakout room, and the instructions for students were shared on the computer screen. An example of an online circuit with four OSCE stations is presented in Figure 3. We employed several strategies to prevent cheating, including shuffling the questions and answer options, using a ‘lock-down’ browser and requiring students to sign an integrity form before the examination.

Figure 2.

An examiner is observing an examinee interacting with a simulated patient remotely in an online OSCE.

Figure 3.

An example of an OSCE circuit consisting of four stations conducted in breakout rooms in a video-conferencing platform.
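The rotation through breakout rooms like the one in Figure 3 can be planned ahead of time with a simple round-robin schedule. The sketch below is a minimal illustration, assuming four stations and one student per room per round; the station names and student IDs are invented for the example.

```python
# A minimal sketch of a round-robin rotation for an online OSCE circuit,
# mirroring the breakout-room layout in Figure 3. Station names and
# student IDs are invented for illustration.

stations = ["History taking", "Data interpretation",
            "Communication", "Counselling"]
students = ["S1", "S2", "S3", "S4"]  # one student per room per round

for round_no in range(len(stations)):
    print(f"Round {round_no + 1}:")
    for i, student in enumerate(students):
        # Each round, every student moves forward one station in the circuit.
        station = stations[(i + round_no) % len(stations)]
        print(f"  {student} -> breakout room: {station}")
```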

Later, we successfully conducted online examinations with several cohorts of students. We also modified the typical synchronous online OSCE into a written examination and a video OSCE [12]. However, these modifications still had limitations in assessing the intended clinical skills. For instance, we had to postpone our final-year examination because it was not feasible to conduct the clinical components of a high-stakes examination remotely.

When the government allowed medical students in clinical years to return to campus, the final examination was conducted in strict compliance with COVID-19 SOPs. All staff and students were required to be fully vaccinated to be on campus. Some amendments were made to the examination, including a prolonged duration and a reduced number of examiners for each long case and short case examination. The students also sat only two short case examinations instead of the usual three. Several discussions with the Dean, Deputy Dean, coordinators and the medical education unit were held to ensure that validity was not compromised. The amendments were also implemented in accordance with the MMC and MQA guidelines on medical education procedures during and after COVID-19.


4. Experience at other medical schools in Malaysia

The outbreak of COVID-19 changed perspectives on conducting assessments online and remotely. Medical schools in Malaysia had no choice but to innovate and adopt online methods to conduct examinations despite the lockdown. This was to avoid any delay in students’ graduation or progression to subsequent years.

The assessment approach at Universiti Sains Malaysia underwent a major revamp, with the final examination converted into continuous or online assessment [13]. Theory examinations and the OSCE were conducted online through their e-learning platform [14]. The unexpected circumstances led medical lecturers to embrace their e-learning portal more than before the pandemic.

Universiti Tunku Abdul Rahman (UTAR), a private medical school, conducted its Year 3 MBBS Professional Examination using Google Forms for the theory examination. Microsoft Teams was used to proctor 49 students during the examination and to run the OSCE. Students had to use two devices: one for accessing the exam questions and the other placed at a back corner to act as a video camera. To enhance integrity, students were required to show that no unauthorised materials were within their vicinity, and the examination was recorded and monitored by the examination department. They reported that the online examination was more efficient and required fewer resources than the traditional physical setting [15].

Meanwhile, Universiti Malaysia Sabah used its LMS to conduct the theory examination online through Google Meet. All theory examination formats were converted into MCQs, as this was more feasible and practical. Similar to UTAR, the students used two devices during the examination. The clinical examinations, meanwhile, were conducted asynchronously: students recorded videos of themselves performing procedures and examinations on family members, friends or manikins, received feedback and were allowed to resubmit until their performance was deemed satisfactory. The online examinations were used only for cohorts sitting low-stakes examinations, who would have opportunities to improve in the remaining years before graduation [16].

At Universiti Sains Islam Malaysia, the traditional long case and short case examinations were converted into a physical OSCE for the Final Professional Examination, conducted in strict adherence to COVID-19 SOPs. It was their first time using an OSCE in a high-stakes examination; nevertheless, they managed to train the faculty, students and all personnel involved, and to recruit adequate simulated patients, within the constraints. Despite mixed reactions from faculty and students, and after careful deliberation on the benefits of the OSCE, the faculty decided to continue using the OSCE in all clinical examinations from then on [17].


5. Challenges of online assessment

The adoption of online assessment is not without obstacles. Internet connectivity has been a major concern among faculty and students [11, 15, 16]. A synchronous online examination can be a challenge, especially for students living in rural areas. Some faculties ran mock examinations to test logistic capabilities: students who discovered that their internet connection or hardware was inadequate had ample time to make the required adjustments before the actual examination, such as upgrading their hardware or taking the examination somewhere with a more stable connection. Extra time can also be given to students with unstable internet connections to complete the examination.

The unusual experience of taking an important examination remotely may cause students to develop anxiety. In a study in China, medical students felt anxious during an online exam because of their limited capacity to adapt to the online platform and worries about the fairness of online assessments [18]. This also occurred at UPM [11]. The pressure of having to familiarise themselves with online platforms can be burdensome and cause emotional distress, not just for students but also for educators [13]. Administrative support is crucial to ensure a smooth implementation of the online assessment.

Another challenge is ensuring the constructive alignment of the online assessment. The affective and psychomotor domains can be difficult to assess online [12, 13]. Assessments at the ‘shows how’ and ‘does’ levels of Miller’s pyramid, such as the long case examination and workplace-based assessment with patients, are not possible in an online environment, and clinical skills and professional attributes such as empathy and teamwork are also hard to assess remotely. To assess physical examination skills, faculties may need to be creative, for example by using simulated patients [15] or manikins. History-taking and communication skills can be assessed either synchronously or asynchronously [11, 12]; however, other performance tests are difficult to conduct without introducing threats to the validity of the assessment [1, 19].


6. Strategies to conduct online assessment

Given the technological advancements made in adapting to the pandemic, medical schools may consider continuing to use these innovative methods as long as they are valid and reliable. Several strategies can be applied when conducting online synchronous and asynchronous examinations, especially for the first time. These can be divided into pre-, during- and post-assessment strategies (Figure 4).

Figure 4.

A flowchart of several strategies to conduct an online assessment.

6.1 Pre-examination

6.1.1 Ensure adequate facilities and human resources

The most important factor for a successful online assessment is whether the institution has sufficient infrastructure to run it. Facilities such as computers with a stable internet connection are essential, and the computers must be compatible with all the online software and platforms needed for the assessment. Additional accessories such as a microphone, camera, speaker and headphones are required to enable two-way communication. The internet connection must have adequate bandwidth to support the transmission of data between the faculty and all students throughout the assessment.

The LMS to which an institution subscribes can usually be used to deliver the assessment. Examples such as Moodle, Canvas and Blackboard can be used to develop student assessments, obtain performance data and provide feedback to students [20]. Additionally, numerous commercial online examination platforms can be subscribed to for synchronous and asynchronous examinations. Some of these platforms have additional features, such as a question bank, item analysis and alignment of assessments to course learning outcomes, which can reduce further administrative work.

Some practical considerations apply to the choice of online examination platform. The platform should be secure and dependable and able to cater to the whole faculty and student body. To ensure security and deter dishonesty, some platforms are equipped with a remote proctoring system to confirm the examinee’s identity and monitor behaviour during the examination. For tests consisting of MCQs, most platforms allow automated scoring, while manual marking can be done for open-ended questions. However, some platforms can only accommodate written examinations, not performance-based ones. An OSCE, for example, may be conducted through a video conferencing platform such as Microsoft Teams or Zoom by creating multiple breakout rooms for individual OSCE stations. Assessments that involve actual patients and require examinees to perform clinical skills such as physical examinations and clinical procedures remain tricky; they may not be feasible online and still need to be done physically in a proper clinical setting.

A paperless examination using an online platform is more environmentally friendly and can be cost-effective in the long run; the institution can save considerable effort, money and time [21]. Automated analysis can be done instantly, which enables prompt and timely feedback for the benefit of faculty and students. A high volume of assessment data and documentation can be managed efficiently with a proper and stable online examination platform.

However, the institution needs to ensure there are adequate technical experts to manage and support the whole assessment system. This is essential to prevent disruption, especially while an examination is running. Detailed guidelines and SOPs should be prepared for reference. Faculty must also be familiar with navigating the online platform used for the examination: academics need to know how to set up an assessment on the platform, insert questions and the answer scheme, retrieve answer scripts, grade the students and provide feedback where needed. Great attention must be paid to maintaining the security of the exam questions and students’ information. If a video conferencing platform is used, faculty members need to know how to admit students, invigilate, set up breakout rooms and use the platform’s other features. Adequate training is essential to familiarise the faculty with the workflow and navigation of the online platform, so that they are well prepared and confident enough to conduct an online examination.

6.1.2 Identify assessment blueprint

An institution must ensure that its assessment methods are aligned with the intended learning outcomes. Converting a written assessment to an online examination is highly feasible with an appropriate online platform; indeed, MCQ examinations can be more efficient and cost-effective in an online setting [15].

However, remote or online assessment of some clinical skills may not be feasible because of limited standardisation and inadequate resources. Most clinical skills that require the presence of patients cannot be assessed remotely. Therefore, a clear and detailed blueprint should be prepared to guarantee that all learning outcomes are covered. An example of a blueprint for an Orthopaedic course is shown in Table 1.

| Course learning outcomes (at the end of the course, students should be able to:) | MCQ | Essay | Long case examination | Mini-CEX | OSCE |
|---|---|---|---|---|---|
| Understand the pathophysiological aspects and principles of treatment for common musculoskeletal conditions | /* (synchronous) | /* (asynchronous) | | | |
| Identify appropriate diagnosis and relevant investigations for common musculoskeletal conditions | /* (synchronous) | /* (asynchronous) | | | |
| Conduct appropriate history taking and clinical examinations in patients with musculoskeletal conditions | | | / | / | / |
| Demonstrate medical ethics and professionalism in patient care | | | / | / | / |

Table 1.

An example of an assessment blueprint for an Orthopaedic course.

*Online assessment.

As shown in the table, the written assessment is conducted online while the assessment of performance is maintained as an in-person examination. The blueprint also indicates whether each online assessment is done synchronously or asynchronously; any institution can adjust this depending on feasibility and practicality. The conduct of asynchronous examinations is likewise flexible. For instance, students may be given clinical problems and required to submit an assignment, in the form of essays covering the diagnosis, investigations and treatments, within an allotted duration.
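One convenient way to keep such a blueprint machine-readable is to store it as a mapping from learning outcomes to assessment methods. The Python sketch below mirrors Table 1 with abbreviated outcome texts; the data structure and the coverage check are illustrative assumptions, not part of any cited guideline.

```python
# A minimal sketch of an assessment blueprint as a data structure,
# mirroring Table 1. Outcome texts are abbreviated and the coverage
# check is illustrative.

blueprint = {
    "Understand pathophysiology and treatment principles":
        ["MCQ (online, synchronous)", "Essay (online, asynchronous)"],
    "Identify diagnosis and relevant investigations":
        ["MCQ (online, synchronous)", "Essay (online, asynchronous)"],
    "Conduct history taking and clinical examination":
        ["Long case", "Mini-CEX", "OSCE"],
    "Demonstrate medical ethics and professionalism":
        ["Long case", "Mini-CEX", "OSCE"],
}

# Verify that every learning outcome is covered by at least one method.
uncovered = [outcome for outcome, methods in blueprint.items() if not methods]
if uncovered:
    print("Outcomes without an assessment method:", uncovered)
else:
    print("All learning outcomes are covered by at least one method.")
```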

6.1.3 Explore student situations and needs

One advantage of conducting an online assessment is that students do not have to be present on campus and can sit the examination remotely. However, we need to consider various factors that may limit a student’s capacity for online assessment. An institution cannot ignore the fact that students living in rural areas may have limited access to a seamless internet connection, and some students may not have the necessary resources, such as a personal computer or laptop. It is essential to consider these factors first, before implementing an online examination.

The faculty should first gather feedback from students when deciding to convert to an online examination, exploring their needs and capacity. If the internet connection is an issue, the faculty may conduct an asynchronous examination with a reasonable time to complete it; this gives students the flexibility to complete the examination at their convenience. An open-book examination can be carried out asynchronously. In this scenario, the questions or assignments should assess higher-order thinking skills, so that answers cannot simply be copied directly from references. If a synchronous online examination is deemed necessary, facilities should be made available for students without a proper device or access to a stable internet connection, for example by allowing them to sit the examination in the faculty’s computer laboratory. However, invigilators must then be present, just as in a physical examination, to prevent discussion among the students.

Students should be briefed about the conduct of online examinations and what is expected of them. A mock examination or trial run is obligatory before running the real online examination, and it should be executed as similarly to the actual exam as possible in terms of assessment method, duration and timing. This provides hands-on experience for students and faculty and helps identify difficulties. For example, we noted that the long duration of the examination was physically and mentally challenging, so the actual exam schedule was amended by breaking the exam into multiple parts with breaks in between [11]. There should be an adequate period between the mock and actual examinations so that proper interventions can be made by both faculty and students.

Some students may also have limited digital skills, such as keyboard typing or familiarity with certain computer applications. Additional examination time can be considered, especially if the exam involves a lot of multimedia such as images and videos. Practice questions can also be given to students so that they can familiarise themselves with the online platform before the actual examination.

6.2 During examination

6.2.1 Ensure technical support is on standby

It is important to expect the unexpected on the day of the examination, particularly for a synchronous examination, so adequate support and assistance must be available when needed. For example, if network interference occurs on the examinees’ or the faculty’s side, there should be an SOP on how to handle the situation. In a synchronous examination, the faculty needs to prepare an alternative channel, such as a telephone line, for examinees to report immediately any difficulty getting online. In certain cases, examinees may suddenly face internet or device problems while taking the exam. What kind of intervention should be made? Do we allow extra time for them to complete the examination, and how much? It is also recommended to prepare an extra set of exam questions beforehand in case some students cannot complete the examination due to such circumstances.

There is also the possibility that the faculty itself encounters technical problems, such as an electricity outage at the institution or the online platform becoming inaccessible for whatever reason. Do we have a backup platform? Do we need to postpone the examination? These questions need to be considered, and backup platforms or devices can be arranged in advance. An extra set of exam questions is also useful if these problems occur after the examinees have already been exposed to the original questions.

6.2.2 Maintain high standards of examination security

There is much evidence that cheating and academic dishonesty are common in online assessments [22, 23], and this threatens the validity of the assessment. Given the availability of technologies like Bluetooth, wireless networking, mobile phones and wearable devices, preventing cheating during online exams can be challenging; tech-savvy students will always come up with new ways to cheat. If the examination involves a large number of students at a time, there is always the possibility that they may communicate with one another.

Some strategies can be employed to deter cheating, particularly for a synchronous examination. A remote proctoring system that can confirm student identity and track eye movements, keystrokes and background noise is convenient for recognising potential cheating attempts. Students download specific software onto their computers, allowing a third-party service provider to monitor or record their webcam, microphone and desktop feeds while they complete the examination. However, this software can be quite expensive to subscribe to, and some institutions may have reservations about online proctoring systems because of privacy and ethical concerns [24]. An alternative is to have human invigilators monitor students’ behaviour in real time through a video conferencing platform. Yet invigilating a large number of students, for example 100, in one sitting can be difficult, so many invigilators are needed and the students should be broken into smaller groups for easier observation.

For assessments involving MCQs, the questions and answer options can be shuffled so that each student’s paper has a distinct order, making it difficult for students to exchange information and share answers within the allotted exam time. In a closed-book exam, examinees may also have the opportunity to access other applications or search online references while answering the questions; much software is available to lock down the browser and discourage this, and some online examination platforms or LMSs already embed these security features to provide a secure test environment. Another solution is to require students to use two devices while taking the examination: one computer to access the exam questions and a second device with a camera, such as another computer, tablet or smartphone, placed behind them. This gives an almost 360-degree view and ensures that no other references are available in their surroundings or on the computer screen.
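Shuffling can be automated so that each paper’s order is unique yet reproducible for later verification. The sketch below is a minimal Python illustration; the question data and the idea of seeding the shuffle with a student ID are assumptions for demonstration, not a prescribed procedure.

```python
# A minimal sketch of per-student shuffling of MCQ items and options.
# Seeding with the student ID makes each paper's order unique but
# reproducible for later verification. Question data are invented.

import random

questions = [
    {"stem": "Q1 ...", "options": ["A", "B", "C", "D"]},
    {"stem": "Q2 ...", "options": ["A", "B", "C", "D"]},
    {"stem": "Q3 ...", "options": ["A", "B", "C", "D"]},
]

def shuffled_paper(student_id, questions):
    """Return a deterministic, student-specific ordering of the paper."""
    rng = random.Random(student_id)            # reproducible per student
    paper = [dict(q, options=rng.sample(q["options"], len(q["options"])))
             for q in questions]               # shuffle each option list
    rng.shuffle(paper)                         # shuffle question order too
    return paper

for q in shuffled_paper("MED2021-042", questions):  # hypothetical ID
    print(q["stem"], q["options"])
```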

An integrity agreement that students need to read and sign can be distributed prior to the examination. This agreement includes statements that students shall conduct themselves honestly and ethically in the exam. They also need to be briefed and frequently reminded of the consequences of violating the policy.

In an asynchronous, open-book exam, there is also the possibility that students may copy the work of others. Commercially available, cost-effective software such as Turnitin and Dupli Checker can detect duplication and plagiarism, giving an instant, comprehensive report on whether a student’s work is authentic by comparing it with billions of resources across the internet.
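For a rough sense of how textual similarity flagging works, the sketch below compares submissions pairwise with Python’s standard difflib. Unlike Turnitin or Dupli Checker, it checks only within the cohort rather than against internet resources; the texts and threshold are invented for the example.

```python
# A minimal sketch of flagging suspiciously similar submissions within
# a cohort using the standard library. Real plagiarism tools compare
# against vast online corpora, which this does not attempt.

from difflib import SequenceMatcher
from itertools import combinations

submissions = {
    "student_a": "The fracture should be immobilised before transfer...",
    "student_b": "The fracture should be immobilised before transfer...",
    "student_c": "Initial management focuses on analgesia and splinting...",
}

THRESHOLD = 0.9  # illustrative cut-off for flagging a pair for review

for (id1, text1), (id2, text2) in combinations(submissions.items(), 2):
    ratio = SequenceMatcher(None, text1, text2).ratio()
    if ratio >= THRESHOLD:
        print(f"Review pair {id1} / {id2}: similarity {ratio:.2f}")
```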

6.3 Post-examination

6.3.1 Ensure accurate scoring and reporting

The use of an online assessment platform can significantly boost the efficiency of administrative work such as marking, gathering and organising data, and automation can reduce the workload of faculty members when testing large student cohorts. However, human error cannot be eliminated entirely, so faculty should check and verify that the scoring is correct prior to reporting. Keeping cloud storage or backup folders for all information is recommended in case the platform crashes or fails.
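One simple safeguard is to recompute scores independently of the platform and keep a local backup before reporting. The Python sketch below illustrates this with invented data and file names; it is not tied to any particular platform.

```python
# A minimal sketch of verifying automated scores against the answer key
# and exporting a backup copy before results are reported. All data and
# file names are invented for illustration.

import csv

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}
responses = {"student_a": {"Q1": "B", "Q2": "D", "Q3": "C"}}
platform_scores = {"student_a": 2}  # scores reported by the exam platform

# Recompute each score from raw responses and flag any mismatch.
for student, answers in responses.items():
    recomputed = sum(1 for q, a in answers.items() if answer_key.get(q) == a)
    assert recomputed == platform_scores[student], f"Mismatch for {student}"

# Keep an independent backup in case the platform crashes or fails.
with open("scores_backup.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["student", "score"])
    writer.writerows(platform_scores.items())
```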

6.3.2 Evaluate and improve based on feedback

The transition from a traditional pen-and-paper examination to an online assessment may affect the faculty and students emotionally; hesitancy and anxiety in dealing with the online environment may be observed in some cases. Allowing two-way feedback is therefore crucial for building trust and confidence, and frequent training should be conducted to enhance proficiency with the online system, especially for each new cohort of students.

For any kind of assessment, regular monitoring and evaluation should be done to identify areas for improvement. An institution should invest effort and time in analysing the successes and failures of the assessment’s conduct. Any problems detected during the examination should be recorded and resolved. It is also important to ensure that the online platform is stable and well maintained to avoid disruption to subsequent assessments. Further enhancements can then be made to smooth the assessment process and make it more convenient and cost-effective for both students and faculty.


7. Conclusion

The COVID-19 pandemic forced a rapid transition from traditional pen-and-paper examinations to alternative student assessment methods that encompass a variety of digital platforms. Many successful examples of innovative online assessment can be observed in Malaysian medical schools and globally, achieving favourable results in terms of validity, reliability and acceptability for both faculty and students. Such efforts serve as a stepping stone for medical education reforms that had previously been proposed but never completely materialised.

Online assessment is no longer impossible, albeit with limitations in assessing clinical performance. Without a doubt, we must not overlook the fact that not all faculty and students have equal access to technology and digital competency. Nevertheless, it is time to use technology to create a better educational experience and to find creative ways to increase equity. Despite all the challenges, we need to remain vigilant and take appropriate measures to minimise complications. Above all, we need to ensure that the assessments conducted are valid, reliable, feasible, secure and fair.

References

  1. Yudkowsky R, Park YS, Downing SM, editors. Assessment in Health Professions Education. 2nd ed. New York: Routledge; 2019
  2. Tavakol M, Dennick R. The foundations of measurement and assessment in medical education. Medical Teacher. 2017;39(10):1010-1015. DOI: 10.1080/0142159X.2017.1359521
  3. Miller GE. The assessment of clinical skills/competence/performance. Academic Medicine. 1990;65(9):S63-S67. DOI: 10.1097/00001888-199009000-00045
  4. Krathwohl DR. A revision of Bloom's taxonomy: An overview. Theory into Practice. 2002;41(4):212-218. DOI: 10.1207/s15430421tip4104_2
  5. Van der Vleuten CP. The assessment of professional competence: Developments, research and practical implications. Advances in Health Sciences Education. 1996;1(1):41-67. DOI: 10.1007/BF00596229
  6. Van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Medical Education. 2005;39(3):309-317. DOI: 10.1111/j.1365-2929.2005.02094.x
  7. Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 conference. Medical Teacher. 2011;33(3):206-214. DOI: 10.3109/0142159X.2011.551559
  8. Harden RM, Lilley P, Patricio M. The Definitive Guide to the OSCE: The Objective Structured Clinical Examination as a Performance Assessment. Philadelphia: Elsevier Health Sciences; 2015
  9. Wass V, Van der Vleuten C. The long case. Medical Education. 2004;38(11):1176-1180. DOI: 10.1111/j.1365-2929.2004.01985.x
  10. Ponnamperuma GG, Karunathilake IM, McAleer S, Davis MH. The long case and its modifications: A literature review. Medical Education. 2009;43(10):936-941. DOI: 10.1111/j.1365-2923.2009.03448.x
  11. Adam SK, Maniam S, Hod R, Abas R. Conducting online assessment for undergraduate medical program during COVID-19 pandemic: The first experience at Universiti Putra Malaysia. Malaysian Journal of Medicine and Health Sciences. 2022;18:179-181
  12. Mahayidin H, Adam SK, Hassan H, Salihan S. Modifications of OSCE for UPM preclinical medical students during COVID-19 pandemic. Malaysian Journal of Medicine and Health Sciences. 2022;18(SUPP.14):181-183. DOI: 10.47836/mjmhs18.s14.22
  13. Yusoff MSB, Mohamad I, Pa MN, Ismail MA, Draman N, Yaacob NA, et al. The major challenges faced by medical lecturers in teaching, learning and assessment during the COVID-19 pandemic: A hermeneutic phenomenology study. Malaysian Journal of Medicine and Health Sciences. 2022;18(SUPP.14):72-82. DOI: 10.47836/mjmhs18.s14.9
  14. Taib F, Mat Pa MN. The change of medical education landscape in the midst of COVID-19 pandemic. Education in Medicine Journal. 2021;13(3):97-101. DOI: 10.21315/eimj2021.13.3.10
  15. Tan CYA, Swe KMM, Poulsaeman V. Online examination: A feasible alternative during COVID-19 lockdown. Quality Assurance in Education. 2021;29(4):550-557. DOI: 10.1108/QAE-09-2020-0112
  16. Kadir F, Yeap BT, Hayati F, Ahmedy F, Mohd Bahar FH, Jeffree MS. Medical education during the COVID-19: A Malaysian experience. International Journal of Medical Education. 2022;13:84. DOI: 10.5116/ijme.6231.a20e
  17. Jailani RF, Md Arepen SA, Mohamad Nor NAU, Zulkifli NF, Sanip S. A paradigmatic shift for final undergraduate medical students' examination: The COVID-19 pandemic approach. Malaysian Journal of Medicine and Health Sciences. 2022;18(14):167-172. DOI: 10.47836/mjmhs18.s14.19
  18. Wu J, Zhang R, Qiu W, Shen J, Ma J, Chai Y, et al. Impact of online closed-book examinations on medical students' pediatric exam performance and test anxiety during the coronavirus disease 2019 pandemic. Pediatric Medicine. 2021;4:1. DOI: 10.21037/pm-20-80
  19. Abdul Rahim AF. Guidelines for online assessment in emergency remote teaching during the COVID-19 pandemic. Education in Medicine Journal. 2020;12(2):59-68. DOI: 10.21315/eimj2020.12.2.6
  20. Burrack F, Thompson D. Canvas (LMS) as a means for effective student learning assessment across an institution of higher education. Journal of Assessment in Higher Education. 2021;2(1):1-9. DOI: 10.32473/jahe.v2i1.125129
  21. Baleni ZG. Online formative assessment in higher education: Its pros and cons. Electronic Journal of e-Learning. 2015;13(4):228-236
  22. Miller A, Young-Jones AD. Academic integrity: Online classes compared to face-to-face classes. Journal of Instructional Psychology. 2012;39(3):138-145
  23. Rowe NC. Cheating in online student assessment: Beyond plagiarism. Online Journal of Distance Learning Administration. 2004;7(2):1-10
  24. Balash DG, Kim D, Shaibekova D, Fainchtein RA, Sherr M, Aviv AJ. Examining the examiners: Students' privacy and security perceptions of online proctoring services. In: Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021). 2021. pp. 633-652
