The System of Examination in Emergency Medicine – Do We Need to Change?
*Corresponding author: Ashima Sharma, Department of Emergency Medicine, Nizam’s Institute of Medical Sciences, Hyderabad, Telangana, India. ashimanims@gmail.com
How to cite this article: Sharma A. The System of Examination in Emergency Medicine – Do We Need to Change? Ann Emerg Trauma Crit Care. doi: 10.25259/AETCC_5_2026
Winston Churchill famously said, “To improve is to change; to be perfect is to change often.” In Emergency Medicine—a speciality defined by urgency, uncertainty, and decisive action—this sentiment is particularly relevant. As educators and clinicians, we must ask an uncomfortable but necessary question: Does our present system of examination truly assess the competencies required of an emergency physician?
Examinations influence how students learn. They shape priorities, determine the depth of engagement, and ultimately define the kind of clinicians we produce. Yet, there is an important distinction between examination and evaluation. An examination measures performance at a single point in time. Evaluation is broader and longitudinal—it captures growth, professional behaviour, procedural competence, communication skills, and clinical judgment over time. In a fast-paced, high-stakes speciality like Emergency Medicine, this distinction matters profoundly.
Essay-type questions have historically formed the backbone of postgraduate theory examinations. They are relatively easy to construct and allow candidates to demonstrate recall, reasoning, and organisation of knowledge. However, their limitations are well recognised. Marking can be subjective, coverage of the syllabus is often limited, and performance may reflect writing ability more than clinical competence. A single long essay on acute coronary syndrome may test theoretical understanding, but it does not determine whether a candidate can recognise a subtle STEMI on an ECG or initiate timely reperfusion in real practice.

Objective-type questions offer improved reliability and standardisation. They are easier to score and reduce examiner bias. Yet, they tend to assess recognition rather than synthesis. They rarely evaluate communication skills, professionalism, or the ability to manage dynamic, evolving clinical situations. Short answer and problem-solving formats attempt to bridge this gap but remain constrained by time and limited sampling.

Given the vast scope of Emergency Medicine—from trauma and toxicology to obstetrics, paediatrics, disaster medicine, and critical care—traditional written examinations struggle to comprehensively assess the cognitive domain within the allotted time.
The competency framework outlined by the National Medical Commission reflects the true breadth of Emergency Medicine practice. It includes prehospital care, resuscitation, procedural sedation, trauma management, organ-system emergencies, point-of-care ultrasound, medicolegal responsibilities, toxicology, and environmental injuries. Importantly, it also emphasises psychomotor skills—multiple defined procedural competencies—as well as affective attributes such as communication, leadership, teamwork, ethical reasoning, and professionalism. Can these domains be adequately measured through four written theory papers and a conventional practical examination? Increasingly, the answer appears to be no.
Objective structured clinical examinations (OSCEs) provide a more comprehensive alternative. Carefully designed stations can assess diagnostic reasoning, interpretation skills, procedural steps, communication with patients and relatives, ethical decision-making, and team coordination. OSCEs allow broader sampling of topics in a structured and standardised manner, reducing variability between examiners.

Performance-based examinations and simulation further strengthen assessment. Simulated cardiac arrest, polytrauma, or septic shock scenarios allow candidates to demonstrate leadership, adherence to protocols, situational awareness, and teamwork under time pressure. These are core realities of emergency practice and cannot be reliably evaluated through written scripts alone. Oral examinations, while valuable in probing the depth of understanding, often vary significantly across institutions. Differences between university systems and board-based examinations highlight the need for greater standardisation and alignment.
Globally, Emergency Medicine assessment has evolved toward multimodal and longitudinal systems. In the United States, the American Board of Emergency Medicine integrates written examinations with structured oral case simulations that test real-time decision-making. In the United Kingdom, the Royal College of Emergency Medicine combines written assessments with structured clinical components and workplace-based evaluations such as case-based discussions and direct observation of procedural skills. These approaches ensure that competence is demonstrated repeatedly in authentic clinical contexts. International trends increasingly favour continuous assessment, simulation-enhanced testing, and workplace-based feedback rather than reliance solely on a single high-stakes exit examination. The direction of reform is clear: from episodic testing toward sustained competency validation.
Emergency physicians require more than knowledge. They must demonstrate composure in chaos, clarity in uncertainty, empathy in distress, and ethical judgment under pressure. These attributes develop over time and are best assessed longitudinally.
A reimagined assessment model for postgraduate Emergency Medicine may include:
Expanded OSCE and simulation-based stations
Structured performance examinations for procedural skills
Longitudinal formative assessments throughout training
Portfolio-based documentation of competencies
Formal evaluation of communication, leadership, and professionalism
Assessment of research literacy and evidence-based practice
Such reforms would not lower standards; they would raise them. By assessing competence repeatedly and across domains, we ensure readiness for independent practice rather than proficiency in examination technique alone.
Assessment drives learning behaviour. If our examinations prioritise recall, students will prioritise memorisation. If our assessments emphasise reasoning, teamwork, and simulation-based performance, training will naturally evolve toward active, student-centred, and problem-based learning. Modernising assessment patterns will also enhance the credibility and global recognition of Indian postgraduate qualifications. As healthcare becomes increasingly interconnected, alignment with international best practices strengthens academic and professional mobility.
As this journal embarks on its journey, it seeks to serve not merely as a repository of knowledge but as a platform for reflection, innovation, and constructive reform in Emergency Medicine education and practice. Meaningful change in assessment requires collaboration among educators, institutions, regulators, and trainees alike. By encouraging evidence-based discourse on training and evaluation, we hope to contribute to the development of assessment systems that truly reflect the realities of emergency care. If Emergency Medicine demands readiness for the unexpected, then our training and evaluation methods must embody the same dynamism. The future of the speciality depends not only on what we teach, but on how—and what—we choose to assess. We welcome potential authors to submit their content at https://editorialassist.com/#/login/AETCC and look forward to disseminating high-quality clinical information for our fraternity.
Dr. Ashima Sharma
Senior Professor, Emergency Medicine,
Nizam’s Institute of Medical Sciences,
Hyderabad, Telangana, India.