Oral assessments gauge students’ knowledge and skills based on the spoken word, typically guided by questions or small tasks.

Deciding on exactly what is to be assessed is usually the best first step in planning oral assessment.  

Consider the reasons for using an oral assessment:

1. The learning outcomes demand it

2. It allows probing of the students’ knowledge

3. It reflects the world of practice

4. It improves learning

5. It suits some students

6. The meaning of questions can be clarified

7. It helps to ensure academic integrity

Then consider:

I. What is being assessed?

  • Do you want to assess what a student knows?
  • Do you need to move beyond that to see what a student is able to do?
  • Do you want to see what they can do in the context of their chosen field?

The focus of oral assessment typically includes one or more of the following:

Concepts, theories and procedures - Oral assessment can be used to test students’ knowledge at any level, but it may be particularly useful in probing students’ levels of understanding and in assessing that understanding in the context of its application. While conceptual and procedural knowledge can be assessed through various methods, oral assessment may be preferred when there is a need to ensure that the responses are the students’ own.

Applied problem solving - This category includes the students’ capacity to think on their feet, to apply their knowledge to real or hypothetical situations. Students can be called on to diagnose problems in novel situations and recommend a course of action, justifying their decisions with reference to the knowledge and understanding on which they are based.

Interpersonal competence - Interpersonal competence can include how the students communicate with the examiner or examiners, how they interact with their audience in, for example, a class presentation, or how they relate to a real patient or client in a clinical setting or to a pseudo-patient or pseudo-client in a simulation.

Intrapersonal qualities - Here we move on to difficult ground. Qualities such as confidence, self-awareness, professionalism and ethics are sometimes included in oral assessment, but these qualities are difficult to define, may be hard to elicit in a formal assessment context, and can be extremely difficult to judge.

Integrated practice - Integrated practice goes beyond applied problem solving. It involves acting in a real or simulated context that incorporates many of the complexities of the workplace. The student teacher in front of a class, the student nurse with a patient, or the graphic design student meeting a client are each engaged in complex activity involving knowledge, thoughts, feelings, attitudes and action.

 

II. Interaction

One of the distinctive features of oral assessment is that it allows for interaction between the examiner/s and the student, and sometimes others, with the interaction often being rapid and unpredictable. Of course, interaction is not essential. A paper can be presented orally with little or no interruption, or even without discussion following it, and even where interaction does occur, marks may be awarded purely on the basis of the presentation itself. But oral assessment lends itself to interaction, ranging from gentle probing by an examiner seeking further information, to the intense interaction of a Psychology student with a client, to a student arguing with her peers as she tries to convince her fellow students of the worth of her argument. The level of interaction can be located on a continuum ranging from the non-interactive one-way presentation to the completely dialogic discussion between student and examiner or student and client, with many points in between: a presentation followed by discussion; question and response followed by probing; or a debate with its presentations, challenges and final summing up. Interaction can bring assessment to life, and the anticipation of interaction can drive the student to prepare thoroughly for the assessment. At the same time, however:

• The path of the assessment can become uncertain, so it is important to ensure that all students are treated fairly and given equal opportunities to display their knowledge.

• Interaction should be planned.

For example, follow-up questions that probe a student’s understanding should be worked out in advance:

• What kinds of interaction will be needed?

• How will the examiner/s interact with the student?

• If the student has an audience, e.g. of fellow students, how will he or she be expected to interact with them? And what role will the audience be asked to play?


III. Authenticity

‘Authenticity’ here refers to the extent to which the assessment replicates ‘real life’ or what happens in the world of practice. The assessment may involve the use of an actual audience, realistic timeframes for preparation, collaboration between students, and tasks that are multi-dimensional and located in complex, realistic contexts. Case studies that culminate in oral presentations to a mock panel, roleplays and simulated interviews represent common attempts to incorporate the conditions of practice within the classroom.

 

IV. Structure

Structure is concerned with how far the assessment follows a predetermined set of questions or sequence of events. Students need a more-or-less predictable structure so that they can plan for the assessment and are not unnecessarily anxious about unknowns, and a high degree of structure can also increase the reliability of the assessment. However, if the assessment is overly structured, the capacity to ask probing follow-up questions can be lost, as can the possibility of unpredictable questions from fellow students, both of which encourage students to seek a deep understanding of what is being assessed.

• What sort and amount of structure is needed?

• What aspects of the assessment need to be highly structured?

• What aspects of the assessment should be more open?

 

Reference:

Joughin, G. (2010). A Short Guide to Oral Assessment.


Using prompts

Using prompts may be useful, depending on the assessment context.

There are five types of prompts:

1. Specific wording on a script (presenting a task)

In its simplest form, a ‘prompt’ refers to the specific wording of an examination or assessment question which presents the task to candidates. It represents a question or information that examiners should provide to all candidates during the examination. This class of prompting involves the minimum level of interaction from the examiner and is the most neutral form of prompting.

2. Repeating information

A second type of prompting is simply repeating information. Here, examiners intend to remind the candidate to think about information they have been provided but appear to have forgotten: ‘Remember that this is an 80-year-old. . .’ or ‘Is that still the case in light of the previous history?’ This form of prompting may involve re-phrasing the original prompt or, if a candidate is taking the content in an unintended direction, re-directing the candidate back to the original prompt. Such intervention is usually best expressed in a way that only incidentally cues the candidate that their previous responses were off-track; that is, it should not be accompanied by obvious expressions of disapproval or frustration by the examiner. This form of prompting simply aims to give the candidate the opportunity to correct themselves when it seems clear that their response is the result of misremembering or misunderstanding the question prompt.

3. Clarifying questions

Third, prompting may go further, to clarifying questions such as ‘Can you be more specific?’ or ‘What do you mean by “X”?’ Questions of clarification are commonly used, but some examiners worry that they may be inappropriate in formal assessment contexts. This will depend on the purpose and context of the examination. For example, a question which aims to give the candidate the opportunity to clarify their response would seem appropriate in most oral assessment contexts; it serves, after all, one of the fundamental aims of assessment: to find out what the learner knows and understands. Consistency by examiners is key, so that all candidates get similar opportunities to clarify their meaning. It is the examiner’s responsibility to convey that their purpose is wholly to clarify the response, not to surreptitiously cue the candidate. In contrast, a question that searches for an alternative response, for example ‘What type exactly?’ or ‘Can you phrase that in a different way?’, is better categorised as probing or leading. Candidates in high-stakes assessment are usually highly attuned to such clues.

4. Probing questions

Fourth, examiners may be permitted to ask probing questions. This is more difficult to standardise across cases and examiners. Depending on how the candidate responds, an examiner prompts by probing deeper to ascertain how well the candidate understands the specific piece of knowledge, or its significance in a broader clinical context, for example, ‘What might be some implications of that approach?’ or ‘Under what circumstances would that be appropriate?’ Some forms of structured oral assessment specifically call for this form of prompting in order to assess the candidate’s reasoning ability (…) It is also important to distinguish ‘probing’ from ‘prodding’, and to attempt to create a climate of psychological safety, even while conducting a summative examination.

In this form of interrogative prompting, the concept of equivalence seems a more helpful principle than consistency, because the content of the examiner’s probing is likely to vary between candidates depending on their particular knowledge and responses. The examiner must ensure that the nature of the probing is as equitable as possible, even while different specific questions, or different points of the exam, are used for probing. Another risk of probing is that examiners may focus on their particular ‘hobby horses’. This needs to be recognised as a significant source of unfairness and a threat to the blueprint alignment and content validity of the examination, and should therefore be specifically addressed during examiner training. Such an approach may indicate a conflicting understanding of the purpose of oral examinations: some examiners may see these assessments as teaching opportunities rather than the observation-focussed, evaluative exercise which most high-stakes assessments require. This makes the clarity of examiner briefing, training and the selection process itself crucial.

5. Leading and vague questions

Finally, examiners sometimes prompt by asking leading questions. This represents the most ‘intrusive’ form of prompting and is rightly discouraged in most high-stakes assessment contexts. Typical examples of leading prompts include ‘You mean type II, don’t you?’ and ‘It sounds like you would. . .’ Even less helpful to the candidate’s performance are very vague prompts such as ‘What else?’, which frequently end in a guessing game that frustrates both candidate and examiner. Although examiners may have good intentions, such prompting makes the examiner complicit in the candidate’s performance. Even if done consistently for all candidates, it threatens the validity of the assessment result. Unfortunately, this type of prompting often occurs in practice, whether intentional or otherwise.


Guiding principles for practice

1.     Strive to be neutral in interactions with the candidate

Whenever examiners prompt, they should try to do so in a way which neither discourages nor reassures the candidate. Candidates should be alerted to this principle of neutrality, and encouraged not to seek affirmation or censure in examiners’ utterances or body language. Positive comments such as ‘Good job’ or ‘Doing well. . .’, corrections such as ‘Well, it was actually condition y’, or, worse, the dreaded eye-roll can have a significant impact on candidates’ state of mind and subsequent performance. In our experience, developing an appropriate examination ‘poker face’, offering neither affirming nor disapproving clues to candidates’ performance, can be a significant challenge for many examiners. Examiners who excel at this can provide helpful role models through strategic pairing of examiners, where appropriate.

2.     Use prompting in a consistent way for all candidates

Unfairness arises when candidates have variable opportunities to display their knowledge and understanding. Examiners should try to be consistent in their approach, especially when probing candidate responses, although, as noted, such consistency may need to lie more in the manner and degree of probing than in the content itself. A further issue is how the degree of prompting should affect the candidate’s result. Typically, a greater need for prompting will translate into a lower score [8, 14], but this will depend on the assessment context and criteria, and should not be assumed to be a universal principle. Respecting the candidate’s thinking processes is also important: examiner impatience should not be a cue for hasty prompting. A well-trained and reflective examiner will combine assessment protocol with considered judgement to determine if and when a prompt is appropriate.

(…)

 

Reference: 

Pearce, J., & Chiavaroli, N. (2020). Prompting candidates in oral assessment contexts: A taxonomy and guiding principles. Journal of Medical Education and Curricular Development. doi: 10.1177/2382120520948881