Sunday, 29 July 2012

Lost in translation? Postgraduate Medical Training Curricula

In a previous blog on WPBA, I observed there has been a culture shift in postgraduate medical education, from that of a time-served apprenticeship to one of time-measured training. This observation arises from my doctoral research, part of which involved an analysis of 20 years of policy relating to the training of doctors. I was interested in tracing the ways in which NHS and postgraduate training reform had ‘dismantled’ medical apprenticeship. My analysis led me to observe a gradual decoupling of ‘working’ and ‘training’, at least conceptually. To explain…

Historically, in a time-served apprenticeship, work was the curriculum for medical training; through engaging in increasingly complex work activity, doctors made transitions to greater levels of responsibility. Supported by their ‘firms’, to a greater or lesser extent, trainees made those transitions on the basis of readiness to progress, in the eyes of those closest to their work activity. There are close parallels here with Lave and Wenger’s (1991) accounts of communities of practice, where newcomers to a community are invited to engage in the shared work of the communities they join. The goal of training, in this case, is full participation in the work of the community.

In more recent years, we have witnessed the move to a national curriculum for postgraduate training, expressed in terms of competences to be acquired, or outcomes to be evidenced. The modernised time-measured curriculum for medical education stipulates much more closely the anticipated length of time for each stage of training; those who do not progress at a predetermined point are at risk of being seen as ‘failing’. The tension here, of course, is that certain posts may afford greater opportunities to learn than others, simply in terms of the scope and amount of ‘suitable’ work available. Failure to progress may be a failure of the workplace to support the development of the trainee. The ultimate goal of any stage of training is expressed here in terms of ‘sign off’; doctors in training have demonstrated the acquisition of pre-determined outcomes, competences, knowledge, skills or attitudes, however these are expressed. Those familiar with Sfard’s (1998) account of two metaphors for learning might see time-served apprenticeship in terms of ‘learning-as-participation’ and time-measured training as ‘learning-as-acquisition’.

Does this distinction matter, other than conceptually? I think it does. I believe that the postgraduate medical training curriculum introduced over the past 5 or 6 years got ‘lost in translation’. Ultimately, doctors, whatever stage they are at in their career, learn through working: work is the curriculum. The challenge is ensuring that the amount, range and complexity of work activity undertaken is both within the trainee’s capability and stretches them to be more capable. One way to do that is to develop a curriculum map that captures where they have been, where they are going and where they might go next. In this way, it is possible to make explicit, and surface, the learning that arises while working and to make adjustments, where needed, to offer a richer learning journey (to keep the mapping metaphor going). The map does not need to be too prescriptive; there are, after all, many possible routes to the same destination. Some trainers have a natural sense of direction, have walked the journey with trainees on many occasions and only need check in, from time to time, to make sure they are both still on track. Others may prefer to plan the itinerary much more tightly, checking in on a regular basis that all is going according to plan. This kind of mapping process, overlaid on the workplace, had real potential to guide training. Unfortunately, the associated mechanics of the new curriculum models — workplace based assessments, compulsory ‘reflections’, log books, portfolios and so on — got in the way. These new ‘souvenirs’ from the journey too readily became the journey. ‘Trainer-trainee’ relationships became enacted through these tools of curriculum engagement. The training curriculum moved from being the trainee’s work to additional work for the doctor in training.

So where next for postgraduate medical education? I take some comfort in the revised foundation curriculum, although I believe it has some way to go.

The move away from competences is encouraging, although the scuttle back to the security of outcome statements is, for me at least, a missed opportunity. I think the discourse around EPAs (entrustable professional activities) is worth extending. It is much more meaningful to think in terms of what you are confident in delegating to a more junior colleague than to rely on the competences they have once demonstrated.

The move away from workplace based assessments to supervised learning events, conceptually at least, is also promising. The value of having a more knowledgeable other (in Vygotsky’s terms) observing your work and engaging in a meaningful dialogue about it has rich learning potential. I am not sure we need the forms to evidence these conversations have happened, but that is a topic for another blog perhaps.

Finally, the new curriculum revives 'the firm', placing much more emphasis on the professional wisdom of clinical supervisors, educational supervisors and the clinical team in terms of guidance, support and decisions about readiness to progress.

Saturday, 28 July 2012

Why WPBA aren't working.

'Clare, you need a blog' - well Anne-Marie, here is my first attempt!

Curiously, as a medical educator who champions the use of blogs as a pedagogic strategy, I have seldom used one myself. However, recent Twitter conversations about workplace based assessment in medicine led me to extend and share my thinking on the ongoing debate about the value, or otherwise, of these tools.

I have spent the past six years, post MMC, engaging in rich, often challenging, conversations with educational and clinical supervisors, many of whom decry the learning value of WPBA and resort to meaningless, tick-box practices to meet regulatory requirements. Others, wishing to use them as intended, raise anxieties about using the 'full range' of performance judgements, on the basis that the mean is 'above expectation' unless the trainee is experiencing difficulties. It is hardly surprising, then, that Collins' (2010) evaluation of foundation training describes assessment of trainees as 'excessive, onerous and not valued'. The move away from WPBA to 'supervised learning events' is poignant, given that the original intent of WPBA was to capture learning arising from everyday working practices. WPBA were meant to 'script' in time for trainers and trainees to come together, to discuss observed practice and to ensure that feedback was an integral part of working life. I wonder, then, what went wrong?

It seems to me that WPBA embodied a fundamental shift in the culture of medical training, a shift from time-served apprenticeship to time-measured training. Assessments, historically, were significant, loaded affairs, based around high-stakes summative judgements allowing entry into the profession, or otherwise. WPBA in their fledgling foundation form were designed to be something very different: a 'diagnostic' tool, illuminating profiles of performance, that could form the basis of a shared 'developmental' conversation. They were designed as formative tools, capturing everyday working practices, tracing development over a trainee's working year. For this reason, the foundation WPBA were individually formative (for learning) and collectively summative (of learning). They used both criterion referencing (pre-determined descriptors of performance) and normative referencing, with the norm being what it was reasonable to expect of a trainee at the END of that stage in training. They were valid because they were designed to sample authentic practice (not assessed performance) and reliable in that they were based on multiple assessments, on multiple occasions, by multiple assessors.

In my experience, the lack of faculty development activity to support the introduction of these tools meant that trainers defaulted to known assessment practices: WPBA were viewed as summative hurdles to overcome, and trainees baulked at any judgement below 'at the level of expectation'. Boxes were ticked; feedback sections left empty. All too soon, WPBA started to shape working practices ('we better do all your assessments today, as your time here is nearly up') rather than respond to working practices. All too often, trainers unaware of the normative benchmark used the tools to judge performance relative to the stage of training, rather than the end point. All too soon, these skewed assessment practices diluted any potential learning value embedded in the tools.

So where next with WPBA? Given that WPBA are being held onto during specialty training, at least for now, are there ways to retrieve their learning value?

1. Assessors to be very clear about the intended purposes and parameters of the tools they are using. Formative or summative? Norm or criterion referenced? Understanding their purpose will lead to more purposeful use.
2. Royal Colleges should be explicit about the design and development of the tools they are advocating. How has their validity and reliability been assessed? What constitutes 'best practice' in their use? All assessments involve some kind of 'trade-off'; being clear about the limitations of tools is just as important as knowing their strengths.
3. Trainers and trainees should have opportunities to familiarise themselves with tools before they are used; where merited this may include some shared faculty development activity so the learning value of tools can be enhanced.
4. Rather than seeing WPBA as a driver for learning, see them as a way of capturing and fostering learning. If you are going to use them, use them authentically and in so doing, add meaning to their use.
5. Feedback is key; not the rather instrumental 'good bad good' routine, rather a rich, developmental conversation which explores the ways in which performance might be developed, with 'what happens next' being a far more important topic than 'what just happened'.