There is a large body of evidence on the science of successful learning. Learning to make sound clinical decisions involves practice by doing, with failure, feedback, reflection and doing again. For junior students we do this in the 'safe' environment of discussing cases (virtual or real), but final-year students really need to learn through work, and the UK's General Medical Council has said as much in its updated document 'Promoting Excellence: standards for medical education and training' (2016).
Now have a look at this image and compare it with the type of teaching and learning in your institution. Can we change things to make learning more effective?

 
 
Hello
As requested, I recommend that all students read these two books (please share this e-mail freely with your colleagues):

1) Make it stick: the science of successful learning, by Peter C Brown, Henry L Roediger III and Mark A McDaniel, Harvard University Press. Available from Amazon. For your convenience I have summarised the key points of this book in some slides, attached. The evidence explained in this book is really useful when it comes to learning skills and learning for exams.

2) Bounce: the myth of talent and the power of practice, by Matthew Syed, Fourth Estate. Available from Amazon. This is a really well-written, best-selling book that summarises in layman's terms the academic literature on expertise - how people get really good at anything, from sport to their profession. It covers important ideas from deliberate practice theory to mindset to studies of professional expertise, and it helps you think about how to approach clinical placements to get the most out of them.

Based on this and a few other things related to learning, I have put together the medical student clerking proforma attached here. The main way to get good for finals, and to be prepared to be an actual doctor, is to see as many patients as possible; commit to what you think the problems are (using precise medical terms, because that is how you store and retrieve previous learning); and then make a plan for each problem, like I showed you. Use hospital guidelines on the Intranet to make your plan. Then get feedback on your performance from a qualified doctor (ideally going back together to see the patient and clarifying a few things). Choose one area for strategic reading (30 minutes maximum) the same day, related to the patient you saw - this helps you conceptualise the topic, which is deeper than 'memorising'. If you clerk in a minimum of one patient per day this way - using this proforma or, in some placements, the hospital's own clerking proforma - you will stand out in finals as knowing what you are doing, and you will be a safe pair of hands in August. This type of learning feels awkward and requires effort; you might feel as if you are not really learning, but trust me, you are. Sitting in a tutorial/being 'taught' feels like learning, but it is often superficial and quickly lost, giving only the illusion of learning.

Don't forget that doctors actively explore the history; it is not only about listening to the patient. Use all available sources of information to get to the bottom of things (ambulance sheet, ED notes/GP letter, carers etc.), because the history is the essential starting point in diagnosis.

And finally, for people who are interested in how doctors think and how you can learn to think better, read the ABC of Clinical Reasoning, Wiley. Available from Amazon. You can also follow @clinicreasoning on Twitter.

Best wishes and good luck! Let me know how you get on.
Nicola C
 
 
A 30-year-old woman with no past medical history presented with a dry cough, fever and three days of breathlessness on exertion. When she arrived in the Emergency Department she was tachycardic with a high temperature, and her CRP was >200. She improved after paracetamol, intravenous fluids and antibiotics. Everyone thought she had pneumonia.
The next morning, her chest X-ray was normal, her lungs were clear on examination and she looked very well with normal vital signs. There was only one thing - her oxygen saturations were 92% on air.
A d-dimer was done to exclude pulmonary embolism (this was a low-probability patient) - the result was exactly on the upper limit of normal. Everyone scratched their heads. A V/Q scan was requested and showed multiple pulmonary emboli.
"Something doesn't fit" is an important clinical tool - but usually requires experience to recognise. It is intuition
("unease") - or Type 1 thinking - which should force us to stop and analyse things more deliberately. However, even after deliberate analysis a modified Wells score and a d-dimer were "lying" in this case ... but  intuition won out. What is even more interesting is we only know about cases we get right, not necessarily the ones we get wrong!

 
 
Someone told me about two surgical cases where doctors trusted tests more than the history and physical examination. Tests are useful - they shift our thinking - but they rarely give us a 'yes' or 'no' answer. If a patient is highly likely to have a certain problem, then a negative test does not necessarily change this. It is this principle of interpreting diagnostic tests that all clinicians need to understand.
So when a patient known to have a large abdominal aortic aneurysm presented to hospital with severe abdominal pain, everyone thought he had a leaking aneurysm. When his emergency CT scan showed no leak, the doctors stopped worrying - but that was in fact what the patient had. When a patient in atrial fibrillation presented with severe central abdominal pain, everyone thought he had an ischaemic bowel. When his lactate was normal, the surgical resident decided that could not be the diagnosis, but it was.
Teaching the correct use and interpretation of diagnostic tests - starting from pre-test probability (our judgement based on the history and physical examination findings) and understanding how tests shift our thinking, sometimes by less than we expect - is a vital aspect of clinical reasoning and fundamental to safe clinical care.
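To make this principle concrete, here is a minimal sketch (in Python) of how a test result shifts probability, using Bayes' theorem expressed with likelihood ratios. The function name and all the numbers below are illustrative assumptions for teaching, not figures taken from the cases above:

    def post_test_probability(pre_test_p, likelihood_ratio):
        # Convert probability to odds, apply the likelihood ratio,
        # then convert back: post-test odds = pre-test odds x LR.
        pre_test_odds = pre_test_p / (1 - pre_test_p)
        post_test_odds = pre_test_odds * likelihood_ratio
        return post_test_odds / (1 + post_test_odds)

    # Known large AAA with severe abdominal pain: suppose the pre-test
    # probability of a leak is 0.8 and a negative CT has an (assumed)
    # likelihood ratio of 0.1. The leak is still nowhere near excluded:
    print(post_test_probability(0.80, 0.1))  # ~0.29

    # Low-probability PE with a d-dimer exactly on the cut-off: suppose
    # a pre-test probability of 0.10 and a borderline result whose
    # likelihood ratio is close to 1. The probability barely moves:
    print(post_test_probability(0.10, 0.9))  # ~0.09

The exact numbers do not matter; the point is that a 'negative' test applied to a high pre-test probability can leave a residual probability that is still far too high to ignore, and a borderline result tells us almost nothing at all.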
 
 
Physicians make a diagnosis 70-90% of the time based on carefully listening to and teasing out the patient's story. But sometimes we forget the importance of the 'history' (as it's called in medicine).
Recently at handover, a junior doctor told us a patient needed a lumbar puncture. When we asked why, he said, 'Because the radiology report recommends it.' What he was referring to was the fact that radiologists, sitting in remote dark rooms, often report CT head scans done for severe headache with the recommendation: 'A normal CT head does not exclude subarachnoid haemorrhage and an LP should be performed if clinically indicated.' This (subarachnoid haemorrhage) was subconsciously - and erroneously - translated into the diagnosis in the mind of the doctor.
Some time ago, a man presented to the Emergency Department with flu-like symptoms and a maculopapular rash. The junior doctor in ED telephoned the on-call Microbiologist for advice. The Microbiologist gave some thoughts over the phone - there was an outbreak of measles where the patient lived, and sometimes meningitis can present with an atypical rash like this. Ignoring the facts that there was a peak in influenza incidence, that the patient had typical symptoms of flu, and that he had no symptoms whatsoever of meningitis, the diagnosis became 'measles / ?atypical meningitis'. This was repeated on the Acute Medical Unit by three different doctors. Even Public Health was notified of the 'case of measles'. It was not until the next morning, when someone took a history, looked at the barely visible rash that looked nothing like measles, and decided that this very well patient was recovering from influenza, that the runaway train of events was stopped. (By the way, you can get a maculopapular rash with flu.)
The moral of the story is - although telephone advice is commonplace in hospital practice, filter it through your own brain first. What is the diagnosis based on the patient's history and physical examination? This is our starting point.

 
 
The current edition of the journal BMJ Quality & Safety has an article describing an interesting experiment. Dutch researchers created eight cases in which patients displayed either difficult or neutral behaviour. Seventy-four internal medicine residents were asked to evaluate the cases and make a diagnosis. They were also asked afterwards to recall the patients' clinical findings and behaviours.
Mean diagnostic accuracy scores were significantly lower for difficult than neutral patients’ vignettes (0.41 vs 0.51; p<0.01). Time spent on diagnosing was similar. Participants recalled fewer clinical findings (mean=29.82% vs mean=32.52%; p<0.001) and more behaviours (mean=25.51% vs mean=17.89%; p<0.001) from difficult than from neutral patients.
The researchers concluded that difficult patients’ behaviours induce doctors to make diagnostic errors, apparently because doctors spend some of their mental resources on dealing with the difficult patients’ behaviours, impeding the adequate processing of clinical findings. Sometimes clinical reasoning requires lots of cognitive effort. The researchers state that efforts should be made to increase doctors’ awareness of the potential negative influence of difficult patients’ behaviours on clinical decision making.
The accompanying editorial has some suggestions on how we can do that.


 
 
A couple of slides from a teaching session to GPs who teach junior (CP1-3) and senior (CP4-5) medical students:
 
 
Learning prototypal presentations of common diseases helps learners build a database of 'illness scripts' that can be added to, with increasing complexity, throughout their training. This is an important foundation for the development of their pattern recognition abilities and can be done through case discussions as well as with real patients. However, students should also be encouraged to synthesise the data they gather from the history, examination and initial test results into a problem list, as this can be used to teach important clinical reasoning steps:
  • Identification of key clinical data
  • Semantic competence (the use of precise medical language, important in 'chunking' information into larger units, which helps to organise and store information)
  • Synthesising data into problems (or 'problem representation')
  • Making relevant associations between problems
  • Critical thinking – for example, spotting and avoiding assumptions
  • Formulation of a management plan that takes all the patient's problems into account
For example, I was working recently with a final-year medical student who did a good job of clerking in a patient. However, at the end of his assessment his problem list was: 1) acute confusion, 2) raised creatinine. His plan was extremely woolly. I then asked him to re-define the problems using precise medical terms. He thought about it and wrote: 1) delirium, 2) acute kidney injury. Suddenly he was able to retrieve previous learning from the filing cabinets in his brain with these labels on. He made a good plan.
The ability to identify key clinical data and create a problem list using precise medical terms is a key step in clinical reasoning development. Some problems (e.g. low serum potassium) require action but not necessarily a differential diagnosis. Other problems (e.g. vomiting) do require a differential diagnosis. The process of generating a problem list ensures nothing is missed. It also ensures the plan is sound - if a person with pleuritic chest pain needs a CT scan to exclude a pulmonary embolism but their creatinine is too high to safely administer contrast, the plan has to take that into account.
Teach problem lists, not differential diagnoses.

 
 
I'm reading someone's PhD thesis on clinical reasoning at the moment (!) One of the things I have been reflecting on lately is the concept of knowledge organisation. You see, clinical reasoning is not a generic skill - it is highly context- and content-specific. Knowledge matters. But it might also be that how that knowledge is organised matters more. Arthur Elstein famously published 'Medical Problem Solving' in the 1970s. He compared the performance of 'expert' clinicians with that of their 'normal' peers in simulated consultations followed by interviews. He found no clear differences in diagnostic ability or the cognitive processes used. This led to a major shift in clinical reasoning research. Ericsson, a well-known expert on expert performance, wrote: 'Failure to capture individual differences in expert diagnostic performance with simulations made the issue of capturing expertise moot and permitted researchers to explore phenomena related to the acquisition of diagnostic reasoning more freely.'
A few decades later, researchers recognised that the structure of medical knowledge - its internal organisation and its relationship with memory - is crucial to explaining the development of clinical reasoning, rather than clinical reasoning being a generic problem-solving trait. Some researchers have focused on understanding how knowledge is organised in the minds of experts and how this shapes clinical reasoning processes, while others have focused on the process of solving clinical problems - 'where do the hypotheses come from?'
It's fair to say that there is a lot we still do not know about how the clinical reasoning of experts differs from that of their peers. But this all matters when it comes to teaching, because sound clinical reasoning is vital for safe, effective care.
For years I have taught using concept maps, but now I understand why. Here is an example below, which I use to teach junior doctors who ask me about partial seizures, usually just after we have seen someone who has presented with this problem. Teaching this way is usually followed by a 'light bulb moment'. Combined with reflection and some strategic reading, suddenly seizures make sense, and diagnostic reasoning has a structure for the next case. Concept maps reflect how the brain organises knowledge, help to conceptualise things and aid knowledge retrieval - all of which are key to sound clinical reasoning.

 
 
I recently came across the 'ICE blog' - International Clinician Educators. Here's a link to their post on 'Diagnostic reasoning: can you trust your gut or are you biased?'
ICE blog: can you trust your gut or are you biased?