Well, in the US version, Anatomy begins at 7:00 am sharp.
Even the dermatologists in Vienna were hard at work at 7 am, I remember; it is just that the surgeons were there even earlier. Link here.
Of all places, I came across the following in Dylan Wiliam’s most recent book (if you want to understand what feedback in teaching really is, read it). After pointing out how some machine learning techniques can outperform some medics in some contexts, he writes:
However, it is important to realize that the key factor in making jobs suitable for automating is not that they are manual or low skill. It is that they are routine. A task can require many years of training for humans to become good at it, but it can still be relatively routine, thus making it relatively straightforward to automate. This is just one example of a much more general principle, which is that many of the things that we thought would be easy to automate turn out to be rather complex, while many of the things that we thought would be hard to automate turn out to be reasonably simple. …. This observation—that high-level reasoning seems to require very little in the way of machine power, while many low-level sensorimotor skills appear to require huge computational resources—is known as Moravec’s paradox, named after Hans Moravec, who pointed out that “it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility” (Moravec, 1988, p. 15).
Now, of course, I work in a domain that is heavily perceptual and, as yet, AI systems have made few inroads. This may well be because the task is difficult (and the human visual system is powerful), but also, critically, because the available training sets are orders of magnitude too small. That will only change if the clinical workflow is fully digital. We have published some work in this area, and if you visit the Dermofit app on the iOS store you will see an app that uses some machine learning. But there is a long, long way to go, and the humans can still pay the bills. For the moment.
The worry for medical education is that as we (reasonably) concentrate our procedures on high-level processing, the sort of environments we need in order to develop perceptual skills get neglected. You can do both.
There is a nice piece by Nassim Nicholas Taleb on Medium. It is from a foreword to a book (I think) on physical / strength training. If you have read Taleb, you will know this is not too surprising.
You will never get an idea of the strength of a bridge by driving several hundred cars on it, making sure they are all of different colors and makes, which would correspond to representative traffic. No, an engineer would subject it instead to a few multi-ton vehicles. You may not thus map all the risks, as heavy trucks will not show material fatigue, but you can get a solid picture of the overall safety.
Likewise, to train pilots, we do not make them spend time on the tarmac flirting with flight attendants, then switch the autopilot on and start daydreaming about vacations, thinking about mortgages or meditating about corporate airline intrigues — which represent about the bulk of the life of a pilot. We make pilots learn from storms, difficult landings, and intricate situations — again, from the tails.
In one sense he is saying something that is easy to agree with. But if you delve a little deeper, it is not what we always do in medical education.
The structures we create to enable learning in a clinical discipline are not mirrors of what goes on in the real world. Pace the airline example, we shouldn’t expect teaching time to mirror disease prevalence; we don’t spend most of our time in dermatology teaching students about viral warts, or dandruff, or toxic erythema. When you learn to recognise objects, you do not just study those particular objects; you have to study all the other objects too. If you want to be able to ‘call out’ whenever you see a dog, you have to study cats. And chimps, and wolves, and so on. This is one of the reasons why just learning about the top ten conditions makes little sense where acts of recognition are involved. Most things are defined by what they are not. To think in the box, you have to know what is outside the box. This is what makes medical education a hard problem.
There are implications for clinical practice for the expert, too. Everyday practice appears to minimise the role of the statistical tails. Your learning about common conditions may be ‘everyday stuff’ requiring little formal study. But for rare conditions, or odd presentations of common conditions, everyday practice may not be sufficient — simply put, you do not see rare events frequently enough to consolidate and strengthen your memories. Everyday practice rarely provides enough critical mass, you might say. A practical example:
When I was a trainee in Newcastle, if we saw an ‘interesting’ patient, or a patient in whom the diagnosis was unclear, we pressed a buzzer. The buzzer and flashing light went off in all the clinic rooms, the laboratories, the professor’s office and the seminar room. What happened next resembled The Stepford Wives: all descended on the particular clinic room, as though under some malign influence. There were times when this was quite funny, although some patients might have told it differently.
This simple tool was just an implementation of another of Rees’s rules: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice. This seems like a paradox, but it isn’t. “A sailor gets to know the sea only after he has waded ashore.” I have always viewed it as a solution to the forgetting curve that Ebbinghaus described (although I think there may be other justifications).
There is a simple learning point here. The acquisition or maintenance of clinical competence requires much more than seeing patients (and by this I do not just mean reading research papers). Software, and virtual worlds that we control, might help. But the Rees maxim remains: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice.
There was a story in the FT a few weeks back (paywall). It concerned the painting ‘Portrait of a Man’ by the Dutch artist Frans Hals. Apparently the Louvre had wanted to buy the painting some time back, but was unable to raise the funds. However, a few weeks ago, the painting was declared a “modern forgery” by Sotheby’s: trace elements of synthetic 20th-century materials had been discovered in it. The story has a wider resonance, however. The FT writes:
But if anything the fake Hals merely highlights an existing problem in how we determine attribution. In their quest to confirm attributions, dealers and auction houses seek the imprimatur of independent, usually academic, experts. Often that person’s “expertise” is deduced by whether they have published anything on a particular artist. But the skills required to publish a book are different to those needed to recognise whether a painting is genuine. Many academics are also fine connoisseurs. One of the few to doubt the attribution to Parmigianino of the St Jerome allegedly connected to Ruffini was the English scholar, David Ekserdjian. But too often the market values being a published writer over having a good “eye”.
Here is a non-trivial problem: how can we designate expertise, and to what extent can you formalise it? In some domains — research, for example — it is easier than in others. But as anybody who reads Nature or the broadsheets knows, research publication is increasingly dysfunctional: partly because of the scale of modern science; partly because ‘personal knowledge’ and community have been exiled; and partly because it has become subjugated to academic accountancy, because the people running universities cannot admit that they do not possess the judgment necessary to predict the future. To use George Steiner’s tidy phrase, there is also the ‘stench of money’.
But the real danger is when the ‘research model’ is used in areas where it not only does not work, but does active harm. I wrote some time back in a paper in PLoS Medicine:
Herbert Simon, the polymath and Nobel laureate in economics, observed many years ago that medical schools resembled schools of molecular biology rather than of medicine. He drew parallels with what had happened to business schools. The art and science of design, be it of companies or health care, or even the type of design that we call engineering, lost out to the kudos of pure science. Producing an economics paper densely laden with mathematical symbols, with its patently mistaken assumptions about rational man, was a more secure way to gain tenure than studying the mess of how real people make decisions.
Many of the important problems that face us cannot be solved using the paradigm that has come to dominate institutional science (or, I fear, the structures of many universities). For many areas (think: teaching, or clinical expertise), we need to think in ‘design’ mode. We are concerned more with engineering and practice than is normal in the world of science. I do not know to what extent this expertise can be formalised — it certainly isn’t going to be as easy as asking whether you published in ‘glossy’ or ’non-glossy’ cover journals — but reputations existed long before the digital age, and the digital age offers new opportunities. Publishing science is one skill, diagnosing is another, but there is a lot of dark matter linking the two activities. What seems certain to me is that we have got it wrong, and that we are accelerating in the wrong direction.
No, it doesn’t: pure clickbait. But how many does it need? The headline was taken from a comment by Eric Schmidt, the former CEO of Google, that the ‘UK needs 10,000 computer science academics’. When I saw the headline, I initially read it as saying the UK needed another 10,000 computer science graduates. Oops. He means staff, not students.
But then I wondered, as I often have, how many academics in medicine we need, and how we might go about working out what the number should be. And I should add that I am sceptical we can know how many doctors we need; only those untouched by reality, like Jeremy Hunt, know the answers to questions like that. But there are some numbers that are relevant, even if I cannot match Enrico Fermi’s ability to perform back-of-the-envelope calculations (how many piano tuners are there in Chicago?).
Depending on how you parse the data, skin disease is said to be the commonest reason to visit a GP in the UK. Estimates suggest there are 15 million visits to GPs with a skin problem each year. In many countries all these patients would go direct to an office dermatologist (this distinction is important, but marginal to my argument here).
Each year about one million people with skin disease are referred from primary care to secondary care. New-to-follow-up ratios are falling — being forced down for financial rather than clinical reasons — but assume 1:1.5. In terms of visits, the ratio is much higher, because we have to include surgery and phototherapy: at a guess 1:4. This would mean 4 million visits. This seems frighteningly high.
There are around 70,000 GPs on the register, and around 600 consultant dermatologists in the UK. GP recruitment problems are well known, and estimates are that close to one third of all consultant dermatologist posts are vacant (‘no suitable candidates’). There are juniors (sic) on top, and other miscellaneous doctors too. In terms of new patients, I see 26 per week, and I am clinically part time, so around 1,000 per year, plus some on-call work, which is light. If we divide the 1,000,000 new referrals by 400 consultants, we get each consultant seeing around 2,500. But if we add in juniors, staff grades and locums, the numbers *feel* about right.
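For what it is worth, the back-of-the-envelope arithmetic above can be laid out explicitly. A sketch only — the figures are my rough estimates from the text, not official statistics:

```python
# Back-of-the-envelope check of the workload figures quoted above.
# All numbers are rough estimates, not official statistics.

new_referrals_per_year = 1_000_000   # primary-to-secondary care referrals
consultant_posts = 600               # consultant dermatologists in the UK

vacant_posts = consultant_posts // 3            # ~1/3 of posts unfilled
filled_posts = consultant_posts - vacant_posts  # ~400

referrals_per_consultant = new_referrals_per_year // filled_posts

# Assume a 1:4 new-to-follow-up ratio once surgery and
# phototherapy are included.
follow_up_visits = new_referrals_per_year * 4

print(f"{filled_posts} filled consultant posts")
print(f"{referrals_per_consultant} new referrals per consultant per year")  # 2500
print(f"{follow_up_visits:,} follow-up visits per year")  # 4,000,000
```

If one consultant can see around 1,000 new patients a year, as I do, the gap between 1,000 and 2,500 is what the juniors, staff grades and locums are absorbing.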
If we were to look at academic staffing, we have about 30–35 clinical academics in dermatology in the UK. They split their time between clinical practice, research and teaching. Most UK students are taught for most of their time by people who are not ‘academics’, or at least by people without what in most subjects and in most advanced countries would be recognised as an academic apprenticeship. Skin biology or skin science is notable by its almost complete absence in many — possibly the majority — of medical schools. If we argue — and I would — that those who run and organise teaching in higher education need to view this task as a *professional* task, we are running with, say, 15 FTE providing the undergraduate teaching resource that underpins clinical practice and early training / education. Note: my argument is about undergraduate education, and not specialist training; and I believe that teaching is not a ‘bolt-on’ activity at the undergraduate level (if you don’t agree with this view, I suggest you could largely dispense with university medical schools).
There is a simple way to frame any answer to my question. Do you think it is possible to produce and maintain a culture of learning and clinical expertise given the numbers above?
Attention: Some slipping of the causal nexus is evident.
An article in the Economist reviewing, or at least discussing, a couple of books about the rate of innovation caught my eye, in particular a snippet that I will expand on below. The books were “The Rise and Fall of American Growth” by Robert Gordon, and “The Innovation Illusion” by Fredrik Erixon and Bjorn Weigel. I haven’t read either, but enjoyed a review of the Gordon book in the NYRB by William Nordhaus. Based on my reading of the Nordhaus review, the issue is that the rate of innovation and productivity growth is declining — we are not hurtling towards any singularity — and that the century of out-of-the-ordinary innovation was 1870 to 1970. Here is Nordhaus:
Gordon focuses on growth in the United States. Living standards, as measured by GDP per capita or real wages, accelerated after 1870. The growth rate looks like an inverted U. Productivity growth rose from the late nineteenth century and peaked in the 1950s, but has slowed to a crawl since 1970. In designating 1870–1970 as the special century, Gordon emphasizes that the period since 1970 has been less special. He argues that the pace of innovation has slowed since 1970 (a point that will surprise many people), and furthermore that the gains from technological improvement have been shared less broadly (a point that is widely appreciated and true).
In the Economist article, we read:
The figures from recent years are truly dismal. Karim Foda, of the Brookings Institution, calculates that labour productivity in the rich world is growing at its slowest rate since 1950. Total factor productivity (which tries to measure innovation) has grown at just 0.1% in advanced economies since 2004, well below its historical average.
I do not find this view strange. Medical advance is slowing, not accelerating. Medicine was transformed between 1940 and 1970, but the rate of new discovery has slowed since. There is more data, more activity, and more scientists, of course. And a lot more hype and university press officers. Just less advance in comparison with what went before. The same is true of university education, too.
Criticisms of these views include questions about the data used to support the various arguments. In the Economist piece, the ‘techno optimists’ make two criticisms. The second is that the ‘techno’ revolution hasn’t really started yet, but it is the first one that caught my eye:
The first is that there must be something wrong with the figures. One possibility is that they fail to count the huge consumer surplus given away free of charge on the internet. But this is unconvincing. The official figures may well be understating the impact of the internet revolution, just as they downplayed the impact of electricity and cars in the past, but they are not understating it enough to explain the recent decline in productivity growth.
Paul Mason elsewhere uses the example of Wikipedia:
Wikipedia is a non-market form of activity—it’s a $3bn hole in the advertising world.
Now, bringing this back to my own little world, I am intrigued by how the battle between free content or OER, on the one hand, and books or content you have to pay for, on the other, will work out. I touched on this in an article on teaching and learning several years ago, and one of the reasons I wrote the freely accessible textbook of skin cancer, www.skincancer909.com*, was frustration at the poor quality of dermatology textbooks targeted at medical students. When I surveyed medical students, a large fraction did not buy a dermatology textbook, yet it is clear that the university did not provide suitable alternatives, nor was the university able to provide reasonable online alternatives. Now, I do not believe that free is always best, nor do I think that the endgame is anytime soon. But I do believe content is critical, and despair at how the med ed (medical education) world largely ignores it. There are amazing commercial books out there — think Molecular Biology of the Cell, for instance — and there is a battle to be waged over whether you invest large amounts of money in producing material used by many, or continue with the traditional approach taken by universities (those ‘bloody PowerPoints’ and dull lectures, all done on a shoestring budget).
Woodie Flowers touched on cognate issues in a critique of MOOCs and MITx:
In the United States, our “education” system is choking to death on a failed training system. Each year, 600,000 first-year college students take calculus; 250,000 fail. At $2000/failed-course, that is half-a-billion dollars. That happens to be the approximate cost of the movie Avatar, a movie that took a thousand people four years to make. Many of those involved in the movie were the best in their field. The present worth of losses of $500 million/year, especially at current discount rates, is an enormous number. I believe even a $100 million investment could cut the calculus failure rate in half.
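Flowers’s arithmetic is easy to check. A trivial sketch, using his figures from the quote above:

```python
# Flowers's figures, from the quote above.
students_failing_calculus = 250_000  # per year, of the 600,000 who take it
cost_per_failed_course = 2_000       # dollars

annual_loss = students_failing_calculus * cost_per_failed_course
print(f"${annual_loss:,} per year")  # $500,000,000 per year, roughly the cost of Avatar
```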
The criticism stings because Flowers is an educational legend (it also speaks to MIT that they broadcast such critiques of their own activities). Here is Flowers again:
Properly designed new media materials can improve K–12, residential, distance, and life-long learning. In their highly developed form, these learning materials would be as elegantly produced as movies and video games and would be as engaging as a great novel.
I do not know how all of this will work out. I am intrigued by the view that we might be underestimating ‘production’ because much of it is free, but I think we are seeing real market failure, both from the commercial world and from the universities.
* Skincancer909 is due an update, and I am aiming for early 2017.
I was sat in a meeting recently. We were discussing teaching, amongst other things. And I pencilled out what we in dermatology deliver each year, every year (subtext: I doubt that people realise how much effort and resource we need to teach clinical medicine).
We provide clinical placements for 36 weeks per year, with 12–14 students attached for each two-week period, in 18 ‘cohorts’. Over the year we provide just under 400 hours of clinical seminars, in which patients appear but are there for teaching purposes only, with the students in groups of around ten, and with the teacher having no other responsibility (they are not managing patients). In addition we provide around 1,500 hours of clinical experience — timetabled events in which a student attends a session in which they are not the focus of attention. These latter sessions are real: they are timetabled by person and time, start and finish on time, and are rarely cancelled or changed.
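A rough tally of those figures, so the scale is explicit (the students-per-cohort number is a midpoint; everything here is approximate):

```python
# Approximate annual dermatology teaching delivered, per the figures above.

weeks_of_placements = 36
cohort_length_weeks = 2
students_per_cohort = 13   # 12-14 students; take the midpoint

cohorts = weeks_of_placements // cohort_length_weeks  # 18 cohorts
students_per_year = cohorts * students_per_cohort     # ~234 student placements

seminar_hours = 400        # patient-based clinical seminars
experience_hours = 1500    # timetabled clinical-experience sessions

print(f"{cohorts} cohorts, ~{students_per_year} student placements per year")
print(f"~{seminar_hours + experience_hours} timetabled teaching hours per year")  # ~1900
```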
Students like what we do, they like the online stuff, too, and the staff are enthusiastic. But this system does not run itself and, in the long term, I fear might not be sustainable, even though the funding is said to be there. It is certainly not optimal, even though our students get a better deal than students at most other UK medical schools. We need to do something else, building on what we do well. Just thinking.
In my ‘online’ textbook of ‘rashes’ (ed.derm.101) — that is, the non-cancer bits of dermatology — I have a chapter called Ragbag. I used to have two ragbag chapters, but now, by combining them, the subject has been made simpler and easier, even though the content is unchanged. I am sure students agree. I put in this chapter all the things I do not put elsewhere. I thought I should now fess up a little, as Gödel would not have said.
In ‘John Wilkins’ Analytical Language’, Borges refers to the work of Franz Kuhn on the Chinese encyclopaedia, the ‘Heavenly Emporium of Benevolent Knowledge’ (Jorge Luis Borges, Selected Non-Fictions, ISBN 978-0-14-029011-0). This work is a classification of the world into various groupings or categories. For animals, the categories include ‘those that belong to the emperor’, ‘embalmed ones’, ‘suckling pigs’ and ‘those that from a long way off look like flies’.
All very straightforward stuff, as any medical student will agree. I also like the system proposed by the Bibliographical Institute of Brussels (after Borges). It parcelled the universe into 1000 subdivisions with number 262 corresponding to the Pope, 268 to Sunday Schools, 298 to Mormonism, and 179 to cruelty to animals, duelling and suicide.
We have lots of similar systems in medicine, some of which seem less fit for purpose than those described above. My incomplete categorisation of categorisation (the grant was turned down) is as follows:
Dreyfus and Dreyfus, the US philosophers and students of AI, pointed out that although we like to use rule-based systems in teaching, experts quickly forget them, and do not appear to use them (From Socrates to Expert Systems: The Limits of Calculative Rationality, 1984). We just inflict them on the young, either because that is the only way we know how to encourage learning, or because we repeat what happened to us earlier in our careers without good reason. Quoting Dreyfus and Dreyfus:
The beginning student wants to do a good job, but lacking any coherent sense of the overall task, he judges his performance mainly by how well he follows his learned rules. After he has acquired more than just a few rules, so much concentration is required that his capacity to talk or listen to advice is severely limited.
They were not writing about medical students. But I recognise what is going on. I can remember it too. Classifications may or may not be useful in learning a subject, and in chunking, may provide a guide to action. But the more experience you get, the less the clinician uses them. Academics, of course, play by different rules, and get to write the rules.
They are qualified practitioners called on to make life-and-death decisions in conditions often far from ideal; simultaneously, they are treated rather as children at school, obliged to tick boxes to show progression, document feedback on performance, demonstrate written evidence of reflection, and comply with burdensome bureaucracy. Their protest is both an expression of breaking point frustration with their training and a clarion call to the country to wake up and recognise the true state of the nation’s health services.
From Neena Modi, President of the UK Royal College of Paediatrics and Child Health.
None of the above is surprising, and all of it is widely thought. It is just that untruth has been deemed more important than truth in postgraduate medical education. Politics reigns. BTW, written reflection continues to be the bullshit canary in the colliery of medical education.
There are hidden rules not just in grammar, but at every level of language production. Take pronunciation. The –s that marks a plural in English is pronounced differently depending on the previous consonants: if the consonant is “voiced” (ie, the vocal cords vibrate, as in “v”, “g” and “d”), then the –s is pronounced like a “z”. If the consonant is “unvoiced” (like “f”, “k” and “t”), then the –s is simply pronounced as an “s”. Every native English-speaker uses this rule every day. Children master it by three or four. But nobody is ever taught it, and almost nobody knows they know it.
From the Economist: Hidden in plain sight.
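The rule in that quote is explicit enough to write down, which is rather the point: the knowledge is real even though no speaker could state it. A deliberately crude sketch (it keys on spelling rather than true phonetics, and ignores the third case, the ‘iz’ ending after sibilants, as in ‘buses’):

```python
# The 'hidden' English plural rule from the quote, made explicit.
# Crude on purpose: it keys on spelling, not real phonetics, and it
# ignores the 'iz' case after sibilants ('buses', 'wishes').

UNVOICED_FINALS = set("fkpt")  # unvoiced final consonants (approximately)

def plural_s_sound(word: str) -> str:
    """Return 's' or 'z' for how the plural -s of `word` is pronounced."""
    if word[-1].lower() in UNVOICED_FINALS:
        return "s"   # 'cats', 'books': the -s sounds like 's'
    return "z"       # 'dogs', 'doves': voiced endings (vowels included) give 'z'

print(plural_s_sound("cat"))  # s
print(plural_s_sound("dog"))  # z
```

Every native speaker applies something like this, every day, without ever being taught it.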
I am fascinated by linguistics. It never existed in my mind as a subject before I read some Chomsky, and yet I find it fascinating. See the papers in last week’s Nature about human genetic diversity and our history, and linguistic diversity in Australia (there is an article about this in Science).
But one reason why the above amuses and intrigues me, is that it is clear in so many clinical situations that we know more than we can say. We really are very poor at explaining clinical competence, and until we think hard about this issue, our teaching remains worse than it might be.
I can seldom revisit anything Alan Kay has said or written and not find ideas worth exploring. Here is one such quote:
If you take all the anthropological universals and lay them out, those are the things that you can expect children to learn from their environment—and they do. But the point of school is to teach all those things that are inventions and that are hard to learn because we’re not explicitly wired for them. Like reading and writing.
Virtually all learning difficulties that children face are caused by adults’ inability to set up reasonable environments for them. The biggest barrier to improving education for children, with or without computers, is the completely impoverished imaginations of most adults.
If we think of medical education, and subjects like mine in particular, it is clear that some of the skills we wish to encourage are ‘natural’. I think children in the right environment could acquire them. Humans are hard-wired to learn to classify their environment and divide the world on the basis of form. We can do this even when we have little idea of causality or underlying structure, or of that branch of formal knowledge called science. Feedback is required, but the basic tools are there. Think of the way children learn to distinguish between cats and dogs: even though it is hard to formalise the basis for this expertise, it is easy to demonstrate. Jared Diamond, in ‘Guns, Germs and Steel’, tells how he compared his ability to classify the natural world with that of Yali, a native of Papua New Guinea. Diamond is an expert ornithologist and natural historian, and yet Yali had difficulty understanding how and why Diamond was so poor (relatively) at some classification tasks. Classifying fauna and flora — or at least the machinery that allows expertise in this area — is hard-wired. And of course it is not unique to humans, or even mammals, but humans have the ability to meld these faculties with cultural transmission and make them very powerful. Varying degrees of formal and informal learning are, of course, part of this process.
Not all skills we want students to acquire are like this. Statistics and insight into probability — key clinical skills — are wonderfully counterintuitive. Worse still, we know that on many occasions our strongly held convictions are mistaken, and that this is hard for us to self-diagnose. But to return to the ‘natural’ skills, and to the points Kay is making: one question is whether, beyond childhood, we create the right environment for natural learning to take place. Many of us suspect that when we explain that lesion X is diagnosis X because of appearance Y and Z, or worse still because of some formal rule, we are not seeing the world as it really is. Rather, we should realise that whilst feedback of one form or another is critical, students have to discover and grow their own abilities, even though they too may not know how they do it. So I might change some words:
The biggest barrier to improving medical education is the completely impoverished imaginations of medical schools.
Inefficiencies in medical education are in part borne by students. They are also borne by patients. As a medical student, I ‘delivered’ around 30 babies; I also did some episiotomies, with varying degrees of supervision. As a medical registrar I put in 32 pacing wires, inserted a number of chest drains, took pleural biopsies, and put in various central lines. One of the latter interventions led to a major complication. Things may have changed since I was a junior, but my argument may also apply to lesser procedures: taking blood, suturing, and indeed any interaction with patients — even a clumsy bedside manner or history taking. On the other hand, I was sometimes useful to patients. I did my elective here in Edinburgh in psychiatry, on the late Prof Bob Kendell’s unit. I spent three months on PU2 and, both at the time and looking back, feel I contributed positively to the care of patients.
The issue about the cost of training — especially in practical matters — to patients is not easy. There is always a learning curve. We also know that in some situations a non-expert is all we can afford — think of the example of a single doctor on an Antarctic research base who might have been instructed how to pull a tooth or release a dental abscess, before they went.
The point I am making is about whether procedures are genuinely part of a learning curve — that is, a curve on which the individual aims to get better and better, and will carry on with that technique throughout their professional career — or whether the organisation of training takes little account of known career trajectories, in which case there is no learning curve, and the moral argument is more suspect.
I was never going to be an obstetrician, I was never going to be a cardiologist, nor a chest physician. And I knew all that before I qualified. But I was going to need to take blood; and to do dermatological surgery at an intermediate level. Once somebody has decided on a final destination, the route has to change accordingly.
“And still her life is a relative mess. I like the message in that: that we can tick off the boxes, and yet we still don’t quite have it together. And that’s pretty much the truth of growing up, isn’t it?” NYT
Well, this is from Renée Zellweger talking about her Bridget Jones persona. But everywhere I look now I see the great and the good from HEE and the RCP admitting that all this tick-boxing has been a disaster and has subverted medical education. They were told this years ago. It is an irony of the age that those charged with directing postgraduate medical education are most in need of it themselves. Worse still, the postgraduate world has been allowed to infect the undergraduate world.
Interesting article about US medical students complaining about their national licensing exams. I think the students have some very valid points. Some quotes:
Dr. Peter Katsufrakis, the senior vice president of the National Board of Medical Examiners, agreed that the exam isn’t difficult, but pointed out that 871 students did fail it in the 2013-14 academic year. Besides, he said, most medical school faculty don’t have time to observe third- and fourth-year students doing a complete physical exam, so it’s important to test those skills as part of the licensing process. [emphasis added]
The National Board of Medical Examiners doesn’t give feedback to test takers — in part because that would be expensive and in part because it would make it too easy for students to cheat by sharing their feedback forms, which would likely contain hints about the specific scenarios being tested, Katsufrakis said.
Well, we have a new angle on Shaw’s ‘conspiracy against the laity’.
I think sequencing — the order in which we put teaching together — is a big issue in medical education. Not the only big issue, just one of a handful. Historically, medical education hid behind the idea of education as a form of apprenticeship. There is a lot to be said for postgraduate medical education as an apprenticeship (despite the attempts by the NHS and HR (aka postgraduate deans) to kill it off). But at the undergraduate level it fails: class sizes are too large; the sense of belonging has gone; specialisation has led to lots of small attachments; and nobody has been quite certain how to deliver teaching when the responsible body is a university but the patients are physically located in the NHS.
In some mythical far distant past, students would be lectured to, and then appear on a ward where they would be supervised, mentored, and where their progress would be monitored in real time (as in real feedback). And many of those delivering the teaching would know exactly what standards were expected of the students. This is not how it works now. No surprises here then. Like much of modern education: it doesn’t work.
Even when I was a student the clinical attachments through years 3 and 4 would be in the mornings, with lectures in the afternoon. The problem was that the two activities were out of sync: the mornings might be spent in paediatrics, but the lectures could have been on geriatrics. The time-hallowed linkage between seeing patients and reading about them was rendered problematic. My solution was to not attend lectures. It worked for me 🙂
There were attempts to get round this problem. Dermatology teaching in Newcastle in those days was made up of 4 weeks of clinical mornings, but the lectures were delivered in another, ‘out-of-phase’ period, and each afternoon, after say a lecture on psoriasis, 10–15 patients with psoriasis would appear, along with 10–15 staff who would demonstrate physical signs and patient stories to students. You needed a lot of staff, a lot of seminar rooms, and seminar rooms close to each other (i.e. a medical school). When I was in charge, I kept that system going for a few years, but eventually we had to abandon it due to a lack of resource and central support (‘who is paying the patients’ travel expenses’).
The prompt for all of this is merely to remark how badly we organise or instruct what we want students to know before they appear on the ‘wards’. Tech allows us to think of ways to do this that were simply impossible 20 or even 10 years ago. But it needs a sea-change in how we view medical education. And much as I fear the expropriation of medical education from the ‘ward’ (simply because bedside teaching is so expensive), we have to think hard about allowing our students to take full advantage of the clinical exposure we can provide.
So, we started a new academic year this morning. The first group of students — there will be another 17 groups this academic year — who will spend two weeks with us throughout the year. And what surprised me, and cheered me up enormously, is how, when medical students are given firm and coherent guides as to what to cover by themselves, they can, with little interaction, achieve so much (connectivists and social constructivists, please note). And when you then interrogate them interactively on these topics, you can feel and see them struggling (successfully) to make sense of so much new material. And with an evident sense of pleasure and achievement.
There is sometimes a prejudice in medical education that somehow teaching at the bedside is always best. Of course most medical encounters are not at the bedside (any more) simply because most clinical encounters are not on wards, but in offices, whether the offices are in hospitals or elsewhere. The arguments for the bedside include tradition, but also reflect the fear that medical education will be expropriated from the clinical context. I have a lot of sympathy with the latter view, but it will sometimes lead to error.
Yesterday, I talked about the Dermofit App, to which I contributed. One of the rationales for this whole approach, almost a dozen years ago now, was my belated realisation that clinical exposure — however intense — in dermatology might not be as efficient a learning environment as a virtual world. In dermatology, simulation is over one and a half centuries old, and the history of this simulation tracks the development of technology. It is just that this simulation relies on something we have got used to because it is all around us: high quality graphics. Pictures of lesions.
Several years later we published a paper, exploring this. We wrote:
“The overwhelming majority of students 82% (n = 41) did not see an example of each of the three major skin cancers (BCC, SCC, melanoma) and only a single student (2%) witnessed two examples of each. The percentage of students witnessing 1, >3 and >5 examples is given for each of the 16 lesions and demonstrates that there was not only a lack of breadth but also of depth to the students’ exposure.”
In one sense this is all very obvious. We know that (perceptual) classification tasks require practice, and that practice requires multiple training examples. The training signal:noise ratio can be higher in the virtual world, and it is easier to manipulate events in the virtual world. If the quip is that technology is everything that gets invented after your teenage years, we don’t recognise the obvious technology here simply because it has been around so long. It is just that silicon really allows it to be done so much better. The caveat is whether the business model allows this.
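The dependence of classification on the number of training examples can be made concrete with a toy simulation. This is my own sketch, nothing to do with the Dermofit work: a nearest-centroid classifier on two overlapping synthetic ‘diagnoses’, with entirely made-up numbers. Accuracy climbs towards its ceiling as examples per class grow — which is exactly why a student who sees one melanoma is in trouble.

```python
import math
import random

def make_points(n, centre, spread, rng):
    """Draw n two-dimensional 'feature vectors' around a class centre."""
    return [(rng.gauss(centre[0], spread), rng.gauss(centre[1], spread))
            for _ in range(n)]

def centroid(points):
    """Mean of a set of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def run_once(n_train, rng, n_test=200, spread=1.0):
    """Train a nearest-centroid classifier on n_train examples per class,
    then report accuracy on fresh test examples."""
    a, b = (0.0, 0.0), (1.5, 1.5)   # two overlapping 'diagnoses'
    c0 = centroid(make_points(n_train, a, spread, rng))
    c1 = centroid(make_points(n_train, b, spread, rng))
    test = ([(p, 0) for p in make_points(n_test, a, spread, rng)] +
            [(p, 1) for p in make_points(n_test, b, spread, rng)])
    hits = sum((math.dist(p, c0) <= math.dist(p, c1)) == (label == 0)
               for p, label in test)
    return hits / len(test)

def mean_accuracy(n_train, repeats=50, seed=0):
    """Average test accuracy over repeated random 'cohorts' of students."""
    rng = random.Random(seed)
    return sum(run_once(n_train, rng) for _ in range(repeats)) / repeats

for n in (2, 10, 100):
    print(f"{n:>3} examples per class: accuracy {mean_accuracy(n):.3f}")
```

With two examples per class the learned boundary is noisy and accuracy suffers; by a hundred it is near the ceiling set by how much the classes overlap — the same ceiling no amount of further exposure will lift, which is the signal:noise point above.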
Students will prefer the clinic, for reasons I understand. But they will often be wrong to do so.
If you are interested in making undergraduate medical education work, you have to be interested in scale. We have to think about scale in at least two ways.
First, within certain limits it is possible to invest more if you teach bigger numbers. If you want to produce high-class online material, it will cost a lot. If you want to find out what works, larger numbers of ‘trials’ will help. If you want to produce meaningful online material, rather than just some dismal PowerPoints, it is only cost-effective if class sizes are large. For many disciplines in many medical schools, there is simply not enough critical mass to produce great content. Or at least there isn’t, if teaching always plays second fiddle to research and clinical service.
The other side of the coin is simply that bedside teaching does not scale. The larger the group, the worse the teaching; and patient resource is limiting. There are of course plenty of patients, but medical schools are modelled around where the resource was fifty years ago, rather than where it is now. They are reluctant to change because the ‘start up’ costs are large, and because schools are fixated on short term rather than long term educational goals. In the old bedside model, the ready availability of suitable patients was a large (hidden) subsidy. As it disappears, people are waking up to how expensive it will be to replace. Many of these costs will be direct costs to universities, rather than hospitals, simply because the political realities in the UK mean that hospitals are simply unable to find the finance to change the way they work. Most of the money that is said to support student teaching is siphoned off to support clinical service. It is just that nobody wants to call it fraud.
From a student perspective these issues matter enormously. Student experience is often poor, and the sense of ‘place’ lacking in many, if not most, UK medical schools. The (justified) disenchantment felt by junior doctors at the hands of the NHS employers and so-called educational establishment, will spread to our undergraduates (more than it has already).
There is a quote from Clay Shirky that is germane here*.
You have to find some way to protect your own users from scale. This doesn’t mean the scale of the whole system can’t grow. But you can’t try to make the system large by taking individual conversations and blowing them up like a balloon; human interaction, many to many interaction, doesn’t blow up like a balloon. It either dissipates, or turns into broadcast, or collapses. So plan for dealing with scale in advance, because it’s going to happen anyway.
*I got this via Mike Caulfield’s blog post here
There is an interview in EdSurge with Dan Schwartz, the Dean of the Stanford School of Education. He talks about many interesting ideas elsewhere, but in this interview he says some fairly standard things about universities versus industry.
‘As an academic: I’m a great starter, and I can prove that something works. But my desire to ever have a 1–800-Call-Dan hotline for people who want to know how to use my inventions is like…zero.
Now, if it’s software, I could put it up on the Internet and let people use it. But I can’t market it, and I can’t keep maintaining the code. And I need to get a business plan, but I don’t know how to do that.’
This is very much about universities being the sort of place you start things, not where you finish them. I agree. Indeed one of the problems I see is that people want research to increasingly resemble product development. This will make all the figures look great (‘D’ is more expensive), but eventually will bankrupt universities.
Medical schools have taken on all sorts of activities that are critical to the practice of medicine, it is just that some or many of them should be done elsewhere. Please invent new statistical methodologies, or ways of measuring disease, but let others outwith medical schools apply them in the ‘D’ of the R + D. Economists use the census, but they do not do the legwork themselves.
But this got me thinking (not surprising given the theme of his interview) about the one area where business or product development is central to a university’s activity: teaching delivery and learning (ugly phrases all, I know, but they are placeholders for more). And I do not believe we are good at doing this. We are good starters, but developing coherent programmes that dovetail into a particular niche is not something we do well — certainly not in undergraduate medicine. The most obvious reasons for this deficiency are:
I am not convinced people get this, or realise the time for fiddling or small tweaks has long gone.
The structural framework of many university activities, especially advanced research, is well suited to their goals, and few institutions are as efficient at the business of genuine invention (leave aside that universities are getting worse at this — this is only in part their fault). But if I look at the modern medical school we are not good finishers in our central task, and we need to be.
A basic introduction to clinical photobiology for our students.
I have come across this from Paul Graham before, but I think he is saying something very important about learning in general. He points out that there are lots of skiing instructors, but not many running instructors. When you learn to ski, often your intuition is to lean back. Big mistake. Often you need to lean forward. It takes a while to reprogram the intuition.
I used to think most medical students knew how to learn and acquire expertise efficiently. I no longer think this way. In particular, there is an over-emphasis on rule-based strategies, rather than naturalistic ones. Just as people over-emphasise predicate calculus in thinking about the world, so we, and they, are often prisoners of mistaken theories about how ‘learning works’. Much learning is very unnatural.
“Such anecdotal claims aside, little empirical evidence supports the use of reflection as a tool to enhance professionalism, so further research is needed in this area.”
Paper in Academic Medicine here.
Depending on your prejudices, you might consider the full paper more nuanced. But if you cannot sort out causality, you cannot master nature.
A couple of comments and events I heard about in the Re:learning podcast series got me re:thinking about ideas that I have tossed around before. The session with Christine Ortiz, currently an MIT graduate dean, was about her attempt to set up a new university / college. This is very much a bricks and mortar place. By contrast, Tyler Cowen, he of Marginal Revolution, is setting up a university of the air. Cowen argues that the distinction between what is a university and what is not a university is breaking down. His ‘institution’ is an outgrowth of the distinctive Marginal Revolution blog. He is not giving up the day job, unlike Christine Ortiz. There are lots of interesting things here.
First, barriers to entry are getting lower. Of course, barriers are higher for bricks and mortar places, but many modern universities are in danger of losing the power — and failing to appreciate the value — of residential institutions. If you keep piling students into lecture theatres and student–staff co-knowledge is poor, you have to ask why online might not be a better value proposition. A while back I was chatting to a physics professor from Holland, who was talking about their attempts to build liberal arts colleges within a larger university framework. I have no idea how well it is working, but the idea is interesting. Many medical schools, it seems to me, are largely places where student placements are organised, rather than delivered.
Second, there really is a sense of experimentation that was not there 20 years ago. This is why being in higher education is so much more interesting now than it was even just 10 years ago. Whether MOOCs are hype or not doesn’t really matter. Change is in the air.
Third, universities are multisided markets or platforms, in which students have limited power. This will have to change. Cross-subsidies between teaching and learning, research, and ‘wealth creation’ are all going to come up for scrutiny. If we imagine university accreditation like we would APIs, we might see real change. If we were really serious about ‘competency’ (self-evidently, we are not) we might free up the ways in which students can master whatever it is that is judged important. Or what they want to achieve, with or without my flavour of libertarian paternalism (no happy chart feedback here!).
But I have to pinch myself at how exciting this all could be. I think we have to protect and nurture the academic ideal, but not at the expense of all that is outside it. Nor our students. Things really are different.
* Part of Marginal Revolution’s byline.
A couple of sentences from this article on the value of interdisciplinary research got me thinking — or at least pulling some memes off my dusty intellectual shelf of clutter. The article is about Ian Goldin, and some ideas I am sure he talks about in his new book, which I haven’t read.
He added that “one of the reasons” for the 2008 financial crisis was that “people lost their ethics, their judgement, and their wisdom” because of disciplinary silos.
I agree. I remember the Economist putting it more harshly: ‘…professors fixated on crawling along the frontiers of knowledge with a magnifying glass’ [Economist, December 10th 2011]. Economics, a bit like psychiatry in medicine, is the canary in the mine. Nor would teaching mandatory ethics courses (‘I am certified in ethics A+!’) do very much. Enron’s management were stars at HBS. This is one of the tragedies of many modern universities: so busy edging their way up largely meaningless ranking scales that they are unable to tackle the problems society faces.
Goldin was quoted as saying, ‘[there is] a “real pressure” on universities to be “thinking ahead” and teaching information that will remain relevant when current students “reach their mid-careers”’.
There are two aspects to this. One is that the whole idea of education is a way of hedging against a changing environment. If the world was constant, we could dispense with much (but not all) education — training would suffice. This is just another way of saying advance comes when sons do not do what their fathers did (‘20th century physics was made by the sons of cobblers’. Substitute your gender, please). But from a teaching perspective there is another facet to think about. We cannot adequately judge how well we educate our students over the short term (alone). Yes, they can pass finals. Yes, they can take a history etc. But the test of education is how well they behave and think 20 years down the line. This is a large search space that we can only navigate using theories about what makes the world change, and what makes people push at the boundaries: do not cite Cronbach’s alpha at me. But in examinations and certification, like so much else in science and society, we are blinded by the apparent certitude of short term goals. And the allure of summary measures, rather than the messiness of the real world.
Interesting exchange of letters in Academic Medicine. At first it almost seems counterintuitive that defining a curriculum might afford more control and greater flexibility. But what the authors (see the earlier letters) are highlighting is the increasing damage ‘standardised’ testing may do to long term goals. The letter also makes me wonder how much of the medical school curriculum could be moved online, outwith the bricks and mortar, with medical schools just certifying some preclinical knowledge. Yes, the whole online world has been talking disintermediation for some time, but this is the serious literature catching up with the real world. If we are serious about broadening access, reducing costs and training more doctors, this seems to me to be one way to go. Of course this means we might need to look at how the course can be made more modular and flexible.
Implementing a universal preclinical curriculum could redefine the requirements for entering medical school and could even call into question the necessity of delivering the pre–Step 1 curriculum in a brick-and-mortar setting. Students could decide whether to take courses offered by medical schools, enroll in courses at other academic institutions, or study the material on their own (as they, in part, do now). Perhaps Step 1 will eventually become the entrance examination for medical school admission. It is essential that medical school curricula provide greater scope and nuance than that of the Step 1 blueprint. Consideration of a universal preclinical curriculum could help us focus the medical school’s primary role in establishing and implementing the teaching and evaluation of essential knowledge, attitudes, and skills required of students as they are trained to become competent, entrustable physicians, not master test takers.
Some dark thoughts from Geoff Norman in one of his iconic editorials:
In some of my darker moments, I can persuade myself that all assertions in education (a) derive from no evidence whatsoever (adult learning theory), (b) proceed despite contrary evidence (learning styles, self-assessment skills), or (c) go far beyond what evidence exists. I suspect most readers of AHSE are aware of the first two kinds of assertion, but in this editorial I want to elaborate on the third, the challenge of arriving at general conclusions about the way the world works based on the empirical evidence derived from limited studies.
The link between a particular study and generalisability is of course what science is all about: this is why much science is about theory, rather than A/B testing. It is why RCTs are not usually about science, but about product comparisons.
Much medical education research is hurtling into the abyss of studies that have little interest beyond the fact that they got published, and that somebody has a copy of SPSS.
At this stage I always muse over Clark Glymour’s statements about departments of education:
‘We leave training to schools of education, the bottom feeders of academe’,
‘Almost all advanced degrees held by teachers are in education, which means that they are academically rock bottom’
I could feel guilty for sharing such thoughts, were it not for the fact that Glymour is a leading figure in statistics, learning and philosophy, and one who has made major contributions to how we can envision learning (see for instance his work on Bayes Nets and causal thinking, or his work on statistics teaching and learning, described in Galileo in Pittsburgh).
A long time ago I enjoyed a book called ‘What is this thing called Science?’. The one on medical education has yet to be written. Norman’s voice is the closest we have to it. Somehow we have to carve out a space between the sorts of biology that explain human psychology and culture that the Premacks describe (see for instance Original Intelligence: The Architecture of the Human Mind); George Steiner’s Lessons of the Masters; the knowledge of the bazaar that reminds us that we have understood certain aspects of apprenticeship and craft learning for at least a thousand years; and finally, the sorts of deep insights, based on deceptively simple experiments, that some areas of experimental psychology are well known for.
This article from Health Affairs is from over the pond, but it chimes with many of the conversations you hear in hospital corridors here (if you want to know what is going on in an institution, don’t go to meetings or read minutes — listen to what those outside the meeting rooms say).
The strain that third-party payers and other practice intrusions have put on physicians is obvious to those who know physicians or work with them, and it is evident from the 2014 Physicians Foundation survey.3 Fifty-six percent of respondents described their feelings about the medical profession as negative, and 51 percent said that they were pessimistic about the future of the profession. Fifty percent would not recommend medicine as a career to their children, and 29 percent would not choose to be a physician if they had their careers to do over.
The article points out the move to employee status (rather than independent practice), and the resulting loss of the patient voice: ‘“ownership” of the patient in this model may no longer be personal but institutional’.
Despite these changes, the medical profession will doubtless continue to exist, as there is no lack of young people vying to get into it (applications to medical school were at an all-time high in 2015).14 The question is, Will medicine remain a calling with patient care at its heart or become a mere occupation, characterized by bureaucracy and a focus on the bottom line? The former is to be hoped for, but given current trends, the latter should be anticipated.
I wonder if we will start to see a broadening of post-medical-school career choices. As in some parts of Europe, many graduates will not go into traditional medical practice. Do I find this all depressing? No, quite the opposite in fact, but we need to ensure that we educate our students about the possibilities.
Maybe I have been reading Pulse too much, but it is always hard to attach meaning to the first tremors, before the ground visibly shakes. Lots of interesting memes: whether a medical degree is now a signal not of medical competence but of general skills (hard work, group work, perseverance, ambition); and whether we are seeing a widening of the expectation gap between different groups of our own students. What seems clear, and has been for some while, is that an undergraduate medical degree needs to be dissociated from a career in the NHS: this can no longer be its prime educational purpose. Rash talk, perhaps. But please don’t think making medical apps will pay the bills.