senility vanity, but I remember being taught by an ‘ancient’ GP in my first year of med school, in 1976. His name was Andrew Smith, and most of us thought him amazing in many ways. One of the stories that made a deep impression on me was how, the day after he graduated, he was delivering a baby using forceps in the mother’s own house at 3am. I would have been 18 or so and he in his early sixties — not far from where I am now. So, he would have been a medical student in the late 1930s, and I will probably stop practising medicine in the early 2020s. When I add the two professional lifetimes together at the extremes (med student to final year of practice) I am always amazed how big the number is — a span of 80 years or so. And one of our problems in undergraduate education is that we have to be concerned with these extremes: I am teaching students who will practise for another 40 years, but I have inherited a set of code written as many years in the past.
Now the above reminiscence was set off by some words from Benedict Evans. He is talking about much shorter timeframes and is concerned with the commercial world. But my question for medical students (and others) is: how is medicine really going to look in a few more score years, and how do we imagine all the system-wide interactions that will make the future so different? This is surely more meaningful than memorising biochemical pathways.
“Everything bad that the internet did to media is probably going to happen to retailers. The tipping point might now be approaching, particularly in the US, where the situation is worsened by the fact that there is far more retail square footage per capita than in any other developed market. And when the store closes and you turn to shopping online (or are simply forced to, if enough physical retail goes away), you don’t buy all the same things, any more than you read all the same things when you took your media consumption online. When we went from a corner store to a department store, and then from a department store to big box retail, we didn’t all buy exactly the same things but in different places – we bought different things. If you go from buying soap powder in Wal-Mart based on brand and eye-level placement to telling Alexa ‘I need more soap’, some of your buying will look different… In parallel to this, TV, which so far has not really been touched by the internet, is also starting to look unstable.”
Nice letter in Academic Medicine. Not convinced by the exact details, but the author is on to something important. The first victim of insincerity is language (Orwell, if I remember correctly).
Medical professionalism is espoused as a necessity in health care, setting an important precedent of excellence and respect towards peers and patients. In many medical schools, a portion of the curriculum is dedicated to the intricacies of medical professionalism. Though typically taught through specific tenets and case studies, professionalism is still a general principle, resulting in varied definitions across institutions. This is, in fact, part of the beauty of professionalism—the lack of definition makes it a flexible concept, applicable in a wide variety of situations. However, the downside to this vagary is that it allows for the weaponization of professionalism, leaving space for “professionals” to reject certain approaches to health care.
Comment on an FT article. How things have changed. Even I can remember a colleague — a few years my senior — who went for a Wellcome Training Fellowship, only to be interviewed by one person, with the opening question being, ‘Imagine I am an intelligent layperson: tell me what you want to do!’
I was a war baby, a small farmer’s son and in 1960, at 17, I had a chat with my most trusted teacher about what I should do to apply to become a doctor for which I had just acquired a good group of Scottish highers. He advised me that because I should have applied a number of months before, to write a letter to the University enclosing my qualifications. I was asked to come and have a chat with the Bursar and the only thing I remember him saying was that my qualifications were good but did I realise that I might be preventing somebody else from getting in. I am ashamed to say that I replied that I was not really too troubled about that. I was accepted, and was fine.
When you want to find your way around a city, you might memorise key streets or more likely use a simplified map as a guide as you travel. But when you know a city, you navigate by being able to recall how you get from A to B. In fact you may have difficulty drawing a map — certainly to scale — but your memory is made up of lots of instances of what lies around a particular corner. Much of what you learn about diseases is the map in this analogy. By contrast, what the experienced clinician knows are lots of instances of what lies round particular corners. Those instances have a name: they are called patients.
You cannot change or reform undergraduate medical education in a significant way without changing the way doctors work and behave.
I keep coming back to a few central insights that have — in the best sense of the word — disturbed my world view. These are from a wonderful article in a journal I had never heard of, written by Frank Davidoff. (But I do not buy the term ‘revolution’)
Competence, in contrast, is like “dark matter” in astronomy: although it makes up most of the universe of working knowledge, we understand relatively little about it. What does it really consist of? Which of its components are most important? How do people acquire it? What’s the best way to measure it? And how can you tell when they have enough of it?
Most importantly, it is increasingly clear that competence is acquired primarily through experiential learning – a four-element cycle (or spiral) in which learners move from direct personal involvement in experiences, to reflection on those experiences, integration of their observations with sense-making concepts and mental models, and finally back to more experiences. Formal training for all high-performance (applied) professions, for example, music, architecture, theater, and athletics, is grounded in the unique requirements of experiential learning: case-based coaching, rather than lectures by content experts; hands-on, practicum experiences (including simulations, if necessary) in addition to written end-objectives; repeated experiences and outcome evaluations over time rather than initial, one-shot exercises; and, ultimately, acquisition of the advanced skills of “reflection-in-action,” which is required for high-level performance and “reflection-on-action,” which is required for continued self-evaluation and self-instruction (Schon, 1987).
Focus on Performance: The 21st Century Revolution in Medical Education. Mens Sana Monographs 2008; 6(1): 29–40.
Bruce Alberts talks a lot of sense about science education and education in general. And of course he produced a book that ‘educated’ a whole generation (or more) of people like me. But in this recent Science piece he is taking on some of the big questions, questions that have been asked before, but for which few have managed to follow through on. As ever, the emphases are mine.
In previous commentaries on this page, I have argued that “less is more” in science education, and that learning how to think like a scientist—with an insistence on using evidence and logic for decision-making—should become the central goal of all science educators. I have also pointed out that, because introductory science courses taught at universities define what is meant by “science education,” college science faculty are the rate-limiting factor for dramatically improving science education at lower levels.
For example, there is a long-standing belief that every introductory college biology course must “cover” a staggering amount of knowledge. There is no time to focus on a much more important goal—insisting that every student understand exactly how scientific knowledge is generated. Science is not a belief system; it is, instead, a very special way of learning about the true nature of the observable world.
His phrase, “college science faculty are the rate-limiting factor for dramatically improving science education at lower levels”, could equally apply to medicine and medical teachers. It is not hyperbole to say these are some of the central problems of our time. And it is not just science education that is the issue.
“Core surgical training in the UK has been dubbed “core service training” because many trainees believe it does not provide enough surgical experience. At the southern tip of Africa, I felt I was being taught to operate, not to just watch and hold retractors. My commitment and progression were judged on hard work and merit, not on how many courses I had attended.”
A little while back, after a teaching session, I asked — as I often do — a googly: ‘What is the worst thing about being a medical student?’ The response: ‘Feeling like a spare part’.
My quick and facetious response was to argue that at least spare parts were useful, whereas students were (usually) not. The humour was appreciated 🙂
But if you think this through, it is possible to argue that students are indeed less useful than they once were. At one time, final year students were a key component of clinical service. They clerked people, they could insert iv lines, write up drugs, and they were around long enough in one environment for people to make meaningful judgments of their abilities and remember them. And to be able to trust them. They could do paid locums, and even when they were not being paid, they could not be absent, nor did you need to formalise start and finish times. One of my colleagues reported that he used to be able to ‘prescribe anything apart from diamorphine’ as a final year student.
This all raises some interesting questions.
Medical education is indeed always advancing: but many of us think it is increasingly out of phase with the world we live in. Magnitudes matter.
I have a secret admiration for some aspects of surgical training. We all know the bad ones, so I do not need to talk about them. When Lisa was doing her Mohs’ fellowship, it was the following vector: you watch X procedures, you perform Y procedures under supervision and you then perform Z procedures ‘independently’, with help on hand. After that, you keep learning. Sensible, and has the essential character of what has been known about craft apprenticeships for over one thousand years: apprentice; journeyman; master. This BMJ piece by a urologist asks:
If you were applying for a certificate of completion of training (CCT) in urology in 2015 you had to have seen or assisted in at least 20 radical prostatectomies before being signed off as competent. A year later, for no apparent reason, it appears that 10 will do.
He then goes on:
Standing in a theatre, unscrubbed, so you can say you’ve seen a procedure was never a part of surgical training, nor should it be now. It has no value. Unless you are very good at the procedure already and you are learning nuanced techniques from a master surgeon, watching a procedure will never make you a better surgeon.
Now, I despair of this sort of thing even when we ask medical students to do it. Why, is the question. What value is there in watching? That this is considered meaningful at this level of training is even more worrying. And of course the figures will be pushed down over time. This is the NHS, after all; never let expert judgment get in the way of a political imperative or somebody paid by the government: “we have to revise the speed of light for operational reasons….”
There is a more subtle point which makes thinking about the article even more worthwhile.
Trainees should spend their training doing the things that they’ll be spending their lives doing, not watching procedures they will never perform.
Now, it is clear that the current bull coming out of HEE, the NHS, Deans etc is that we don’t need experts anymore, just people to cope with whatever disease is the flavour of the month (that there are demographic changes — pace the lectures I received from John Grimley Evans in 1976 — was apparently not obvious to NHS managers or Jeremy Hunt till late 2016). Here is a problem.
When people finish formal training they are not as expert as they will be in 10 or 20 years. I do want an experienced dermatopathologist to be reading the samples I send him. Wisdom is not the sole preserve of the old, but in many craft or perceptual disciplines I know about, the old guys and women do it better. So, problem one is that when people come off a training scheme they are not yet the doctor they want to be. They are not the finished article, they are just setting out, able to work without immediate supervision — as they choose and judge. This is the ticket.
The second problem, as the author makes clear, is that the training schemes are wasteful and not geared to excellence. Again, in a world of ‘pull’ (John Seely Brown’s phrase) the NHS is still trapped within the metaphors of the same industrial age that Donald Trump thinks is going to bring all those jobs back.
We have lost our way in much of what is important in medicine. It’s time that we focused on what really makes a surgeon better and stopped the pointless processes that surround training.
Amen. But the surgeons have got some things going for them. IMHO many other branches of medicine are much, much worse.
Nice paper by Erik Driessen, and much more nuanced than my quotes might suggest. Worth a read. But still a phrase that I use as a bullshit detector.
Many students do not value reflection as a learning strategy, especially not when they are forced to use an artificial and fixed format and they have little opportunity to direct their learning as a result of these reflections.
“Few things seem to irritate doctors and medical students so much as mandatory reflection. (Tomlinson 2015)”
Twenty-five years of portfolio reveal a clear story: without mentoring, portfolios have no future and are nothing short of bureaucratic hurdles in our competency-based education program
I do prefer the term ‘learner chart’; but why not just say, ‘a record of what you did’, just like handing a lab book in. Once again, school teachers might be our guides.
People who make predictions of how many doctors or even what specific type of doctor we need in (say) 20 years are IMHO generally deluded. Or they are telling fibs. Or selling something. The following is from the NEJM and is about ‘hospitalists’
Twenty years ago, we described the emergence of a new type of specialist that we called a “hospitalist.” Since then, the number of hospitalists has grown from a few hundred to more than 50,000 — making this new field substantially larger than any subspecialty of internal medicine (the largest of which is cardiology, with 22,000 physicians), about the same size as pediatrics (55,000), and in fact larger than any specialty except general internal medicine (109,000) and family medicine (107,000). Approximately 75% of U.S. hospitals, including all highly ranked academic health centers, now have hospitalists. The field’s rapid growth has both reflected and contributed to the evolution of clinical practice over the past two decades.
The only way you can play ‘make believe’ like the DoH and all the NHS ‘experts’ so keen to trample all over our medical students’ futures is if you think Stalin is still alive and sorting out the tractor numbers.
I have written before about the problems that learning outcomes fail to deal with (Jorge Luis Borges and learning outcomes), but there is another cognate issue that bugs me from time to time. The following is a quote from The Undercover Economist in the FT.
Should the rules and targets we set up be precise, clear and sophisticated? Or should they be vague, ambiguous and crude? I used to think that the answer was obvious — who would favour ambiguity over clarity? Now I am not so sure. Ponder the scandal that engulfed Volkswagen in late 2015, when it emerged that the company had been cheating on US emissions tests. What made such cheating possible was the fact that the tests were absurdly predictable — a series of pre-determined manoeuvres on a treadmill.
I think there is a very clear downside to making too precise what it is that students should learn. I actually think we need noise in the system, simply because assessment methods are imperfect and unnatural, and the more you seek particular psychometric qualities the greater the distance between what is important and what and how you test. This is not a popular position to take, and I confess I am one of the worst offenders in terms of producing tightly defined content. But if:
assessment drives learning
then I would add, to make a couplet:
assessment wrecks learning
Not so long ago, at an internal meeting, the message was ‘things are tough for this year, and next, but after that all the tightening and discipline will pay off, and things will get back to normal’. I doubted that at the time, and stick to my conviction that things are going to get a lot worse for much Higher Education in the UK. There is plenty of blame to go around: institutions have preferred their own propaganda to reality; they have allowed their business (sic) to grow fat on subsidies; and they have lost touch with the academic ideal. Most of all, they have failed to keep up with the rate of societal and technological change — ironic, since this is a world that universities, more than any other institutions, created. As we can see so bluntly in medical education (for an egregious example, just read this recent editorial in the BMJ), governments view universities not as meaningful components of a healthy society, but as providers who are required to do contract work for the government. The more the students have to pay for their own training rather than their own education, the better. The independence of many or most universities is illusory. They are like temporary post-docs, jumping from grant to grant, doomed to follow the ideas of others: renters not home owners. The institutional question remains: where do you position yourself? And how.
(Wonke has a little on some vibrations — aka shocks — that will only become bigger)
It was reading Herb Simon’s ‘Sciences of the Artificial’ that woke me up to what some professional schools had in common. I even wrote a piece in PLoS Medicine arguing that medicine is more engineering than science (‘The problem with academic medicine: engineering our way into and out of the mess’). And I think I called it right. But the parallels between medicine and many other traditional professions are strong. I am thinking law, architecture, teaching, and engineering. These are all design sciences, or — since I sort of object to this use of the word science — design domains. One of the reasons medical education — and to a lesser extent medicine — is in such a mess is the way that we have failed to grasp this distinction. I wrote last year:
Simon was a genuine — and it is an overused word — polymath, and at that time I was ignorant of his many contributions. His work ranged through business administration, economics (for which he was awarded a ‘Nobel’ prize), cognitive science, computing, and artificial intelligence. But what fascinated me most was the content of his most famous book, ‘Sciences of the Artificial’. In this work Simon set out to unify and provide a common intellectual framework for many human activities that involve creating artefacts that realise a purpose of our choosing. Unlike our dissection of the natural world, whether that be identification of a gene for a disease, or a virus that causes a human disease, Simon was concerned with how humans build artefacts. In particular, how we navigate search spaces that are large, where uncertainty is all around, and where there may be no formal calculus to allow us to fire across boundaries. He was thinking about thinking machines of course, but quite explicitly he was concerned with the professions: architecture, law, and — of great interest to me — medicine and teaching and learning. I was hooked.
One of my favourite quotes is from Simon’s ‘Models of My Life’
More and more, business schools were becoming schools of operations research, engineering schools were becoming schools of applied physics and math, and medical schools were becoming schools of biochemistry and molecular biology. Professional skills were disappearing from the faculties… they did not fit the general norms of what is properly considered academic. As a result, they were gradually squeezed out of professional schools to enhance respectability in the eyes of academic colleagues.
So I warmed to an article titled ‘Building a future for engineering’ in the Times Higher, linking to a Royal Academy of Engineering’s 2014 report, ‘Thinking Like an Engineer – Implications for the Education System’. I have not read all of the latter, but I warm to the phrase in the THE, referring to the report: ‘Even more fundamentally, engineering is a set of habits of mind’. Clinical medicine is more engineering than science.
‘The average doctor who emerges from this process is usually a basically bright person with a mutilated mind.’
This is an absolutely terrific article about ‘Choosing a specialty’. It focuses on psychiatry and the particular problems that affect psychiatry, but contains many powerful insights that most medics will recognise even if you have not expressed them. Many will not admit to them, and the medical schools will look the other way.
When you ask me whether you should enter psychiatry, your question also becomes whether I would go into psychiatry once again, knowing what I know now. Most people will tell you to enter their profession for that reason. They are justifying their own decisions. Their reply to you is a means of reassuring themselves.
You should ask yourself: Is your main purpose in choosing this line of work to make a living? If it is, then you should know it is, and don’t put too much effort or care into worrying about the work. It isn’t your main purpose in life. Your main purpose in life could be your marriage, or your children, or your larger family. Or it could be another activity other than your main paid work, such as writing, or art, or music, or faith.
Then it gets into the DSM….
All too often people mortgage their honesty in order to purchase that which their colleagues mistake for rigour.
I think I get it. In fact, I wrote these words a few years back. The context was some of the absurdity of Bloom’s taxonomy and the empty rituals of learning outcomes. I haven’t changed my mind, but will keep saying the same thing until things change for the better. Original post here.
“In the village in which I live there is a pleasant doctor who is a little deaf. He is not shy about it and he wears a hearing aid. My young daughter has known him and his aid since she was a baby. When at the age of two she first met another man who was wearing a hearing aid, she simply said, ‘That man is a doctor.’ Of course she was mistaken. Yet if both men had worn not hearing aids but stethoscopes, we should have been delighted by her generalization. Even then she would have had little idea of what a doctor does, and less of what he is. But she would have been then, and to me she was even while she was mistaken, on the path to human knowledge which goes by way of the making and correcting of concepts.”
Science and Human Values, Jacob Bronowski
Compare this to the fate of Lymeswold, which was created in the 1980s and touted as the first new English cheese in 200 years. It was initially highly successful, but when demand outstripped supply, the manufacturers cut corners and released stocks before they had matured, resulting in its demise less than a decade after its birth.
Sounds a little like some aspects of doctor training in the NHS.
Exams begin next week. Type-A Anita is particularly nervous. Beginning last week she has refused to learn anything that is more in-depth than the NBME questions: “only high-yield.” She interrupts class once per day to complain when a professor gives more detail than the Step 1 exam books do. She also requests clarification about the number of questions per exam topic. She dropped her sweet Midwestern demeanor and submitted a formal complaint to the administration when an older physician said males have to work more to learn patient interviewing because women are more naturally caring.
via Philip Greenspun
Sunday evening a few students were invited to my favorite professor’s cabin. She is a never-married woman in her late 60s who has dedicated her life to the craft of trauma surgery. She entered medicine expecting to go into family practice. While a third year student, she requested to be sent for her family medicine rotation to a rural area. She drove into the mountains to a small mining town of 10,000 with two family physicians. Although regretting her decision at first, it was here that she learned to love emergency medicine. Sitting around the bonfire, she shared vivid memories of driving the ambulance up moonlit dirt roads to a mine and going down the shaft to retrieve injured miners.
What has changed in trauma surgery? “Well the cases have changed,” she answered. “I started out treating young males in high-velocity, multi-trauma injury cases: car accidents, gunshot wounds, stabbings. Now it is mostly low-velocity cases: an elderly patient who has fallen. The family feels terrible for not having been there when the trauma occurred. The family flies cross-country to say ‘Do everything you can to keep Grandpa alive,’ not understanding what this requires doctors to do. Too often they ignore palliative care.” She’d learned about hospital funding priorities: “It is easy to find donors for a state-of-the-art pediatrics wing; there is no money to remodel a decrepit geriatrics ward.” Her bonfire advice to us: (1) find a field where you will get more interested in it as you go on; (2) you can be happy in more than one residency field (i.e., don’t cry if you don’t get your first choice).
Statistics for the week… Study: 8 hours. Sleep: 6 hours/night; Fun: 2 outings. Example fun: Camping with Jane and Sunday BBQ at trauma surgeon’s cabin.
The bonfire advice (1) is very true (‘find a field where you will get more interested in it as you go on’). Just tricky. Note the editor’s comment: “From the editor: Health care is nearly 20 percent of our GDP. The surest way to be a full participant in this massive and growing sector of the economy is to get an MD.”
Well, the US version:
Anatomy begins at 7:00 am sharp.
Although even the dermatologists in Vienna were hard at work at 7am, I remember. It is just that the surgeons were there even earlier. Link here.
Of all places, I came across the following in Dylan Wiliam’s most recent book (if you want to understand what teaching feedback really is, read it). After pointing out how some machine learning techniques can outperform some medics in some contexts, he writes:
However, it is important to realize that the key factor in making jobs suitable for automating is not that they are manual or low skill. It is that they are routine. A task can require many years of training for humans to become good at it, but it can still be relatively routine, thus making it relatively straightforward to automate. This is just one example of a much more general principle, which is that many of the things that we thought would be easy to automate turn out to be rather complex, while many of the things that we thought would be hard to automate turn out to be reasonably simple. …. This observation—that high-level reasoning seems to require very little in the way of machine power, while many low-level sensorimotor skills appear to require huge computational resources—is known as Moravec’s paradox, named after Hans Moravec, who pointed out that “it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility” (Moravec, 1988, p. 15).
Now, of course I work in a domain that is heavily perceptual and, as yet, AI systems have made few inroads. This may well be because the task is difficult (and the human visual system is powerful), but also (critically) because the available training sets are orders of magnitude too small. This will only change if the clinical workflow is fully digital. We have published some work in this area, and if you visit the Dermofit app on the iOS store you will see an app that uses some machine learning. But there is a long, long way to go, and the humans can still pay the bills. For the moment.
The worry for medical education is that as we (reasonably) concentrate our procedures on high-level processing, the sorts of environments we need to develop perceptual skills are neglected. You can do both.
There is a nice piece by Nassim Nicholas Taleb in Medium. It is from a foreword to a book (I think) on physical / strength training. If you have read Taleb you will know this is not too surprising.
You will never get an idea of the strength of a bridge by driving several hundred cars on it, making sure they are all of different colors and makes, which would correspond to representative traffic. No, an engineer would subject it instead to a few multi-ton vehicles. You may not thus map all the risks, as heavy trucks will not show material fatigue, but you can get a solid picture of the overall safety.
Likewise, to train pilots, we do not make them spend time on the tarmac flirting with flight attendants, then switch the autopilot on and start daydreaming about vacations, thinking about mortgages or meditating about corporate airline intrigues — which represent about the bulk of the life of a pilot. We make pilots learn from storms, difficult landings, and intricate situations — again, from the tails.
In one sense he is saying something that is easy to agree with. But if you delve a little deeper, it is not what we always do in medical education.
The structures we create to enable learning in a clinical discipline are not mirrors of what goes on in the real world. Pace the airline example. We shouldn’t expect teaching time to mirror disease prevalence; we don’t spend most of our time in dermatology teaching students about viral warts, or dandruff, or toxic erythema. When you try to recognise objects, you do not just study those particular objects. Rather, you have to study all the other objects. If you want to be able to ‘call out’ whenever you see a dog, you have to study cats. And chimps, and wolves and so on. This is one of the reasons why just learning about the top ten conditions makes little sense, if acts of recognition are involved. Most things are defined by what they are not. To think in the box, you have to know what is outside the box. This is what makes medical education a hard problem.
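The dogs-and-cats point can be sketched in code. The following toy nearest-centroid ‘recogniser’ is entirely my own illustration — the class names and the two-number features are invented — but it shows why a recogniser trained only on the target class can never say ‘not a dog’:

```python
# Toy sketch (not from the post): a nearest-centroid "recogniser" over
# made-up 2-D features. All names and numbers are invented to illustrate
# that recognition needs contrast classes, not just more dogs.

def centroid(points):
    """Mean of a list of 2-D feature tuples."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def recognise(query, classes):
    """Return the name of the class whose centroid is nearest the query."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    cents = {name: centroid(pts) for name, pts in classes.items()}
    return min(cents, key=lambda name: dist2(cents[name], query))

# Trained only on dogs, everything is a "dog": there is nothing to be not.
only_dogs = {"dog": [(1.0, 1.0), (1.2, 0.9)]}
print(recognise((5.0, 5.0), only_dogs))        # → dog (vacuously)

# Add cats and wolves, and the same query is now something else entirely.
with_contrast = {
    "dog":  [(1.0, 1.0), (1.2, 0.9)],
    "cat":  [(5.0, 5.2), (4.8, 5.1)],
    "wolf": [(1.1, 4.0), (0.9, 4.2)],
}
print(recognise((5.0, 5.0), with_contrast))    # → cat
```

The design point is the second call: the decision boundary around ‘dog’ only exists once the not-dogs are in the training set, which is the argument against teaching only the top ten conditions.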
There are implications for clinical practice for the expert, too. Everyday practice appears to minimise the role of the statistical tails. Your learning about common conditions may be ‘everyday stuff’ requiring little formal study. But for rare conditions, or odd presentations of common conditions, everyday practice may not be sufficient — simply put, you do not see rare events frequently enough to consolidate and strengthen your memories. Everyday practice rarely provides enough critical mass, you might say. A practical example.
When I was a trainee in Newcastle, if we saw an ‘interesting patient’ or a patient in whom the diagnosis was unclear, we pressed a buzzer. The buzzer and flashing light went off in all the clinic rooms, the laboratories, the professor’s office and the seminar room. What happened then resembled the Stepford wives. All descended on the particular clinic room, as though under some malign influence. There were times when this was quite funny, although some patients might have told this differently.
This simple tool was just an implementation of another one of Rees’s rules: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice. This seems like a paradox, but it isn’t. “A sailor gets to know the sea only after he has waded ashore.” Rather, I always view it as a solution to the forgetting curve that Ebbinghaus described (although I think there may be other justifications).
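The forgetting curve is usually modelled as simple exponential decay. A minimal sketch (the retention function is the standard textbook form; the stability value and the weekly-versus-yearly exposure intervals are my illustrative assumptions, not Ebbinghaus’s data):

```python
import math

def retention(t_days, stability):
    """Fraction of material retained after t_days without re-exposure.

    Classic exponential model R = exp(-t/S): a larger S (memory
    'stability') means slower forgetting. Repeated, spaced exposure
    is what raises S.
    """
    return math.exp(-t_days / stability)

# A condition seen weekly versus one seen yearly, same stability:
# memory of the rare one has all but vanished between exposures.
common = retention(7, stability=30)     # re-encountered after a week
rare = retention(365, stability=30)     # re-encountered after a year
```

The point of the sketch is only the asymmetry: between yearly encounters, retention collapses towards zero, which is why routine practice alone cannot maintain competence for rare presentations.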
There is a simple learning point here. The acquisition or maintenance of clinical competence requires much more than seeing patients (and by this, I do not just mean reading research papers). Software, and virtual worlds that we control, might help. But the Rees maxim remains: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice.
There was a story in the FT a few weeks back (paywall). It concerned the painting ‘Portrait of a Man’, by the Dutch artist Frans Hals. Apparently, the Louvre had wanted to buy the painting some time back, but was unable to raise the funds. However, a few weeks ago, the painting was declared a “modern forgery” by Sotheby’s — trace elements of synthetic 20th-century materials had been discovered in it. The story has a wider resonance, however. The FT writes:
But if anything the fake Hals merely highlights an existing problem in how we determine attribution. In their quest to confirm attributions, dealers and auction houses seek the imprimatur of independent, usually academic, experts. Often that person’s “expertise” is deduced by whether they have published anything on a particular artist. But the skills required to publish a book are different to those needed to recognise whether a painting is genuine. Many academics are also fine connoisseurs. One of the few to doubt the attribution to Parmigianino of the St Jerome allegedly connected to Ruffini was the English scholar, David Ekserdjian. But too often the market values being a published writer over having a good “eye”.
Here is a non-trivial problem: how can we designate expertise, and to what extent can you formalise it? In some domains — research, for example — it is easier than in others. But as anybody who reads Nature or the broadsheets knows, research publication is increasingly dysfunctional, partly because of the scale of modern science; partly because the ‘personal knowledge’ and community has been exiled; and partly because it has become subjugated to academic accountancy, because the people running universities cannot admit that they do not possess the necessary judgment to predict the future. To use George Steiner’s tidy phrase, there is also the ‘stench of money’.
But the real danger is when the ‘research model’ is used in areas where it not only does not work, but does active harm. I wrote some time back in a paper in PLoS Medicine:
Herbert Simon, the polymath and Nobel laureate in economics, observed many years ago that medical schools resembled schools of molecular biology rather than of medicine. He drew parallels with what had happened to business schools. The art and science of design, be it of companies or health care, or even the type of design that we call engineering, lost out to the kudos of pure science. Producing an economics paper densely laden with mathematical symbols, with its patently mistaken assumptions about rational man, was a more secure way to gain tenure than studying the mess of how real people make decisions.
Many of the important problems that face us cannot be solved using the paradigm that has come to dominate institutional science (or, I fear, the structures of many universities). For many areas (think: teaching or clinical expertise), we need to think in ‘design’ mode. We are concerned more with engineering and practice than is normal in the world of science. I do not know to what extent this expertise can be formalised — it certainly isn’t going to be as easy as whether you published in ‘glossy’ or ‘non-glossy’ cover journals — but reputations existed long before the digital age, and the digital age offers new opportunities. Publishing science is one skill, diagnosing is another, but there is a lot of dark matter linking the two activities. What seems certain to me is that we have got it wrong, and we are accelerating in the wrong direction.
No, it doesn’t: pure clickbait. But how many does it need? The headline was taken from a comment by Eric Schmidt, the former CEO of Google, that the ‘UK needs 10,000 computer science academics’. When I saw the headline, I initially read it as saying the UK needed another 10,000 computer science graduates. Oops. He means staff, not students.
But then I wondered, as I often have, how many academics in medicine we need, and how we might go about working out what the number should be. And I should add, I am sceptical that we can know how many doctors we need; only those untouched by reality, like Jeremy Hunt, know the answers to questions like that. But there are some numbers that are relevant, even if I cannot match Enrico Fermi’s ability to perform back-of-the-envelope calculations (how many piano teachers are there in New York?).
Depending on how you parse the data, skin disease is said to be the commonest reason to visit a GP in the UK. Estimates suggest there are 15 million visits to GPs with a skin problem each year. In many countries all these patients would go direct to an office dermatologist (this distinction is important, but marginal to my argument here).
Each year about one million people with skin disease are referred from primary care to secondary care. New-to-follow-up ratios are falling — being forced down without any clinical reason, because of money — but assume 1 to 1.5. In terms of visits, the ratio is much higher, because we have to include surgery and phototherapy: at a guess, 1:4. This would mean 4 million visits. This seems frighteningly high.
There are around 70,000 GPs on the register, and around 600 consultant dermatologists in the UK. GP recruitment problems are well known, and estimates are that close to one third of all dermatologist posts are vacant (‘no suitable candidates’). There are juniors (sic) on top, and other miscellaneous doctors too. In terms of new patients, I see 26 per week, and I am clinically part time, so around 1000 per year, plus some on-call work, which is light. If we divide the 1,000,000 new referrals by 400 consultants, we get each consultant seeing around 2,500. But if we add in juniors, staff grades and locums, the numbers *feel* about right.
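The back-of-the-envelope arithmetic can be laid out explicitly. A sketch, using only the rough estimates given above (these are the essay’s guesses, not official workforce statistics):

```python
# Rough UK dermatology workload estimates, as in the text above.
gp_visits = 15_000_000        # GP consultations for skin disease per year
referrals = 1_000_000         # referrals from primary to secondary care per year

follow_up_factor = 4          # guessed multiplier once surgery and
                              # phototherapy visits are included
total_visits = referrals * follow_up_factor      # ~4 million visits

posts = 600                   # consultant dermatologist posts
vacancy_rate = 1 / 3          # roughly one third of posts unfilled
consultants = round(posts * (1 - vacancy_rate))  # ~400 in post

new_patients_per_consultant = referrals / consultants   # ~2,500 per year
```

With one consultant seeing perhaps 1,000 new patients a year part time, 2,500 per head only balances once juniors, staff grades and locums are counted, which is the point of the estimate.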
If we were to look at academic staffing, we have about 30-35 clinical academics in dermatology in the UK. They split their time between clinical practice, research and teaching. Most UK students are taught for most of their time by people who are not ‘academics’, or at least by people without what in most subjects and in most advanced countries would be recognised as an academic apprenticeship. Skin biology or skin science is notable by its almost complete absence in many — possibly the majority — of medical schools. If we argue — and I would — that those who run and organise teaching in higher education need to view this task as a *professional* task, we are running with, say, 15 FTE providing the undergraduate teaching resource that underpins clinical practice and early training / education. Note: my argument is about undergraduate education, and not specialist training; and I believe that teaching is not a ‘bolt-on’ activity at the undergraduate level (if you don’t agree with this view, I suggest you could largely dispense with university medical schools).
There is a simple way to frame any answer to my question. Do you think it is possible to produce and maintain a culture of learning and clinical expertise given the numbers above?
Attention: Some slipping of the causal nexus is evident.
An article in the Economist reviewing, or at least discussing, a couple of books about the rate of innovation caught my eye, in particular a snippet that I will expand on below. The books were “The Rise and Fall of American Growth” by Robert Gordon, and “The Innovation Illusion” by Fredrik Erixon and Bjorn Weigel. I haven’t read either, but enjoyed a review of the Gordon book in the NYRB by William Nordhaus. Based on my reading of the Nordhaus review, the issue is that the rate of innovation and productivity growth is declining — we are not hurtling towards any singularity — and that the century of out-of-the-ordinary innovation was 1870 to 1970. Here is Nordhaus:
Gordon focuses on growth in the United States. Living standards, as measured by GDP per capita or real wages, accelerated after 1870. The growth rate looks like an inverted U. Productivity growth rose from the late nineteenth century and peaked in the 1950s, but has slowed to a crawl since 1970. In designating 1870–1970 as the special century, Gordon emphasizes that the period since 1970 has been less special. He argues that the pace of innovation has slowed since 1970 (a point that will surprise many people), and furthermore that the gains from technological improvement have been shared less broadly (a point that is widely appreciated and true).
In the Economist article, we read:
The figures from recent years are truly dismal. Karim Foda, of the Brookings Institution, calculates that labour productivity in the rich world is growing at its slowest rate since 1950. Total factor productivity (which tries to measure innovation) has grown at just 0.1% in advanced economies since 2004, well below its historical average.
I do not find this view strange. Medical advance is slowing, not accelerating. Medicine was transformed between 1940 and 1970, but the rate of new discovery has slowed. There is more data, more activity, and more scientists, of course. And a lot more hype and university press officers. Just less advance in comparison with what went before. The same is true of university education, too.
Criticisms of these views include questions about the data used to support the various arguments. In the Economist piece, the ‘techno optimists’ make two criticisms. The second is that the ‘techno’ revolution hasn’t really started yet, but it is the first one that caught my eye:
The first is that there must be something wrong with the figures. One possibility is that they fail to count the huge consumer surplus given away free of charge on the internet. But this is unconvincing. The official figures may well be understating the impact of the internet revolution, just as they downplayed the impact of electricity and cars in the past, but they are not understating it enough to explain the recent decline in productivity growth.
Paul Mason elsewhere uses the example of Wikipedia:
Wikipedia is a non-market form of activity—it’s a $3bn hole in the advertising world.
Now, bringing this back to my own little world, I am intrigued by how the battle between, on the one hand, free or OER, and on the other, books or content you have to pay for, will work out. I touched on this in an article on teaching and learning several years ago, and one of the reasons I wrote the freely accessible textbook of skin cancer, www.skincancer909.com*, was out of frustration at the poor quality of dermatology textbooks targeted at medical students. When I surveyed medical students, a large fraction did not buy a dermatology textbook, yet it is clear that the university did not provide suitable alternatives, nor was the university able to provide reasonable online alternatives. Now, I do not believe that free is always best, nor do I think that the endgame is anytime soon. But I do believe content is critical, and despair at how the med ed (medical education) world largely ignores it. But there are amazing commercial books out there — think Molecular Biology of the Cell, for instance — and there is a battle to be waged about whether you invest large amounts of money in producing material used by many, or continue with the traditional approach taken by universities (those ‘bloody PowerPoints’ and dull lectures, all done on a shoestring budget).
Woodie Flowers touched on cognate issues in a critique of MOOCs and MITx.
In the United States, our “education” system is choking to death on a failed training system. Each year, 600,000 first-year college students take calculus; 250,000 fail. At $2000/failed-course, that is half-a-billion dollars. That happens to be the approximate cost of the movie Avatar, a movie that took a thousand people four years to make. Many of those involved in the movie were the best in their field. The present worth of losses of $500 million/year, especially at current discount rates, is an enormous number. I believe even a $100 million investment could cut the calculus failure rate in half.
The criticism stings because Flowers is an educational legend (it also speaks to MIT that they broadcast such critiques of their own activities). Here is Flowers again:
Properly designed new media materials can improve K–12, residential, distance, and life-long learning. In their highly developed form, these learning materials would be as elegantly produced as movies and video games and would be as engaging as a great novel.
I do not know how all of this will work out. I am intrigued by the view that we might be underestimating ‘production’ because much of it is free, but I think we are seeing real market failure, both from the commercial world and from the universities.
* Skincancer909 is due an update, and I am aiming for early 2017.
I was sat in a meeting recently. We were discussing teaching, amongst other things. And I pencilled out what we in dermatology deliver each year, every year (subtext: I doubted that people realise how much effort and resource we need to teach clinical medicine).
We provide clinical placements for 36 weeks per year, with 12-14 students attached for each two-week period, in 18 ‘cohorts’. Over the year we provide just under 400 hours of clinical seminars, in which patients appear, but are there for teaching purposes only, with the students in groups of around ten, and with the teacher having no other responsibility (they are not managing patients). In addition we provide around 1500 hours of clinical experience — timetabled events in which a student attends a session in which they are not the focus of attention. These latter sessions are real: they are timetabled by person and time, start and finish on time, and are rarely cancelled or changed.
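The numbers in that paragraph can be tallied in a few lines (the per-cohort student count uses the midpoint of the 12-14 range given; everything else is as stated above):

```python
# Annual dermatology teaching delivery, from the figures above.
weeks = 36
cohort_length_weeks = 2
cohorts = weeks // cohort_length_weeks          # 18 cohorts per year

students_per_cohort = 13                        # midpoint of 12-14
placements_per_year = cohorts * students_per_cohort   # ~234 student placements

seminar_hours = 400       # patient-based seminars, groups of ~10
experience_hours = 1500   # timetabled clinical experience sessions
total_taught_hours = seminar_hours + experience_hours  # ~1,900 hours per year
```

Nearly two thousand timetabled teaching hours a year, for one specialty, is the scale of effort the subtext refers to.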
Students like what we do, they like the online stuff, too, and the staff are enthusiastic. But this system does not run itself and, in the long term, I fear might not be sustainable, even though the funding is said to be there. It is certainly not optimal, even though our students get a better deal than students at most other UK medical schools. We need to do something else, building on what we do well. Just thinking.
In my ‘online’ textbook of ‘rashes’ (ed.derm.101), that is the non-cancer bits of dermatology, I have a chapter called ragbag. I used to have two ragbag chapters, but now by combining them, the subject has been made simpler and easier, even though the content is unchanged. I am sure students agree. I put in this chapter all the things I do not put elsewhere. I thought I should now fess up a little, as Gödel would not have said.
In ‘John Wilkins’ Analytical Language’, Borges refers to Franz Kuhn’s work on the Chinese encyclopaedia the ‘Heavenly Emporium of Benevolent Knowledge’ (Jorge Luis Borges, Selected Non-fictions, ISBN 978-0-14-029011-0). This work is a classification of the world. For instance, you can classify the world into various groupings or categories. For animals we have:
All very straightforward stuff, as any medical student will agree. I also like the system proposed by the Bibliographical Institute of Brussels (after Borges). It parcelled the universe into 1000 subdivisions with number 262 corresponding to the Pope, 268 to Sunday Schools, 298 to Mormonism, and 179 to cruelty to animals, duelling and suicide.
We have lots of similar systems in medicine, some of which seem less fit for purpose than those described above. My incomplete categorisation of categorisation (the grant was turned down) is as follows:
Dreyfus and Dreyfus, the US philosophers and students of AI, pointed out that although we like to use rule-based systems in teaching, experts quickly forget them, and do not appear to use them (From Socrates to Expert Systems: The Limits of Calculative Rationality, 1984). We just inflict them on the young either because that is the only way we know how to encourage learning, or because we repeat what happened to us earlier in our career without good reason. Quoting Dreyfus and Dreyfus:
The beginning student wants to do a good job, but lacking any coherent sense of the overall task, he judges his performance mainly by how well he follows his learned rules. After he has acquired more than just a few rules, so much concentration is required that his capacity to talk or listen to advice is severely limited.
They were not writing about medical students. But I recognise what is going on. I can remember it too. Classifications may or may not be useful in learning a subject, and in chunking, may provide a guide to action. But the more experience you get, the less the clinician uses them. Academics, of course, play by different rules, and get to write the rules.