From a review in the FT by Steve Silberman of what sounds like a book for our time, ‘How to Survive a Plague’, by David France.
“Jesse Helms, the five-term Republican senator from North Carolina, personally blocked spending on Aids prevention, treatment or research for years, pontificating from the Senate floor, ‘We’ve got to call a spade a spade, and a perverted human being a perverted human being.’”
“Some of the most haunting passages in the book record twists of fate that delayed effective prevention and treatments for years. A young chemist at Merck who suspected early on that a class of drugs called protease inhibitors might yield promising avenues for research was killed on Pan Am Flight 103, downed by Libyan terrorists over Lockerbie, Scotland, in 1988. Years passed before other researchers went down that road again, discovering a drug that became a template for the “cocktails” that have turned HIV infection into a manageable chronic illness rather than a certain death sentence.”
In 1978, the distinguished professor of psychology Hans Eysenck delivered a scathing critique of what was then a new method, that of meta-analysis, which he described as “an exercise in mega-silliness.”
Matthew Page and David Moher, here, in a commentary on a paper by the ever ‘troublesome’ John Ioannidis: “The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses”.
To which some of us would say: this was all predictable when the EBM bandwagon jumped on the idea that collating some information, and ignoring other information, was ‘novel’. Science advances by creating and testing coherent theories of how the world works. Adding together ‘treatment effects’ is messier, and more prone to error. Just because you can enter data in a spreadsheet doesn’t mean you should.
UCAS has just released its data on medical school applications for this year (2017 entry)… A few bits of background information are useful first. Total applications in 2015 were 82,034 for 7,424 places, which leads to the oft-cited ‘one place for eleven applicants’. However, these numbers include multiple applications by students. The number of individual applicants was 20,100, and success at gaining a place depended on country of origin. Success rates were 40% for UK applicants, 10% for EU applicants and 20% for non-EU applicants.
Andrew Goddard, here. I am surprised that the acceptance rate is this high, but I guess there is a lot of self-selection. I do not have a good feel for what is influencing career choices, but the baby boomers had it better — once they got in, sadly. I will stick with my default opinion and change as the evidence changes: what happened to teaching will happen to medicine. Politics is a dominant negative mutation, and all buckle before it.
This (‘Meaning and the Nature of Physicians’ Work’, NEJM 375;19:1813-5) is one of the best things I have read in a long time. It is right on some big issues, and is only depressing if you think positive action is not possible. The article speaks to the dehumanisation of the practice of medicine, and what happens when there is a gulf between those who practise, and those who think that a summary of practice is the same as actual practice. Tacit knowledge doesn’t fill cells in Excel spreadsheets. It is, of course, not just medicine that is changing this way, but other professions — look at what has happened to schoolteaching and the bogus politics and ‘common sense’ that masquerades as evidence in education.
There are lots of downsides to technology making it harder for some things to be forgotten. But lots of advantages too. I use a simple diary app, Day One, that is available on mobile and Mac.
One of its nice features is that it gives you the option to see comments you have made on the same date in previous years. Now, like many people, I tend to agree with myself — at least over the short term — but it is fun to read earlier musings and wonder if the nuance needs changing, and also to see the same underlying memes appearing again and again. Often, I am still in agreement with my earlier comments. Sometimes not. Here is one from three years ago.
On the usually sound principle that there is nothing in UK medicine that can’t be made worse by the involvement of the General Medical Council
Nigel Hawkes in the BMJ, 10 November 2012
This film from 38 Degrees is incredibly moving. And I recognise the places and the hospitals. Worth a critical review, but the abuse of trust is manifest (which is why they are called ‘trusts’). My post title — ‘Forty years ago we would have started a revolution’ — is my favourite line.
The quick retort is, of course, that they are actually treated worse — at least worse than some factory workers. The heading is from an article in BMJ Careers. Jane Dacre is quoted as saying we need more juniors, and ‘more boots on the ground’. By contrast, Catherine Calderwood, Scotland’s CMO, didn’t agree that any increase in junior doctors was necessary:
“I’m not sure that more is definitely better. We’ve had a 38% increase in consultants in Scotland in the past eight years, and almost a doubling of our emergency medicine consultants, and I’m not sure if I walked into an emergency department they would tell me it’s half as much work as it was eight years ago.”
She went on to spread the kool-aid a little more:
“I think sometimes doctors have not embraced others doing some tasks, and I would like us to be much more like a conductor for the orchestra. So (only) the really difficult stuff, the really responsible stuff, and the really clever stuff is what comes to the doctor as the senior leader.”
I think some of what they both say is correct, but I also fear some of what they both say is politics. The UK — including Scotland — is desperately short of doctors. Appointment times are too brief, waiting times out of control, clinical expertise increasingly patchy, and doubts about the adequacy of training widespread. Demand is rising, and wants and needs are increasingly confused. From where I stand, clinical service in some areas is getting worse. Scotland’s NHS is, in some parts, second-world standard, as a colleague from mainland Europe once reminded me.
The issue about paramedical staff is important, but we have been here before. I may be misquoting the figures a little — and they are for the US — but around 1900 one in three health care workers was a doctor. Now the figure is around 1 in 14. This trend in the ratio will and should continue. The discussions about physician assistants and the like have been going on for over half a century, with little evidence of action. The key issue is that if you want to encourage people to move into new roles, you have to create a certification system that rewards and encourages people to do these jobs. That is why we have radiographers, pharmacists and the like. But successive UK government officials hate the idea of certification, and prefer that ‘nurses’ move into roles for which they have little formal training, and end up existing free of any meaningful regulation. Governments cannot face up to the fact that these ‘health care workers’ (apologies: an awful bloody phrase) will need more than subsistence wages, and that setting the system up will require upfront funding. I used to laugh at the image of a doctor’s office with all the certificates on the wall behind the desk, but now I advise patients always to ask whether the person providing diagnostic or therapeutic activities has a recognised medical qualification. How many melanomas do you diagnose a year? How many times have you performed this procedure? What exams did you have to pass?
If you ask whether you can train graduates to become physician assistants in dermatology, dermatological surgery and the like, the answer is a clear yes. But to make this a sensible career choice, we need certification — theory and practical exams and so on — and job titles that are transferable wherever somebody works. At present we simply do not have this. The dentists have done this and — key to any debate — practitioners (hygienists etc), whether they are dentists or not, are registered with the General Dental Council. Now, the GDC is not a very popular organisation, but the idea of formal certification — something that means that practitioners can move jobs easily — is a key component of making this system work. Institutions matter, as do incentives.
Of all places, I came across the following in Dylan Wiliam’s most recent book (if you want to understand what teaching feedback really is, read it). After pointing out how some machine learning techniques can outperform some medics in some contexts, he writes:
However, it is important to realize that the key factor in making jobs suitable for automating is not that they are manual or low skill. It is that they are routine. A task can require many years of training for humans to become good at it, but it can still be relatively routine, thus making it relatively straightforward to automate. This is just one example of a much more general principle, which is that many of the things that we thought would be easy to automate turn out to be rather complex, while many of the things that we thought would be hard to automate turn out to be reasonably simple. …. This observation—that high-level reasoning seems to require very little in the way of machine power, while many low-level sensorimotor skills appear to require huge computational resources—is known as Moravec’s paradox, named after Hans Moravec, who pointed out that “it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility” (Moravec, 1988, p. 15).
Now, of course, I work in a domain that is heavily perceptual and, as yet, AI systems have made few inroads. This may well be because the task is difficult (and the human visual system is powerful) but also (critically) because the available training sets are orders of magnitude too small. This will only change if the clinical workflow is fully digital. We have published some work in this area, and if you visit the Dermofit app on the iOS store you will see an app that uses some machine learning. But there is a long, long way to go, and the humans can still pay the bills. For the moment.
The worry for medical education is that, as we (reasonably) concentrate our procedures around high-level processing, the sorts of environments we need to develop perceptual skills are neglected. You can do both.
There is a nice piece by Nassim Nicholas Taleb in Medium. It is from a foreword to a book (I think) on physical / strength training. If you have read Taleb you will know this is not too surprising.
You will never get an idea of the strength of a bridge by driving several hundred cars on it, making sure they are all of different colors and makes, which would correspond to representative traffic. No, an engineer would subject it instead to a few multi-ton vehicles. You may not thus map all the risks, as heavy trucks will not show material fatigue, but you can get a solid picture of the overall safety.
Likewise, to train pilots, we do not make them spend time on the tarmac flirting with flight attendants, then switch the autopilot on and start daydreaming about vacations, thinking about mortgages or meditating about corporate airline intrigues — which represent about the bulk of the life of a pilot. We make pilots learn from storms, difficult landings, and intricate situations — again, from the tails.
In one sense he is saying something that is easy to agree with. But if you delve a little deeper, it is not what we always do in medical education.
The structures we create to enable learning in a clinical discipline are not mirrors of what goes on in the real world. Pace the airline example. We shouldn’t expect teaching time to mirror disease prevalence; we don’t spend most of our time in dermatology teaching students about viral warts, or dandruff, or toxic erythema. When you try to recognise objects, you do not just study those particular objects. Rather, you have to study all the other objects. If you want to be able to ‘call out’ whenever you see a dog, you have to study cats. And chimps, and wolves and so on. This is one of the reasons why just learning about the top ten conditions makes little sense, if acts of recognition are involved. Most things are defined by what they are not. To think in the box, you have to know what is outside the box. This is what makes medical education a hard problem.
There are implications for clinical practice for the expert, too. Everyday practice appears to minimise the role of the statistical tails. Your learning about common conditions may be ‘everyday stuff’ requiring little formal study. But for rare conditions, or odd presentations of common conditions, everyday practice may not be sufficient — simply put, you do not see rare events frequently enough to consolidate and strengthen your memories. Everyday practice rarely provides enough critical mass, you might say. A practical example.
When I was a trainee in Newcastle, if we saw an ‘interesting patient’ or a patient in whom the diagnosis was unclear, we pressed a buzzer. The buzzer and flashing light went off in all the clinic rooms, the laboratories, the professor’s office and the seminar room. What happened then resembled the Stepford Wives. All descended on the particular clinic room, as though under some malign influence. There were times when this was quite funny, although some patients might have told this differently.
This simple tool was just an implementation of another one of Rees’s rules: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice. This seems like a paradox, but it isn’t. “A sailor gets to know the sea only after he has waded ashore.” Rather, I always view it as a solution to the forgetting curve that Ebbinghaus described (although I think there may be other justifications).
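For anyone who has not come across it, the Ebbinghaus curve is usually summarised — a textbook simplification of his data, not my own framing — as exponential decay of retention:

$$R(t) = e^{-t/S}$$

where R is the probability of recall, t the time since the last exposure, and S a ‘stability’ term that grows with each repeated, spaced exposure. The buzzer, in effect, manufactured those repeated exposures for presentations too rare to be met often in routine clinics.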
There is a simple learning point here. The acquisition or maintenance of clinical competence requires much more than seeing patients (and by this, I do not just mean reading research papers). Software, and virtual worlds that we control, might help. But the Rees maxim remains: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice.
There was a story in the FT a few weeks back (paywall). It concerned the painting ‘Portrait of a Man’, by the Dutch artist Frans Hals. Apparently, the Louvre had wanted to buy the painting some time back, but were unable to raise the funds. However, a few weeks ago, the painting was declared a “modern forgery” by Sotheby’s — trace elements of synthetic 20th-century materials have been discovered in it. The story has a wider resonance, however. The FT writes:
But if anything the fake Hals merely highlights an existing problem in how we determine attribution. In their quest to confirm attributions, dealers and auction houses seek the imprimatur of independent, usually academic, experts. Often that person’s “expertise” is deduced by whether they have published anything on a particular artist. But the skills required to publish a book are different to those needed to recognise whether a painting is genuine. Many academics are also fine connoisseurs. One of the few to doubt the attribution to Parmigianino of the St Jerome allegedly connected to Ruffini was the English scholar, David Ekserdjian. But too often the market values being a published writer over having a good “eye”.
Here is a non-trivial problem: how can we designate expertise, and to what extent can you formalise it? In some domains — research for example — it is easier than in others. But as anybody who reads Nature or the broadsheets knows, research publication is increasingly dysfunctional: partly because of the scale of modern science; partly because ‘personal knowledge’ and community have been exiled; and partly because it has become subjugated to academic accountancy, because the people running universities cannot admit that they do not possess the necessary judgment to predict the future. To use George Steiner’s tidy phrase, there is also the ‘stench of money’.
But the real danger is when the ‘research model’ is used in areas where it not only does not work, but does active harm. I wrote some time back in a paper in PLoS Medicine:
Herbert Simon, the polymath and Nobel laureate in economics, observed many years ago that medical schools resembled schools of molecular biology rather than of medicine. He drew parallels with what had happened to business schools. The art and science of design, be it of companies or health care, or even the type of design that we call engineering, lost out to the kudos of pure science. Producing an economics paper densely laden with mathematical symbols, with its patently mistaken assumptions about rational man, was a more secure way to gain tenure than studying the mess of how real people make decisions.
Many of the important problems that face us cannot be solved using the paradigm that has come to dominate institutional science (or, I fear, the structures of many universities). For many areas (think: teaching or clinical expertise), we need to think in ‘design’ mode. We are concerned more with engineering and practice than is normal in the world of science. I do not know to what extent this expertise can be formalised — it certainly isn’t going to be as easy as whether you published in ‘glossy’ or ‘non-glossy’ cover journals — but reputations existed long before the digital age, and the digital age offers new opportunities. Publishing science is one skill, diagnosing is another, but there is a lot of dark matter linking the two activities. What seems certain to me is that we have got it wrong, and we are accelerating in the wrong direction.
No, it doesn’t: pure clickbait. But how many does it need? The headline was taken from a comment by Eric Schmidt, the former CEO of Google, that the ‘UK needs 10,000 computer science academics’. When I saw the headline, I initially read it as saying the UK needed another 10,000 computer science graduates. Oops. He means staff, not students.
But then I wondered, as I often have, how many academics in medicine we need, and how we might go about working out what the number should be. And I should add, I am sceptical we can know how many doctors we need; only those untouched by reality, like Jeremy Hunt, know the answers to questions like that. But there are some numbers that are relevant, even if I cannot match Enrico Fermi’s ability to perform back-of-the-envelope calculations (how many piano tuners are there in New York?).
Depending on how you parse the data, skin disease is said to be the commonest reason to visit a GP in the UK. Estimates suggest there are 15 million visits to GPs with a skin problem each year. In many countries all these patients would go direct to an office dermatologist (this distinction is important, but marginal to my argument here).
Each year about one million people with skin disease are referred from primary care to secondary care. New-to-follow-up ratios are falling — being forced down without any clinical reason, because of money — but assume 1 to 1.5. In terms of visits the ratio is much higher, because we have to include surgery and phototherapy: at a guess 1:4, which would mean 4 million visits. This seems frighteningly high.
There are around 70,000 GPs on the register, and around 600 consultant dermatologists in the UK. GP recruitment problems are well known, and estimates are that close to one third of all dermatologist posts are vacant (‘no suitable candidates’). There are juniors (sic) on top, and other miscellaneous doctors too. In terms of new patients, I see 26 per week, and I am clinically part time, so around 1,000 per year, plus some on-call work, which is light. If we divide the 1,000,000 new referrals by 400 consultants, we get each consultant seeing around 2,500. But if we add in juniors, staff grades and locums, the numbers *feel* about right.
If we were to look at academic staffing, we have about 30-35 clinical academics in dermatology in the UK. They spend their time between clinical practice, research and teaching. Most UK students are taught for most of their time by people who are not ‘academics’ or at least by people without what in most subjects and in most advanced countries would be recognised as an academic apprenticeship. Skin biology or skin science is notable by its almost complete absence in many — possibly the majority— of medical schools. If we argue — and I would — that those who run and organise teaching in higher education need to view this task as a *professional* task, we are running with say 15 FTE providing the undergraduate teaching resource that underpins clinical practice and early training / education. Note: my argument is about undergraduate education, and not specialist training; and I believe that teaching is not a ‘bolt-on’ activity at the undergraduate level (if you don’t agree with this view, I suggest you could largely dispense with university medical schools).
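If you want to play with the arithmetic, here is a minimal back-of-the-envelope sketch in Python of the figures above. Every number in it is a rough guess taken from this post, not an official statistic, and the variable names are mine.

```python
# Back-of-the-envelope sketch of UK dermatology workload,
# using the rough figures quoted in this post (guesses, not official statistics).

NEW_REFERRALS_PER_YEAR = 1_000_000   # referrals from primary to secondary care
FOLLOW_UPS_PER_NEW = 4               # guess: 1:4 once surgery and phototherapy are included
CONSULTANT_POSTS = 600               # consultant dermatologist posts in the UK
VACANCY_RATE = 1 / 3                 # roughly one third of posts unfilled
NEW_PATIENTS_PER_PART_TIME_CONSULTANT = 1_000   # what I see per year, clinically part time

follow_up_visits = NEW_REFERRALS_PER_YEAR * FOLLOW_UPS_PER_NEW       # ~4 million
consultants_in_post = round(CONSULTANT_POSTS * (1 - VACANCY_RATE))   # ~400
new_per_consultant = NEW_REFERRALS_PER_YEAR / consultants_in_post    # ~2,500

print(f"Follow-up visits per year: {follow_up_visits:,}")
print(f"Consultants actually in post: {consultants_in_post}")
print(f"New referrals per consultant in post: {new_per_consultant:,.0f}")
print(f"Of which one part-time consultant covers about "
      f"{NEW_PATIENTS_PER_PART_TIME_CONSULTANT:,}; juniors, staff grades "
      f"and locums make up the rest.")
```

Nothing here is precise; the point is only the order of magnitude, which frames the question below.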
There is a simple way to frame any answer to my question. Do you think it is possible to produce and maintain a culture of learning and clinical expertise given the numbers above?
I try to avoid writing on this topic, finding it too depressing — although not as depressing as I once did, as I am closer to the end rather than the beginning. And there are signs of hope, just not where they once were.
There is an editorial in Nature titled ‘Early-career researchers need fewer burdens and more support’. It makes depressing reading. The contrast is with a talk on YouTube I listened to a few days back, by the legendary computer engineer (and Turing Award winner, and much else) Alan Kay, in which he points out that things were really much better in the 1960s, and that people at the time knew they were much better. Even within my short career, things were much better in 1990 than 2000, in 2000 than 2010, and so on. When people ask me whether it is sensible to pursue a career in science, I am nervous about offering advice. Science is great. Academia, in many places, is great. But you can only do most science or academia in a particular environment, and there are few places that I would want to work in if I were starting out. And I might not get into any of them, anyway (Michael Eisen’s comment: ‘never a better time to do science, never a worse time to be a scientist’). I will share a few anecdotes.
Maybe 10-15 years ago I was talking to somebody who — with no exaggeration — I would describe as one of the UK’s leading biologists. This person described how one of their offspring was at university and had, for the first few years, not taken his/her studies too seriously. Then things changed, and they wondered about doing a PhD and following a ‘classical’ scientific career. The senior biologist expressed concern, worried that there was now no sensible career in science, and that much as he/she had enjoyed their own career, he/she could no longer recommend it. There was some guilt, but your children are your children.
The second was a brief conversation with the late physicist John Ziman. I had read some of Ziman’s work — his ‘Real Science’ is, for me, essential reading for anybody who wants to understand what has happened to the Mertonian norms, and why science is increasingly dysfunctional — but he shared a bit of his life history with me. When he was appointed as a lecturer in physics at Cambridge, the topic of his lectures was ‘new’ and there were no established books. So he set out to remedy the situation and spent the first two years writing such a book (still available, I think), and after that turned his attention back to physics research, and later much more (‘you have to retire to have the time to do serious work’). He commented that this would simply be impossible now.
With respect to medicine, there have been attempts for most of my life to develop schemes to encourage and support young trainees. I benefited from them, but I question whether they target the real problem. There are a number of issues.
First, the model of training of clinical academics in medicine is unusual. Universities tend to want external funders to support the research training of clinical academics (Fellowships), but that is a model with severe limitations. Nurturing talent is a core business of the universities, and they need to devote resource to it. It is their responsibility. Of course, they need to train and support academics, not just researchers. This is what career progression within academia is about: lecturer, reader, professor and so on. What medical schools want to do is to offload the risk onto the person, and then only buy when the goods have been tasted. In a competitive world, where other career options are open, this might not work well. Worst of all, it funnels a large number of institutions — institutions that should show diversity of approaches — into the lowest common denominator of what is likely to be funded by the few central funders. Until you have independence of mind and action, you cut your chances of changing the world. (Yes, I hear you say, there is not enough money, but most universities need to cut back on ‘volume’.)
The second issue is about whether the focus should be on schemes encouraging young people into science. I know I may sound rather curmudgeonly, but I worry that much activity relating to pursuing certain careers is reminiscent of ‘Wonga-like’ business models. I think we should do better. If youngsters look at what life is like at 40, 50 and 60 or beyond, and like it, they might move in that direction. You would not need to encourage them — we are dealing with bright people. A real problem for science funding is that, for many individuals, it resembles a subsistence society, with little confidence about long-term secure funding, and little resilience against changes in political will. Just look at Brexit. I remember hearing somebody who had once considered a science career telling me that it seemed to him that most academics spent their life writing grants, and feeling uncomfortable about replacing what they wanted to do with what might be funded. Conversations about funding occupied more time than serious thinking. I listened nervously.
Finally, I take no pleasure in making the point, but I do not see any reason to imagine that things will get better over a ten or twenty year period. One of my favourite quotes from the economist John Kenneth Galbraith is to the effect that the denigration of value judgement is one of the ways the scientific establishment maintains its irrelevance. I think there is a lot in that phrase. If we were to ask the question, what is more critical: understanding genetics, or understanding how institutions work, I know where my focus would be. I suspect there is more fun there too, just that much of the intellectual work might not be within academia’s walls.
Note: After writing this I worried that people would think that I was opposing schemes to encourage young people, or that I failed to understand that we have to treat those with new ideas differently. That was not my intention. Elsewhere I have quoted Christos Papadimitriou, and he gets my world view, too.
“Classics are written by people, often in their twenties, who take a good look at their field, are deeply dissatisfied with an important aspect of the state of affairs, put in a lot of time and intellectual effort into fixing it, and write their new ideas with self-conscious clarity. I want all Berkeley graduate students to read them.”
“The immune system is unknowable, dynamic, complicated, and it always surprises you.” Stephen Deeks quoted in Science. And yet, useful discoveries are made, and have been made for a long time.
The University believes the Secretary of State should not have the ability to determine, or even to have significant influence over, which subjects universities can and cannot teach.
Written evidence submitted by the University of Cambridge (HERB 17). Higher Education and Research Bill Committee.
Years ago, we would have said that the state trying to control education would have been a characteristic of societies — Russia, China, spring to mind — that paid little attention to individual liberty or which failed to understand that most expertise — and power — should reside outwith government. No longer. Sadly, medical education is already increasingly subjugated to the state.
“The NHS has developed a widespread culture more of fear and compliance, than of learning, innovation and enthusiastic participation in improvement.” It also said “Virtually everyone in the system is looking up (to satisfy an inspector or manager) rather than looking out (to satisfy patients and families)” and “managers ‘look up, not out.’”
The Institute for Healthcare Improvement (IHI), a US organisation, report on the NHS (quoted by Brian Jarman in the BMJ: BMJ 2012;345:e8239, doi: 10.1136/bmj.e8239, published 19 December 2012).
The NHS could then become a threadbare charity, available to avoid the embarrassment of visible untreated illness. Julian Tudor Hart, BMJ 2016;354:i4934
Abraham Verghese, an infectious disease physician, gave a talk here in Edinburgh last week. It was a very mixed audience, but I suspect the many students who were there enjoyed it. I have not read any of his books — nor looked at his TED talk — but his Wikipedia entry gives you a flavour of how interesting he is, and how varied a career can be — when you have courage.
One issue that came up tangentially was the history of diagnosis, and there were some opinions ventured by the audience in terms of when diagnosis was historically established. I may have missed key points, but I found it hard to accept that the idea of diagnosis was something you could date except in very broad terms, even less that you could associate it with the 1870s, or with the idea of stethoscopes being a key marker of when modern ideas of diagnosis were established. For instance — and since the lecturer was an ID physician — my first thoughts turned to scabies. The scabies mite was identified in the 1690s, and it was recognised as the cause of the disease (I am not quoting primary sources so let me know if…). So here we have a clear linking of symptoms, signs, causality, a causal agent, and a broader theory about pathogenesis and epidemiology. So it got me thinking about how I view the topic of diagnosis.
Diagnosis is the mapping of one state to another, with the two states being linked by a network of attributes. Diagnosis is a suitcase term: it may contain lots of different tools, tools suited to various purposes, and tools for which we may find different purposes over time. Diagnosis represents an attempt to classify the world into particular states, often with the goal of making some predictions about some other state. Most of the time, we think in terms of prediction, about what might happen to that person with or without some intervention. If you see these physical signs (burrows) and the patient describes particular symptoms (itch), then the ‘state’ is scabies. If the diagnosis is correct, you can say something about what causes the state, what might happen, and what effect a particular intervention (permethrin / malathion etc) might have. If you are lucky, you can feel happy with causal arrows linking much of what you say and think. Prediction is important, but it is of course not the only quality we want in a theory. We tend to prefer some theories to others, even when they make similar predictions. Think of Copernicus. We tend to prefer one theory over another, irrespective of whether both allow the same quantitative clinical predictions.
Our suitcase of diagnostic concepts has changed over time, however. For instance, even in modern medicine, causality is often lacking. We may use proxy or associated factors to define particular states. We may use simple heuristics as our guide to action, even though we have little idea of where the causal arrows are going. Think of much of psychiatry. This does not mean we are powerless, just that we are more ignorant than we would like. We are, of course, wedded to particular metaphysical systems.
Diagnosis might have been used, in the absence of knowledge about particular interventions, to attribute blame, as an explanation. If a patient behaved in this way or suffered some state, it was a divine punishment for some behaviour. Now, I may not agree with this world view, but this too is diagnosis. The theory may seem wrong, it may seem primitive, but then my ideas of physics are primitive too if they are applied to the world of the very small.
Galen thought in terms of the mean, and of treatment by opposites (hot treatments for cold diseases; moist treatments for drying diseases, etc). This all sounds slightly crazy to modern ears (although dermatologists among you will point out that the latter has definite therapeutic merit within very particular skin states). Or how about the idea of therapeutic ‘signatures’? This is from Ian Hacking:
Syphilis is signed by the market place where it is caught; the planet Mercury has signed the market place; the metal mercury, which bears the same name, is therefore the cure for syphilis.
As Hacking points out, this allowed Paracelsus to kill lots of people simply because he knew that mercury worked. But whatever the metaphysical system linking two states, the idea of diagnosis was firmly established. Just as Newton got most things right in his physics, most of us ignore what came after — except when we use GPS.
Diagnosis was not limited to medicine. Our ancestors spent their lives making diagnoses about what to eat and what not to eat. Making diagnoses about what particular weather states would do to crops etc. Plumbers make diagnoses, as do any humans trying to make sense of an environment that is not static, and where we value intervention.
What may have been specific to medicine was our hang-ups about whether there was something special about humans, and whether the simple rules, experimentation and demonstrations of efficacy that allowed other types of human technological progress, or indeed much of everyday life, applied in the domain of disease. Successful interventions or demonstrations will have had an effect on metaphysical beliefs in the long term. And of course much of this story is tied up with the growth of that particular branch of formal knowledge we call science. 1870 is just a little late.
Hacking I. The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge: Cambridge University Press; 1984.
“Physicists studying sport have established that many fieldsmen are very good at catching balls, but bad at answering the question: “Where in the park will the ball land?” Good players don’t forecast the future, but adapt to it. That is the origin of the saying “keep your eye on the ball”.
As complex systems go, the interaction between the ball in flight and the moving fieldsman is still relatively simple. In principle, most of the knowledge needed to compute trajectories and devise an optimal strategy is available: we just don’t have the instruments or the time for analysis and computation. More often, the relevant information is not even potentially knowable. The skill of the sports player is not the result of superior knowledge of the future, but of an ability to employ and execute good strategies for making decisions in a complex and changing world. The same qualities are characteristic of the successful executive. Managers who know the future are more often dangerous fools than great visionaries.”
I think you could say the same about education and medicine: you can say less than you know.
Donald “D.A.” Henderson, an American epidemiologist who led the international war on smallpox that resulted in its eradication in 1980, has died.
“But it was in the fight on smallpox — perhaps the most lethal disease in history and one that killed an estimated 300 million people in the 20th century alone — that he became known around the world…”
“I think it can be fairly said that the smallpox eradication was the single greatest achievement in the history of medicine,” Richard Preston, the best-selling author of volumes including “The Hot Zone,” about the Ebola virus, and “The Demon in the Freezer,” about smallpox, said in an interview. He described Dr. Henderson as a “Sherman tank of a human being — he simply rolled over bureaucrats who got in his way.”
“If we’re really going to save money in health care, it means that somebody’s going to get paid less,”
Of course, this is not always true — but then, you should never say never in medicine.
Austin Frakt, quoted in the Boston Globe
The (medical) future is here, just unevenly distributed
The lessons from Glybera, the first gene therapy to be sold in Europe, still loom large. It cures a genetic condition that causes a dangerously high amount of fat to build up in the blood system. Priced at $1m, the product has only been bought once since 2012 and stands out as a commercial disaster. Economist
Terminology is a big problem in dermatology, and not just for dermatologists or other doctors. Years ago, many doctors avoided referring to basal cell carcinomas (BCC) as cancers. The reasons are obvious: whilst they are invasive and are carcinomas, their behaviour puts them in a class of their own. Once you say carcinoma to many patients, the baggage is so large, that you must then give a mini lecture explaining the inadequacy of generic terminology. And you then worry whether they believe you entirely. When the newspapers run the summer stories, what sort of cancer are they talking about: BCC or melanoma? Anyway, my point is that diagnosis is both positive and negative. Negative, in the sense that you have to explain what any entity is not.
I was reminded of this when I saw a patient from East Asia for whom English was perhaps a second or third language. She had what was, for her, a frightening pigmented lesion. Fortunately it was medically banal and harmless. I gave the ‘positive name’, and then outlined the list of ‘negatives’: it wasn’t this, it wasn’t that, and so on. Suddenly, her eyes lit up, and a smile crossed her face. ‘I’m safe,’ she said. Yes.
“Being in the hospital is horrible. They woke me up at 4:00 am once to ask whether I was sleeping well.”
Via Philip Greenspun, describing a friend with a terminal disease, and why he wanted to avoid chemo.
“Johnson deftly states, “the curse of this age of microspecialization and the proliferation of ‘’omics’ is to separate the ridiculome from the relevantome.””
From a review of George Johnson’s ‘The Cancer Chronicles’ in Science.
“Taxpayers have spent billions of pounds in interest costs through the private finance initiative to enable tens of billions of pounds to be taken off the government balance sheet – money that should have been spent on the schools and hospitals. When in opposition, the present government acknowledged the issues and committed itself to clarity and openness in presentation. But it seems the exigencies of office have proved too demanding. Why lose weight when you can reset the scales?”
John Kay on ‘fiddling’ (aka lying) by HMG
‘We also identified a significant increase over time in the percentage of people who incurred catastrophic health expenditures (greater than 30 percent of the household income) in the Czech Republic, Italy, and Spain…
These findings indicate the substantial weakening of financial protection for people ages fifty and older in European health systems after the Great Recession.’
From ‘The Great Recession And Increased Cost Sharing In European Health Systems’, published in Health Affairs.
What continues to surprise me is how long it takes to appreciate the catastrophe that has been allowed to unfold, without any good reason.
“Scandals, however, raised questions about whether to trust U.S. researchers. In 1964, news broke that 22 patients at the Jewish Chronic Disease Hospital in Brooklyn had been injected with cancer cells without their knowledge”
NEJM. Worth a read.
A basic introduction to clinical photobiology for our students.