A query for the catchy expression “global pandemic” or “global pandemic preparedness” in scientific databases, restricted to a 2009–19 range, will return more than 1,400 results in JAMA (Journal of the American Medical Association), 30 in-depth papers in arXiv (Cornell University), and a stunning 17,000 results in Google Scholar, which aggregates multiple repositories. As for the general public, it had the choice between no fewer than 98 TED Talks on the matter.
We had no excuses.
Just before the H1N1 episode in 2009, France had accumulated an inventory of 1 billion high-protection masks (N95 equivalent), a consequence of the SARS epidemic. In the same way, the government had stored 20 million doses of vaccine. Later, the Health Ministry responsible for this precaution was blasted for this “excessive” stockpile — which was eventually destroyed as it decayed.
Frédéric Filloux makes (and has for a while been making) an argument about journalism and journalism schools that I have not seen advanced by anybody else. The changing economics of the press mean that the modern Fourth Estate lacks expertise across many domains of modern life. He suggests that journalism schools need to regroup, change how they work, and take advantage of the fact that most expertise will reside with those who did not go to journalism school in their 20s. Rather, the press will need to rely on those with professional skills gained in particular domains. He writes:
The shortage of experts is also rooted in a priority shift that plagues major news organizations. All of them became obsessed with not being left in the dust by digital-native organizations riding the wave of social networks. As a consequence, newsroom managers, supported by bean-counters, found it clever to hire bunches of expendable digital “content” serfs who were mandated to keep up with the social frenzy. It was seen as a better investment than keeping a former doctor turned medical correspondent, even if he or she was loaded with decades of expertise, able to lean on a reliable network of practitioners, surgeons, epidemiologists, public health officials, etc. A pure cost vs. benefit choice, and ultimately a bad one.
I do not think there will be any shortage of candidates who possess medical degrees and medical experience.
We are living in dark times, and since I have been sifting through the ashes of a career, it is no surprise that failures signal through like radioactive tracers. Below is one.
Through most of my career I have been interested in the relation between science and medicine. In truth, if what matters is what you think about in the shower, I have been more interested in the relation between science and medicine than I have been interested in either activity in isolation. If I were to use a phrase to describe my focus, although it is a term that I would not have used then, I am interested in the epistemological foundations of medical practice. Pompous, I agree. I could use another phrase: what makes medicine and doctors useful? Thinking about statistical inference is a part of this topic, but there is much more to explore.
These issues became closer to my consciousness soon after I moved to Edinburgh. My ideas about what was going on were not shared by many locally, and I was nervous about going public in person rather than in print at a Symposium hosted by the Royal College of Physicians of Edinburgh. My nervousness was well founded: whilst I liked my abstract, my talk went down badly. Not least because it was truly dreadful (and the evident failure still rankles). Jan Vandenbroucke, one of the other speakers and somebody whose work I greatly admire (his paper ‘Homoeopathy trials: going nowhere’ (Lancet 1997;350:824) was to me the most important paper published in the Lancet in the 1990s), said some kind words to me afterwards, muttering that I had tried to say far too much to an audience that was ill prepared for my speculations. All true, but he was just being kind. It was worse than that.
Anyway, some tidying up deep in my hard drive surfaced the abstract. I still like it, but it is a shame that at the appropriate time I was unable to explain why.
JAMES LIND SYMPOSIUM: From scurvy to systematic reviews and clinical guidelines: how can clinical research lead to better patient care? (31-10-2003, RCPE Edinburgh)
There are three great branches of science: theory, experiment, and computation. (Nick Trefethen)
Advance in the middle third of the twentieth century, the golden age of medical research, was predicated on earlier discoveries in the nineteenth century in both physiology and medicinal chemistry (1). Genetics dominated biology in the latter third of the twentieth century, and many believe changes in medical practice will owe much to genetics over the next third of a century (1). I disagree, and I will give an alternative view more credence: in 30 years’ time we will look back more to von Neumann and Morgenstern than we will to Watson and Crick. What the Nobel laureate Herbert Simon referred to as The Sciences of the Artificial (2), subjects which have largely been peripheral to medicine, will become central.
Over the last 20 years we have seen the first (largely inadequate, I would add) attempts to explicitly demarcate methods of obtaining and promulgating knowledge about clinical practice (3,4). This has usually taken the form of proselytising a particular set of terms – systematic reviews, evidence-based practice, guidelines and the like – terms that have little rigour or much to commend them. What is interesting, however, is that they reflect a long overdue renaissance of interest in the practice of medicine and medical epistemology.
The change of emphasis from the natural to the artificial is being driven by a number of forces, mostly extraneous to biomedicine: the increasing instrumental role of science in medicine and society; the increase in corporatisation of knowledge, whether by private corporations or monopsonistic institutions like the NHS (5); the rising costs of healthcare; and a remaining inability to frame questions with broad support about how to choose between alternative disease states at the level of society (6,7).
I will try to illustrate some of these issues by the use of three examples. First, the widespread use of a mode of statistical inference largely ill-suited to medicine, namely Neyman-Pearson hypothesis testing (decision-making), and the way in which this paradigm has been used to undermine expert opinion (8). Second, I will argue that we need to think much harder about clinical practice and fashion a more appropriate theoretical underpinning for clinical behaviour. Third, I will suggest how UK medical schools, in so far as they remain interested in clinical practice, should look to alternative models, perhaps business and law schools, for ideas of how they should operate (2).
Afterword. The symposium used structured abstracts, a habit that might have a place somewhere in this galaxy, but out of choice I would prefer to live in another one. Anyway, in the published version, it reads:
A fair cop.
Alfred North Whitehead: “Some of the major disasters of mankind have been produced by the narrowness of men with a good methodology” (The Function of Reason).
Comments that seem germane to some of our current-day Covid-19 debates.
People are always demanding that medical students must learn this or that (obesity, psychiatry, dermatology, ID, eating disorders). The result is curriculum overload, a default in favour of rote learning by many students, and the inhibition of curiosity. It was not meant to be like this, but the GMC, the NHS, and others have pushed a vision of university medical education that shortchanges both the students and medical practice over the long term. Short-termism rules. Instead of producing graduates who are ready to learn clinical medicine in an area of their choice, we expect them to somehow come out oven-ready at graduation. I do not believe it is possible to do this to the level of safety that many other professions demand, nor is this the primary job of a university. Sadly, universities have given up arguing, intimidated by the government and their regulatory commissars, and nervous of losing their monopoly on producing doctors.
But I will make a plea that one area really does deserve more attention within a university: the history of how medical advance occurs. No, I do not mean MCQs asking for the date of birth of Robert Koch or Lord Lister, but a feel for the historical interplay of convention and novelty. Without this, our students and our graduates are almost confined to living in the present, unaware of the past, and unable to imagine how different the future will be. Below is one example.
In 1938 Albert Hofmann, a chemist at the Sandoz Laboratories in Basel, created a series of new compounds from lysergic acid. One of them, later marketed as Hydergine, showed great potential for the treatment of cerebral arteriosclerosis. Another salt, the diethylamide (LSD), he put to one side, but he had “a peculiar presentiment,” as he put it in his memoir LSD: My Problem Child (1980), “that this substance could possess properties other than those established in the first investigations.”
In 1943 he prepared a fresh batch of LSD. In the final process of its crystallization, he started to experience strange sensations. He described his first inadvertent “trip” in a letter to his supervisor:
At home I lay down and sank into a not unpleasant, intoxicated-like condition, characterized by extremely stimulated imagination. In a dream-like state, with eyes closed (I found the daylight to be unpleasantly glaring), I perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors.
After eliminating chloroform fumes as a possible cause, he concluded that a tiny quantity of LSD absorbed through the skin of his fingertips must have been responsible. Three days later he began a program of unsanctioned research and deliberately ingested 250 micrograms of LSD at 4:20 PM. Forty minutes later, he wrote in his lab journal, “Beginning dizziness, feeling of anxiety, visual distortions, symptoms of paralysis, desire to laugh.” He set off home on his bicycle, accompanied by his laboratory assistant. This formal trial of what Hofmann considered a minute dose of LSD had more distressing effects than his first chance exposure:
Every exertion of my will, every attempt to put an end to the disintegration of the outer world and the dissolution of my ego, seemed to be wasted effort. A demon had invaded me, had taken possession of my body, mind, and soul. I jumped up and screamed, trying to free myself from him, but then sank down again and lay helpless on the sofa…. I was taken to another world, another place, another time.
A doctor was summoned but found nothing amiss apart from a marked dilation of his pupils. A fear of impending death gradually faded as the drug’s effect lessened, and after some hours Hofmann was seeing surreal colors and enjoying the play of shapes before his eyes.
Many editors of learned medical journals now automatically turn down publications describing the sort of scientific investigation that Albert Hofmann carried out on himself. Institutional review boards are often scathing in their criticism of self-experimentation, despite its hallowed tradition in medicine, because they consider it subjective and biased. But the human desire to alter consciousness and enrich self-awareness shows no sign of receding, and someone must always go first. As long as care and diligence accompany the sort of personal research conducted by Pollan and Lin, it has the potential to be as revealing and informative as any work on psychedelic drugs conducted within the rigid confines of universities.
Richard Horton in the Lancet writes:
Imagine if the entire edifice of knowledge in medicine was built upon a falsehood. Systematic reviews are said to be the highest standard of evidence-based health care. Regularly updated to ensure that treatment decisions are built on the most up-to-date and reliable science, systematic reviews and meta-analyses are widely used to inform clinical guidelines and decision making. Powerful organisations have emerged to construct a knowledge base in medicine underpinned by the results of systematic reviews. One such organisation is Cochrane, with 11 000 members in over 130 countries. This extraordinary movement of people is justifiably passionate about the idea that it is contributing to better health outcomes for everyone, everywhere. The industry that drives the production of systematic reviews today is financed by some of the most influential agencies in medical research. Cochrane, for example, points to three funders providing over £1 million each—the UK’s National Institute for Health Research (NIHR), the US National Institutes of Health (NIH), and Australia’s National Health and Medical Research Council (NHMRC).
Well, it really is a bit late for all this soul searching. See my earlier post here ‘Mega-silliness’ (commenting on what others had already pointed out); or my Evidence Based Medicine: the Epistemology That Isn’t, written over 20 years ago; and my contribution to the wake (even if I didn’t put my hand in my pocket), Why we should let “evidence-based medicine” rest in peace. The genesis of EBM was as a cult whose foundational myth was that P values could act as a truth machine. Those followers who had originally hoped for a place in the promised afterlife soon settled for paying the bills, and EBM morphed into a career opportunity for those who found accountancy too daring. So, pace John Mayall on Jazz Blues Fusion, don’t come here to listen to an old record. I promise.
Dr Chris Day writes:
Two weeks ago, I swabbed my first positive Covid-19 patient during an A&E locum shift. I must say that, back then, I hadn’t fully taken in what we as a country will have to face over the coming months. The reports from colleagues in Italy and China are beyond belief.
The UK has been left to fight Covid-19 with half the Intensive Care beds per capita of Italy. Back in 2014, the trigger for my whistleblowing case was my attempt to secure more ICU resources for South East London (see Private Eye).
Instead of spending 5 years and £700k fighting/smearing me and damaging whistleblowing law, the NHS could have just fixed the problem. There has never been a more important time for the public and the politicians to understand Intensive Care resourcing and what is decided on their behalf by NHS leaders.
Two letters in Lancet Oncology. This bloody story never ends. We have not invented truth machines: judgement has never been exiled from discovery.
Since he shared every passing observation online, it was not surprising that on December 30th he put up a post about an odd cluster of pneumonia cases at the hospital. They were unexplained, but the patients were in quarantine, and they had all worked in the same place, the pungent litter-strewn warren of stalls that made up the local seafood market. Immediately this looked like person-to-person transmission to him, even if it might have come initially from bats, or some other delicacy. Immediately, too, it raised the spectre of the SARS virus of 2002-03, which had killed more than 700 people. He therefore decided to warn his private WeChat group, all fellow alumni from Wuhan University, to take precautions. He headed the post: “Seven cases of SARS in the Huanan Wholesale Seafood Market”. That was his mistake.
The trouble was that he did not know whether it was actually SARS. He had posted it too fast. Within an hour he corrected it, explaining that although it was a coronavirus, like SARS, it had not been identified yet. But to his horror he was too late: his first post had already gone viral, with his name and occupation undeleted, so that in the middle of the night he was called in for a dressing down at the hospital, and on January 3rd he was summoned to the police station.
But most of Case and Deaton’s ire focuses on the health care industry, which not only underperforms but is also wrecking the US economy. We [USA] spend twice per capita what France spends on health care, but our life expectancy is four years shorter, our rates of maternal and infant death are almost twice as high, and, unlike the French, we leave 30 million people uninsured. The amount Americans spend unnecessarily on health care weighs more heavily on our economy, Case and Deaton write, than the Versailles treaty reparations did on Germany’s in the 1920s. If, decades ago, we’d built a health system like Switzerland’s, which costs 30 percent less per capita than ours does, we’d now have an extra trillion dollars a year to spend, for example, on replacing the pipes in the nearly four thousand US counties where lead levels in drinking water exceed those of Flint, Michigan, and on rebuilding America’s bridges, railroads, and highways—now so rundown that FedEx replaces delivery van tires twice as often as it did twenty years ago.
In the US, health insurance accounts for 60 percent of the cost of hiring a low-wage worker. Many employers opt instead to hire contract workers with no benefits, or illegal immigrants with no rights at all.
Terrific article on Covid-19 (SARS-CoV-2) in the LRB by Rupert Beale. He says it was written in haste, but it doesn’t read that way. It contains some memorable lines.
As the US health secretary Michael Leavitt put it in 2006, ‘anything we say in advance of a pandemic happening is alarmist; anything we say afterwards is inadequate.’
And how do you think hard about research funding for the long term? (I am old enough to remember when stroke and dementia were virtually non-subjects as far as ‘good research funding’ was concerned.)
Virologists need more than clever tricks: we also need cash. Twenty years ago, funding wasn’t available to study coronaviruses. In 1999, avian infectious bronchitis virus was the one known truly nasty coronavirus pathogen. Only poultry farmers really cared about it, as it kills chickens but doesn’t infect people. In humans there are a number of fairly innocuous coronaviruses, such as OC43 and HKU1, which cause the ‘common cold’. Doctors don’t usually bother testing for them – you have a runny nose, so what?
And note the conditional tense:
The global case fatality rate is above 3 per cent at the moment, and if – reasonable worst case scenario – 30-70 per cent of the 7.8 billion people on earth are infected, that means between 70 and 165 million deaths. It would be the worst disaster in human history in terms of total lives lost. Nobody expects this, because everyone expects that people will comply with efficient public health measures put in place by responsible governments.
And to repeat my own mantra (stolen from elsewhere): the opposite of science is not art, but politics:
The situation isn’t helped by a president [Trump] who keeps suggesting that the virus isn’t that bad, it’s a bit like flu, we will have a vaccine soon: stopping flights from China was enough. Tony Fauci, the director of the National Institute of Allergy and Infectious Disease, deftly cut across Trump at a White House press briefing. No, it isn’t only as bad as flu, it’s far more dangerous. Yes, public health measures will have to be put in place and maintained for many months. No, a vaccine isn’t just around the corner, it will take at least 18 months. Fauci was then ordered to clear all his press briefings on Covid-19 with Mike Pence in advance: the vice president’s office is leading the US response to the virus. ‘You don’t want to go to war with a president,’ Fauci remarked.
And Beale ends by quoting an ID colleague.
This is not business as usual. This will be different from what anyone living has ever experienced. The closest comparator is 1918 influenza.
Caution: pace the author, ‘This is a fast-moving situation, and the numbers are constantly changing – certainly the ones I have given here will be out of date by the time you read this.’
Link. (London Review of Books: Vol. 42 No. 5, 5 March 2020: “Wash your Hands”: Rupert Beale)
I titled a recent post musing over my career as ‘The Thrill is Gone’. But I ended on an optimistic note:
‘The baton gets handed on. The thrill goes on. And on’
But there are good reasons to think otherwise. Below is a quote from a recent letter in the Lancet by Gagan Bhatnagar. You can argue all you like about definitions of ‘burnout’, but good young people are leaving medicine. The numbers who leave for ever may not be large, but I think some of the best are going. What worries me as much is those who stay behind.
The consequences of physician burnout have been clearly observed in the English National Health Service (NHS). F2 doctors (those who are in their second foundation year after medical school) can traditionally go on to apply to higher specialist training. Recent years have seen an astounding drop in F2 doctors willing to continue NHS training, with just over a third (37·7%) of F2 doctors applying to continue training in 2018, a decrease from 71·3% in 2011. Those taking a career break from medicine increased almost 3-fold from 4·6% to 14·6%. With the NHS already 10 000 doctors short, the consequences of not recruiting and retaining our junior workforce will be devastating.
Henry characterised the less attractive teaching rounds as examples of ‘shifting dullness’.
Henry Miller (apologies, a medic joke)
My earliest conscious memory of disease and doctors was in the management of my atopic dermatitis. Here is Sam Shuster writing poetically about atopic dermatitis in ‘World Medicine’ in 1983.
A dozen years of agony; years of sleeplessness for child and parents, years of weeping, itching, scaling skin, the look and feel of which is detested.
The poverty of our treatments is made all the worse by the unfair raising of expectations: I don’t mean by obvious folk remedies; I mean medical folk remedies like the recent pseudoscientific dietary treatments which eliminate irrelevant allergens. There neither is nor ever was good evidence for a dietary mechanism. And as for cows’ milk, I would willingly drown its proponents in it. We have nothing fundamental for the misery of atopic eczema and that’s why I would like to see a real treatment—not one of those media breakthroughs, and not another of those hope raising nonsenses like breast-feeding: I mean a real and monstrously effective treatment. Not one of your P<.05 drugs the effect of which can only be seen if you keep your eyes firmly double-blind, I mean a straightforward here today and gone tomorrow job, an Aladdin’s salve—put it on and you have a new skin for old.
Nothing would please me more in the practice of clinical dermatology than never again to see a child tearing its skin to shreds and not knowing how long it will be before it all stops, if indeed it does.
Things are indeed better now, but not as much as we need: we still don’t understand the itch, nor can we easily block the neural pathways involved. Nor has anything replaced the untimely murder of ‘World Medicine’. A glass of milk has never looked the same since, either.
There is an interesting review in the Economist of ‘The Great Pretender: The Undercover Mission that Changed Our Understanding of Madness’, written by Susannah Cahalan. The book is the story of the American psychologist David Rosenhan who “recruited seven volunteers to join him in feigning mental illness, to expose what he called the ‘undoubtedly counter-therapeutic’ culture of his country’s psychiatry”.
Rosenhan’s studies are well known and were influential, and some might argue that they had a beneficial effect on subsequent patient care. The question is whether they were true. The review states:
“in the end Rosenhan emerges as an unpalatable symptom of a wider academic malaise”.
As for the ‘malaise’, the reviewer goes on:
Many of psychology’s most famous experiments have recently been discredited or devalued, the author notes. Immense significance has been attached to Stanley Milgram’s shock tests and Philip Zimbardo’s Stanford prison experiment, yet later re-runs have failed to reproduce their findings. As Ms Cahalan laments, the feverish reports on the undermining of such theories are a gift to people who would like to discredit science itself.
I have a few disjointed thoughts on this. There are plenty of other considered critiques of the excesses of modern medical psychiatry. Anthony Clare’s ‘Psychiatry in Dissent’ was for me the best introduction to psychiatry. And Stuart Sutherland’s ‘Breakdown’ was a blistering and highly readable attack on medical (in)competence as much as on the subject itself (Sutherland was a leading experimental psychologist, and his account is autobiographical). And might the cross-country diagnostic criteria studies not have happened without Rosenhan’s work?
As for undermining science (see the quote above), I think unreliable medical science is widespread, and possibly there is more of it than in many past periods. Simple repetition of experiments is important but not sufficient, and betrays a lack of understanding of why some science is so powerful.
Science owes its success to its social organisation: conjectures and refutations, to use Popper’s terms, within a community. Just repeating an experiment under identical conditions is not sufficient. Rather, you need to use the results of one experiment to inform the next, and with the accumulation of new results, you need to build a larger and larger edifice which, whilst having greater explanatory power, is more and more intolerant of errors at any level. Building large structures out of Lego only works because of the precision engineering of each of the component bricks. But any errors only become apparent when you add brick-on-brick. When a single investigator or group of investigators have skin in the game during this process — and where experimentation is possible — science is at its strongest (the critiques can of course come from anywhere).
An alternative process is when the results of a series of experiments are so precise and robust that everyday life confirms them: the lights go on when I click the switch. This harks back to the reporting of science as ‘demonstrations’.
By these two standards much medical science may be unreliable. First, because the fragmentation of enquiry discourages the creation of broad explanatory theories or tests of the underlying hypotheses. The ‘testing’ is more whether a publishable unit can be achieved rather than nature understood. Second, in many RCTs or technology assessments there is little theoretical framework on which to challenge nature. Nor can everyday practice act as the necessary feedback loop in the way the tight temporal relationship between flipping the switch and seeing the light turn on can.
Perhaps, perhaps not. But when and where is even more important.
Hailed as a maths prodigy at school, Shields accepted a junior position at Merrill Lynch after studying engineering, economics and management at Oxford University because the trading room floor offered him a thrilling, dynamic environment. He was not alone: of 120 engineers in his year group at university, Shields added, only five went into engineering.
I think we should be much more cautious in attempting to direct young people’s choices beyond providing them with an education. We should feel proud of their independence of mind, remembering that supply-side factors will likely win out over central planning. It is the supply side that we need to deal with, not least Putt’s Law. The same applies to medicine.
This personal story is worth a read for other lessons, too.
The government has instructed Health Education England to consult patients and the public on what they need from “21st century” medical graduates.
It won’t end well.
One-third of everyone employed in London, 1.6 million people, work at night.
In 2018, at least 8,855 people slept rough on the streets of London, a 140% increase over the past decade, with similar trends globally.
“If biology is difficult, it is because of the bewildering number and variety of things one must hold in one’s head”.
John Maynard Smith (1977).
Leo Szilard recalled that, when he did physics, he could lounge in the bath for hours and hours, just thinking. Once he moved into biology things were never the same: he was always having to get out to check some annoying fact. Dermatology is worse, trust me.
I spent near on ten years thinking about automated skin cancer detection. There are various approaches you might use — cyborg human/machine hybrids were my personal favourite — but we settled on more standard machine learning approaches. Conceptually, what you need is straightforward: data to learn from, and ways to leverage the historical data against future examples. The following quote is apposite.
One is that, for all the advances in machine learning, machines are still not very good at learning. Most humans need a few dozen hours to master driving. Waymo’s cars have had over 10m miles of practice, and still fall short. And once humans have learned to drive, even on the easy streets of Phoenix, they can, with a little effort, apply that knowledge anywhere, rapidly learning to adapt their skills to rush-hour Bangkok or a gravel-track in rural Greece.
You see exactly the same thing with skin cancer. With a relatively small number of examples, you can train (human) novices to be much better than most doctors. By contrast, with the machines you need literally hundreds of thousands of examples. Even when you start with large databases, as you parse the diagnostic groups, you quickly find out that for many ‘types’ you have only a few examples to learn from. The rate-limiting factor becomes acquiring mega-databases cheaply. The best way to do this is to change data acquisition from a ‘research task’ to a matter of grabbing data that was collected routinely for other purposes (there is a lot of money in digital waste — ask Google).
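This sample-efficiency gap is easy to sketch with a toy learning curve. The snippet below trains a 1-nearest-neighbour classifier on two synthetic Gaussian clusters (purely illustrative stand-ins for lesion features; none of this is real clinical data or any actual diagnostic pipeline) and reports accuracy as the number of training examples per class grows.

```python
import math
import random

random.seed(0)

def sample(label, n):
    # Two synthetic "feature" clusters, one per class; illustrative only.
    cx, cy = (0.0, 0.0) if label == 0 else (2.0, 2.0)
    return [((random.gauss(cx, 1.0), random.gauss(cy, 1.0)), label)
            for _ in range(n)]

def nn_predict(train, point):
    # 1-nearest-neighbour: the simplest "learn from examples" baseline.
    return min(train, key=lambda t: math.dist(t[0], point))[1]

def accuracy(n_train_per_class, n_test=200):
    # Train on n examples per class, test on a fresh sample.
    train = sample(0, n_train_per_class) + sample(1, n_train_per_class)
    test = sample(0, n_test) + sample(1, n_test)
    hits = sum(nn_predict(train, p) == y for p, y in test)
    return hits / len(test)

for n in (5, 50, 500):
    print(f"{n:4d} examples/class: accuracy {accuracy(n):.2f}")
```

Accuracy climbs quickly and then flattens: once the easy gains are made, the remaining errors come from genuine class overlap. The awkward clinical regime is the opposite end of the curve, rare diagnostic ‘types’ stuck at the few-examples-per-class stage, which is why database size, not algorithmic cleverness, tends to be the bottleneck.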
Noam Chomsky had a few statements germane to this and much else that gets in the way of such goals (1).
Plato’s problem: How can we know so much when the evidence is so slight?
Orwell’s problem: How do we remain so ignorant when the evidence is so overwhelming?
(1) Neil Smith, Noam Chomsky: Ideas and Ideals, Cambridge University Press, 1999.
In the essay “Telling,” he describes the upsetting case of the director of a hospital who, struck down by Alzheimer’s, is admitted to his own hospital. He behaves as if he were still running it, until one day by chance he picks up his own chart. “That’s me,” he says, recognizing his name on the cover. Inside, he reads “Alzheimer’s disease” and weeps. In the same hospital a former janitor is admitted; he too is convinced that he is still working there. He is given harmless tasks to perform; one day he dies of a sudden heart attack “without perhaps ever realising that he had been anything but a janitor with a lifetime of loyal work behind him.”
My mother, a nurse, took on such imagined roles when she too was demented and in a care home.
The article is about pharma and the way its interests flit according to perceived commercial rather than clinical value. There are two phrases that should make you sit up.
The first phrase is scary. We already know how dishonest much of pharma is. We can manage well without more perverse incentives. Short-term shareholder value wins over morality every time.
The second raises the question: if the evidence is good, why do you need to flog your medicine with advertising? A collection of data sheets — with citations — is all you need. And since most pharma spends more on advertising than on research, here is a simple way to reduce drug costs. (The answer, of course, is that advertising sells more than research — shame on us all.)
This is from the Guardian. The background is serious allergic reactions to food components, and whether information about what purchased food contains should be readily accessible. In her phrase ‘high-profile casualties on the high street’ she is referring to businesses; I am sure others may have read it differently.
But Kate Nicholls, the chief executive of UKHospitality, said a law change could have a serious impact on the viability of some of the 100,000 restaurants her organisation represents. “Hospitality and particularly high street restaurants are under intense cost pressures and are struggling,” she said. “We’ve had a number of high-profile casualties on the high street. Those businesses operate on tight net profit margins. And there’s no doubt some would not be able to cope with any significant change in their cost structure.”
(BTW: she thinks ‘training’ is the solution. Training and education are offered as the answer to everything…”education, education, education”. If only.)
Putt’s Law: “Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand.”
Putt’s Corollary: “Every technical hierarchy, in time, develops a competence inversion,” with incompetence being “flushed out of the lower levels” of a technocratic hierarchy, ensuring that technically competent people remain directly in charge of the actual technology while those without technical competence move into management.
The following is from an advert for a clinical academic in a surgical specialty, one with significant on call responsibilities. (It is not from Edinburgh).
‘you will be able to define, develop, and establish a high quality patient-centred research programme’
‘in addition to the above, you will be expected to raise substantial research income and deliver excellent research outputs’
Leaving aside the debasement of language, I simply cannot believe such jobs are viable long term. Many years ago, I was looked after by a surgical academic. A few years later he/she moved to another centre, and I was puzzled as to why he/she had made this career move. I queried an NHS surgeon in the same hospital about this career path. “Bad outcomes”, was the response. She/he needed a clean start somewhere else…
Traditional non-clinical academic careers include research, teaching and administration. Increasingly it is recognised that it is rarely possible to do all three well. For clinical academics the situation is worse, as 50% of your time is supposed to be devoted to providing patient care. Over time the NHS workload has become more onerous, in that consultants enjoy less support from junior doctors and NHS hospitals have become much less efficient.
All sorts of legitimate questions can be asked about the relation between expertise and how much of your time is devoted to that particular role. For craft specialities — and I would include dermatology, pathology, radiology in this category — there may be ways to stay competent. Subspecialisation is one approach (my choice) but even this may be inadequate. In many areas of medicine I simply do not believe it is possible to maintain acceptable clinical skills and be active in meaningful research.
Sam Shuster always drilled into me that there were only two reasons academics should see patients: to teach on them, and to foster their research. Academics are not there to provide ‘service’. Some juniors recognise this issue but are reluctant to speak openly about it. But chase the footfall, or lack of it, into clinical academic careers.
I am generally nervous about doctors or academics working for the government. Not that I think the roles are unnecessary, far from it. But what worries me is when instead of resigning from their academic role, they end up working for more than one master. So, I tire of the use of university titles when the principal employer does not subscribe to the academic ideal. I think if you have been at Stanford and you go to Washington it should be as a regular civil service post. I think the Americans get it right.
But the retiring CMO, Dame Sally Davies, in an interview in the RCP in-house journal ‘Commentary’ speaks some truths (Commentary | October 2019, p10).
I hear non-stop stories from unhappy juniors. In my day, we (consultants) made up the rotas for the juniors, but now administrators do it without understanding all of the issues. I’m told you can’t go back to the ‘firm’ structure because there are so many doctors in the system, but whenever I meet a roomful of young doctors I ask: ‘Does your consultant know your name?’ It’s rare that a hand goes up. We have depersonalised the relationships between doctors and that can’t help the workings of the medical team, or with the patients.
Your mileage may vary, but when I was a junior doctor it was us — not the consultants — who came up with the rotas. But the point she makes is important, and everybody knows this (already). At one time junior doctors didn’t work for the NHS, rather they worked within the NHS for other doctors, for good and bad. I find it hard to imagine that the current system can deliver genuine apprenticeship learning. Training and service may often have resembled a bickering couple, but there was a broader professional context that was shared. I am not certain that this is the case anymore. Whenever people keep pushing words such as ‘reflection’ or ‘professionalism’, you know — pace Orwell — that the opposite is going on. Politics is a dominant-negative mutation.
One of the mantras of psychometrics 101 is that you cannot have validity without reliability. People expel this phrase, like others equilibrate after eating curry and naan breads with too much gassy beer. In truth, the Platonic obsession with reliability diminishes validity. The world of science, and much professional practice, remains messy and vague until it is ‘done’. The search space for those diamonds of sense and order remains infinite.
Many years in the making, DSM-5 appeared in 2013, to a chorus of criticism; Harrington summarises this crisply (Gary Greenberg’s 2013 Book of Woe gives a painful blow-by-blow account). Harrington suggests that the proliferating symptom categories ceased to carry conviction; in the USA, the leadership of the US National Institute of Mental Health pivoted away from the DSM approach—“100% reliability 0% validity”, as Harrington writes—stating they would only fund projects with clearly defined biological hypotheses. The big players in the pharmaceutical industry folded their tents and withdrew from the field, turning to more tractable targets, notably cancer. For some mental health problems, psychological therapies, such as cognitive behaviour therapy (CBT), are becoming more popular, sometimes in combination with pharmacotherapy; as Harrington points out, even as far back as the 1970s, trials had shown that CBT outperformed imipramine as a treatment for depression.
Biological psychiatry’s decline and fall | Anne Harrington, Mind Fixers: Psychiatry’s Troubled Search for the Biology of Mental Illness, W. W. Norton (2019), 384 pp, US$27.95, ISBN 9780393071221 – ScienceDirect
Tobacco killed an estimated 100 million people in the twentieth century. Without radical action, it is projected to kill around one billion in the twenty-first.
I used to use the phrase — with apologies to Freud — ‘eppendorf envy’ to describe the bias in much medical innovation whereby useful advance pretended it owed its magic to ‘basic’ science. Doctors wore white coats in order to sprinkle the laboratory magic on as a veneer. But I like this cognate term also: innovation theatre.
To be fair to the banks, they weren’t the first institutions to recognise the PR value of what Rich Turrin has dubbed innovation theatre. Many institutions before them had cottoned on to the fact that it was a way to score easy points with the public and investors. Think of high impact campaigns featuring “the science bit” for L’Oréal’s Elvive shampoo or Tefal appliance ads: “We have the technology because we have the brains”.
The financial sector has seen enough innovation theatre | Financial Times. The original reference is here.