“The NHS will have a workforce plan for the first time since 2000, England’s secretary of state for health has announced.”
The concept of continuity of care is important and, with winter approaching rapidly in the UK, clinicians should lead the way in ensuring patients are looked after by the right specialist team, in the right place, first time — and avoid the ‘martini’ principle of hospital care: any time, any bed, anywhere. If we can reduce the number of boarded or outlying patients we will improve their care and also reduce overcrowding in hospitals.
Abraham Flexner is of course famous in clinical medicine for his report and its influence on medical schools in the USA (and, indirectly, the rest of the world). But I did not know of this book. Its message is one I would strongly recommend to those regulators and their ilk who are shorting the future with ‘reforms’ and ‘competency’.
Economics is perfectly capable of incorporating questions of morality, says Mr Tirole. It simply imposes structure on debate where otherwise indignation would rule. It might make sense to ban some markets, like dwarf-tossing,
and before you get alarmed:
[of dwarf-tossing] ….its existence diminishes the dignity of an entire group. But a market in organs or blood, for example, should not be rejected on the basis of instinctive moral repugnance alone. Policymakers should consider whether payment would raise the supply of donated blood or kidneys, improving or even saving lives. (It might not, if the motivation of money makes generous people afraid of looking greedy.) Whatever the answer, policymakers should make decisions from “behind the veil of ignorance”: without knowing whether any one person, including the policymakers themselves, would be a winner or loser from a particular policy, which society would they choose?
From a review of “Economics for the Common Good”, by Jean Tirole, in the Economist [link]. I assume the ‘veil’ reference is from John Rawls, an approach I have always liked, though I worry that I am missing something deeper.
I like computers (see previous post), but despair of them in the clinical context of keeping medical records. By contrast, nobody sane doubts that computers are advantageous in other medical contexts: imaging, radiotherapy, or even using an insulin pump. We don’t have problems with the latter instances because, self-evidently, the computers work, and they are the result of a culture of improvement. Not so with electronic medical records, where a neutral observer might think that the purpose is to save money in one budget at the expense of diminishing clinical care in another. The economists might talk about externalities, but essentially many electronic record-keeping systems are a form of pollution of the clinical workspace.
The following quote caught my eye because, whilst I was in Scandinavia recently, a dermatologist from Denmark was expressing frustration with how bad their computer systems are, and how older physicians choose to ignore them by retiring early. I heard a similar tale from the US in the summer, from a dermatologist who takes a financial hit because he has not implemented electronic records. He says he can either manage patients or do IT (and yes, he is planning to get out early).
Electronic medical records (EMRs) have resulted in increased documentation burden, with physicians spending up to 2 hours on EMR-related tasks for every 1 patient-care hour. Although EMRs offer care delivery integration, they have decreased physician job satisfaction and increased physician burnout across multiple fields, including dermatology.
I would add that I have read that the average ER doc on a shift in the US clicks his mouse 4000 times.
A long time ago, Richard Doll wrote an article pointing out that hospital record systems such as hospital activity analysis were perhaps useful to managers, but not much use for doctors or researchers. He was right, and I even published a paper saying similar things. My experience of electronic records in hospitals is that they are designed for the purpose of ‘management’ not clinical care. Contrary to what many say, these two activities have little in common, and share few goals. Our care system is not designed for care or caring, and our software is not designed for clinicians or patients. As for EMR, we are still waiting for our VisiCalc or Photoshop. If somebody can pull it off, it would be worth a Nobel.
Today (Oct. 17) was International Spreadsheet Day, marking the day back in 1979 that VisiCalc first shipped for the Apple II. Creator Dan Bricklin devised the program originally to help him crunch numbers for an assignment at Harvard Business School. [Link]
I dislike spreadsheets, and think the world will end not in fire, but in one giant bloody spreadsheet (or as a result of one). I also think they are a great metaphor for what is often wrong in medicine: an Excel spreadsheet can calculate a PASI (pissing awful psoriasis index, in lay terms) but it cannot tell you when somebody has bad psoriasis. People get confused about the epistemology here.
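The arithmetic really is trivial, which is rather the point. Here is a minimal sketch in Python of what such a spreadsheet does (the region weights and the 0–4 severity / 0–6 area scoring follow the standard PASI definition; the function itself is merely illustrative):

```python
# The four body regions and their fixed PASI weights.
REGION_WEIGHTS = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}

def pasi(scores):
    """Compute a PASI score.

    scores maps each region to (erythema, induration, desquamation, area),
    where the three severity items are graded 0-4 and area 0-6.
    Returns the total PASI, which ranges from 0 to 72.
    """
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        erythema, induration, desquamation, area = scores[region]
        total += weight * (erythema + induration + desquamation) * area
    return round(total, 1)
```

A machine can sum these numbers perfectly; what it cannot do is the prior act of judgement — grading the plaques, and deciding whether the patient in front of you actually has bad psoriasis.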
But these comments are a little sour. I have never really had to use spreadsheets, instead preferring something like R when I have need of matrices or, when I was really young, FORTRAN. And to be fair, even then I would (now) need to go via a spreadsheet / csv file to enter the data. And this ignores the fact that mostly spreadsheets are used as static tools to present multicoloured tables rather than to do calculations. But spreadsheets were, and are, revolutionary. I knew of Dan Bricklin, their inventor, but not all of the following story about how he invented them because he needed them to carry out a set assignment at Harvard Business School.
Bricklin knew all this, but he also knew that spreadsheets were needed for the exercise; he wanted an easier way to do them. It occurred to him: why not create the spreadsheets on a microcomputer? Why not design a program that would produce on a computer screen a green, glowing ledger, so that the calculations, as well as the final tabulations, would be visible to the person “crunching” the numbers?
Why not make an electronic spreadsheet, a word processor for figures?
Bricklin’s teachers at Harvard thought he was wasting his time: why would a manager want to do a spreadsheet on one of those “toy” computers? What were secretaries and accountants and the people down in DP for? But Bricklin could not be dissuaded. With a computer programmer friend from MIT named Bob Frankston, he set to work developing the first electronic spreadsheet program. It would be contained on a floppy disk and run on the then brand-new Apple personal computer. Bricklin and Frankston released VisiCalc (the name was derived from Visible Calculation) in late 1979.
There are some general points. Advances are often made by tool makers; and the best fillip for great software is a problem you personally need to solve (a point Paul Graham makes repeatedly). And of course, people who know better, will not think your efforts worthwhile. Little of this is true of hospital information systems.
Nevertheless, recent reviews find little evidence that cancer patients benefit after clinicians are taught communication.9, 14-16 Although training can change clinicians’ communication, for instance by increasing open questions or empathic statements, effects on patients’ satisfaction, well-being or clinical outcomes have proved elusive. The reviews’ authors recommend improved research designs in a continued effort to show that training does help patients. However, there are concerns that expert guidance on communication is often unrealistic,17-21 and many clinicians and students remain sceptical of it.11, 20, 22-31 Moreover, social scientists have challenged assumptions on which communication education and guidance in cancer and across health care are based.32-34
Allergan has been particularly aggressive in trying to skirt the IPR system. In September, it took the unprecedented step of transferring patents protecting its prescription eyedrop, Restasis, to the Saint Regis Mohawks. The tribe — which received a $13.5m fee and up to $15m in annual royalties — then claimed it had sovereign immunity from intellectual property challenges launched through IPR.
As I am fond of repeating, Martin Wolf of the FT argued that Pharma may soon be shown the same contempt that many now feel towards the banks.
“Whenever a company decides it’s ok to screw its suppliers, its customers, or its employees, it is only a matter of time until it gets around to screwing all 3 groups. That is because the idea that screwing people is ok becomes the corporate mindset.”
Comment by Howard on I, Cringely
Marriage, however, proved to be a towering practical problem — Princeton, where Feynman was now pursuing a Ph.D., threatened to withdraw the fellowships funding his graduate studies if he were to wed, for the university considered the emotional and pragmatic responsibilities of marriage a grave threat to academic discipline. [Here]
The above is about Richard Feynman, but reminds me of a story closer to home, told to me by a consultant dermatologist (who I will call CS) and former academic. CS, then a senior registrar, on entering the departmental library, was pleased to see the elderly professor reaching for books on the top shelf. CS, with evident pride, told the professor that he had good news: he was engaged to be married. The professor replied: ‘Sorry to hear that CS, I had high hopes for you’.
Geoff Norman has — as usual — a thoughtful editorial here. My clickbait version of it is:
As anyone who has engaged in the culture wars between qualitative and quantitative researchers will attest, the debate between the two groups are unlikely to resolve anytime soon. … To put it bluntly, at the risk of offending some, constructivists are going around the world making sweeping generalizations about how you can’t make sweeping generalizations.
And I am glad he gives space to the Gigerenzer critique of some of the “heuristics and biases” school that has become so popular:
While the definitions of the heuristics in Kahneman’s hands appear unequivocal, Zwaan et al. (2017) showed that purported experts are completely unable to agree on the presence or absence of specific biases, and conversely are themselves strongly influenced by hindsight knowledge of the outcome
The foundations of research in medical education are not nearly as secure as many people wish to maintain. Plenty of physics envy to go around, and jobs to match.
Childhood, which is supposed to be the province of spontaneous play, has become highly administered, with parents and schools priming their human capital investments — children — for a merciless jobs market: “Between 1981 and 1997, elementary schoolers . . . recorded a whopping 146 per cent gain in time spent studying.”
FT link here
Woodrow Wilson once remarked that it is easier to change the location of a cemetery than it is to change a curriculum.
Via Jon Talbot, commenting on an article on the ?failures of online learning. I would only add the comment made by Henry Miller, in the context of medicine: curriculum reform, a disease of Deans.
This is a term I first learned from Clark Glymour and colleagues in Android Epistemology. Dermofit was a failed attempt to invent such a prosthesis.
Thinkers and thinking societies build tools that enhance their own thinking. When the speed of the positive feedback increases rapidly, we see a scientific and cultural revolution. When grit is put into the cogs or the base metals diluted, the opposite happens.
Last week I was giving a talk about tech, medicine and medical education, and for the life of me could not remember the following example, showing how key representation is to our intellectual toolbox. Worse, I knew it had an Edinburgh connection. Wikipedia has more.
I was sent links to both these videos together.
The first is reasonable, but not grounded in reality. As AJP Taylor once said: 90% right and 100% wrong. It is what happens when all the context of a thesis has been stripped away. The second is both more grounded in reality and philosophically sound.
Speaking at the Royal College of GPs’ annual conference in Liverpool on Thursday, he said: “The old model of 10-minute appointments doesn’t really work for patients with multiple long-term conditions who may need 30, 40, 50 minutes to get to the bottom of all their needs.”
A while back, I read that a Danish primary care doc had been prosecuted because he had made key decisions about a patient based on a 10-minute consultation. In one sense I was cheered by this. But talking to Danish dermatologists last week, I am not so sanguine. It seems that yet more new ‘efficient’ IT is the weapon to degrade the consultation even further. Eventually there really will be no time to let the patient into the consultation room. And still the mantra of consultation skills for medical students will continue. All irony intended.
Physicians have always been busy people, although they have generally controlled the way they use their time. In 1993, for example, family practitioners were seeing, on average, one patient every 20 minutes; general internists were seeing one every 26 minutes 1. These visit times were not long but perhaps were not unreasonable, particularly considering that they represented a mix of new and follow-up visits and that “fast” and “slow” British general practitioners had mean visit lengths of 7 and 9 minutes, respectively 2. Recently, however, the invisible hand of the marketplace has squeezed appointment schedules in an ever-tightening grip: In late 1995, 41% of physicians in an important U.S. survey reported that the amount of time they spent with their patients had decreased during the previous 3 years 3. This erosion of encounter time has taken its toll on physicians 1, 3. Moreover, it is equally distressing to patients because patients value their physicians’ “information giving” highly 4 and, as Howard Waitzkin has sensibly pointed out, “Information giving takes time. We cannot expect it to go well if we are too busy” 5. It does not take a rocket scientist (in the current parlance) to understand why both patients and their physicians have become increasingly dissatisfied as visit lengths have grown shorter 2, 6.
Frank Davidoff, MD, Editor, Annals of Internal Medicine, 15 September 1997 | Volume 127 Issue 6 | Pages 483-485
Then they invented electronic records with the purpose of…
I always find there is something appealing about old university towns. I am in Uppsala, a city I have visited for work on many occasions. Seems so small, and yet in reality it is Sweden’s fourth largest city. I was speaking at a mini-symposium on academic publishing, and how tech fits into the world of teaching clinical medicine. But there is always some time to enjoy the sights — even as the days draw in.
This is from an article discussing the difficulties in recommending people to content they might like. The bigger picture is the dismal state of online journalism / news and polluters not paying. But Netflix’s understanding about how fine scale a taxonomy has to be, struck a chord with me. This is exactly the problem of diagnosis in some areas of medicine.
The latter is my favorite. Four years ago, I realized the size and scope of Netflix’s secret weapon, its suggestion system, when reading this seminal Alexis Madrigal piece in The Atlantic. Madrigal was first in revealing the number of genres, sub-genres, micro-genres used by Netflix’s descriptors for its film library: 76,897! This entails the incredible task of manually tagging every movie and generating a vast set of metadata ranging from “forbidden-love dramas” to heroes with a prominent mustache.
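To make the idea concrete, here is a toy sketch in Python of tag-based micro-genre retrieval. The titles and tags are invented for illustration; Netflix’s real descriptors and matching machinery are proprietary and vastly richer:

```python
# Toy sketch: every title carries a set of hand-assigned descriptor tags,
# and a "micro-genre" is simply a conjunction of tags. All data invented.
library = {
    "Title 1": {"drama", "forbidden-love", "period"},
    "Title 2": {"comedy", "heist", "prominent-mustache-hero"},
    "Title 3": {"drama", "forbidden-love", "contemporary"},
}

def titles_matching(required_tags):
    """Return, sorted, the titles whose tags include all required tags."""
    return sorted(title for title, tags in library.items()
                  if required_tags <= tags)  # subset test: all tags present
```

The analogy with diagnosis holds: the hard, expensive work is not the lookup but the fine-grained manual labelling that precedes it.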
This reminds me of the old story about how impressed somebody was, after being shown some small computing device that could ‘think’ using powerful algorithms. The observer did, however, ask about the aircraft-hangar-sized machine that came with it: that, he was told, was necessary to implement all the code for the exceptions to this universal reasoning machine.
Lots of room too, for fake news and fake diagnoses.
I came across these images from the late Anthony Sampson’s series of books on “Who runs this place”. They were part of a fascinating presentation by Tom Loosemore on tech in government and inter alia the design of Universal Credit. A lot I didn’t know, and well worth a listen — even to somebody who used to make me rage with anger. But I write this, having just read the story over the weekend about how if you are on or below the poverty line, you have to pay 50p a minute for telephone advice**, whereas if you want to report ‘cheaters’ (other than bankers) the phone line is free. Even inspirational thinking and coding cannot escape this sort of evil. In the context of politics, Nye Bevan knew what to call such people.
Check out the universities, academia and scientists in Sampson’s perceptual maps of power and influence over 40 years in the UK. The designs reflect the dates (top to bottom: 1962, 1980, 2004).
** telephone advice: an interesting example of how a technology allows you to charge for what once was free and a right of any citizen.
Is it medical education or medical training? This is almost an age-old question, one that I am not going to resolve here. But every generation has to ask it anew. Not least because the sands of time keep moving.
In undergraduate medicine, in 2017, I fear we have got this wrong in a big way. Just when the future looks ever more uncertain, when we have to consider how much of the traditional idea of medical careers — and even how we conceptualise doctors — is up for grabs, we are ever more focussed on short-term goals: not medical education, but short-term training (‘produce FY1 doctors’). But of course, the purpose of medical education is not to produce FY1 doctors — that is like saying that passing tests is the purpose of education. The purpose of medical education is to equip students to work (usually) in medicine for a lifetime. Graduates must be able to start learning safely in a clinical environment, but the purpose is not to be FY1s or core medical trainees.
But the other reason that this problem needs revisiting is that medical education was framed in a time when few people went to university, and when spending five years at university seemed unusual. No matter that much of it was ‘training’ rather than education: by comparison with the ‘average’, there was some education in there. But what I fear now is that many medical students are being left behind, increasingly ‘trained’ for one employer and one niche, at the cost of their education. A niche that is threatened by ecological change. And to echo a theme of the day, young people are being made to pay (via debt) for what many other corporations rightly accept as their ‘training’ responsibility.
Now, I do not see the solution in making medicine a postgraduate degree (for most), but I think we can start meaningfully thinking about what I would call ‘medicine plus’ degrees. Doing this, means we have to start unpicking ‘training’ and ‘education’ in ways that do not increase costs, and with an eye on the student’s future, not that of the NHS.
MIT’s Woody Flowers has some interesting things to say in a completely different context (that of the failure of the MOOC movement), but which I think we can meld to our purpose.
The missed opportunity, I argued, involved recognition that education and training are different and that training could be dramatically improved through use of well structured, high quality modules that would help students train themself so person-to-person time could be used for education. Essentially the strategy would outsource training and nonjudgmental grading to digital systems, and thereby free instructors to serve as mentors.
Title: borrowed from “Subterranean Homesick Blues”. There are lots of lines that students of the fees era would do well to reflect on, including: “Don’t follow leaders, Watch the parkin’ meters”.
This was a comment on the political question of our time by Janan Ganesh in the FT, but to me it has a wider relevance, including how we think about higher education. Of course, people will keep perseverating, believing the contrary.
There is no human resources solution to an ideological problem.
Writing was invented to support taxation. Elites’ insatiable appetite for fully domesticated workers boosted forced labour and slavery. It almost comes as a relief to be reminded that the oppressive character of the state was leavened by its own brittleness: rather than precipitating a calamitous slide into chaos, periodic collapse would simply have disassembled larger states into their constituent communities. Plenty of fetters were loosened in the process.
This is not too far from the antifragility of Taleb.
Review of ‘Against the Grain’, by James C Scott in the FT.
This was a quote from an article by an ex-lawyer who got into tech and writing about tech. Now some of my best friends are lawyers, but this chimed with something I came across by Benedict Evans on ‘why you must pay sales people commissions’. The article is here (the video no longer plays for me).
The opening quote poses a question:
I felt a little odd writing that title [ why you must pay sales people commissions]. It’s a little like asking “Why should you give engineers big monitors?” If you have to ask the question, then you probably won’t understand the answer. The short answer is: don’t, if you don’t want good engineers to work for you; and if they still do, they’ll be less productive. The same is true for sales people and commissions.
The argument is as follows:
Imagine that you are a great sales person who knows you can sell $10M worth of product in a year. Company A pays commissions and, if you do what you know you can do, you will earn $1M/year. Company B refuses to pay commissions for “cultural reasons” and offers $200K/year. Which job would you take? Now imagine that you are a horrible sales person who would be lucky to sell anything and will get fired in a performance-based commission culture, but may survive in a low-pressure, non-commission culture. Which job would you take?
But the key message for me is:
Speaking of culture, why should the sales culture be different from the engineering culture? To understand that, ask yourself the following: Do your engineers like programming? Might they even do a little programming on the side sometimes for fun? Great. I guarantee your sales people never sell enterprise software for fun. [emphasis mine].
Now why does all this matter? Well personally, it still matters a bit, but it matters less and less. I am towards the end of my career, and for the most part I have loved what I have done. Sure, the NHS is increasingly a nightmare place to work, but it has been in decline most of my life: I would not recommend it unreservedly to anybody. But I have loved my work in a university. Research was so much fun for so long, and the ability to think about how we teach and how we should teach still gives me enormous pleasure: it is, to use the cliche, still what I think about in the shower. The very idea of work-life balance was — when I was young and middle-aged at least — anathema. I viewed my job as a creative one, and building things and making things brought great pleasure. This did not mean that you had to work all the hours God made, although I often did. But it did mean that work brought so much pleasure that the boundary between my inner life and what I got paid to do was more apparent to others than to me. And in large part that is still true.
Now in one sense, this whole question matters less and less to me personally. In the clinical area, many if not most clinicians I know now feel that they resemble those on commission more than the engineers. Only they don’t get commission. Most of my med school year who became GPs will have bailed out. And I do not envy the working lives of those who follow me in many other medical specialties in hospital. Similarly, universities were once full of academics who you almost didn’t need to pay, such was their love for the job. But modern universities have become more closed and centrally managed, and less tolerant of independence of mind.
In one sense, this might go with the turf — I was 60 last week. Some introspection, perhaps. But I think there really is more going on. I think we will see more and more people bailing out as early as possible (no personal plans, here), and we will need to think and plan for the fact that many of our students will bail out of the front line of medical practice earlier than we are used to. I think you see the early stirrings of this all over: people want to work less than full-time; people limit their NHS work vis a vis private work; some seek administrative roles in order to minimise their face-to-face practice; and even young medics soon after graduation are looking for portfolio careers. And we need to think about how to educate our graduates for this: our obligations are to our students first and foremost.
I do not think any of these responses are necessarily bad. But working primarily in higher education has one advantage: there are lots of different institutions, and whilst in the UK there is a large degree of groupthink, there is still some diversity of approach. And if you are smart and you fall outwith the clinical guilds / extortion rackets, there is no reason to stay in the UK. Medics, particularly recent graduates, need to think more strategically. The central dilemma is that, depending on your specialty, your only choice might appear to be to work for a monopolist, one which seeks to control not so much the patients cradle-to-grave, but those staff who fall under its spell, cradle-to-grave. But there are those making other choices — just not enough, so far.
An aside. Of course, even those who have achieved the most in research do not always want to work for nothing, post retirement. I heard the following account first hand from one of Fred Sanger’s previous post-docs. The onetime post-doc was now a senior professor, charged with opening and celebrating a new research institution. Sanger — a double Laureate — would be a great catch as a speaker. All seemed well until the man who personally created much of modern biology realised the date chosen was a couple of days after he was due to retire from the LMB. He could not oblige: the [garden] roses need me more!
There are now more demands and requirements placed on higher education institutions than ever before. It’s an unlikely truism, but Conservative governments generally tend to seek to centralise and control universities – in Michael Barber’s language of how policy is made: It’s the difference between “Trust and Altruism” and “Choice and Competition” drifting into “Command and Control.”
Wonkhe newsletter, 16 October 2017
Phil McNaull, director of finance at the University of Edinburgh and chair of the British Universities Finance Directors Group, says that “it has been clear for some time” that direct income for research “does not cover the full economic cost of conducting it, and the net deficit is subsidised by other sources”, such as surpluses from teaching.
Quoted in THE, (emphasis mine). Factually, this is true. It is a mistake to believe that the price of things, equates to how much they cost to produce. Look at the differential pricing of home and non-EU students, for instance. Or the gap between the component parts of an iPhone and the retail price. Or why most successful drugs only cost a fraction of what pharma claims is the cost of development. But the possibilities for some sort of arbitrage are there. And in an area in which agents make up their own standards (i.e. higher education), I think a lot more scrutiny is required.
Patents or graduates? I guess the latter are worth more.
“Education is an admirable thing,” wrote Oscar Wilde, “but it is well to remember from time to time that nothing that is worth knowing can be taught.”
You don’t learn to draw by knowing how pencils are made
These sorts of aphorisms always make me want to think harder about what exactly is foundational in medical education. The suspicion is that it is far less than we think. Schooling is full of wasted time spent learning things that are of little use, but easy to test, meaning there is little time for students to learn things that are useful.
The case for anatomy in surgery is robust and self-evident. If you remove tumours in the preauricular area or on the temple, you have to know what structures to avoid or which ones may have become compromised. If you ask any competent surgeon, they will of course know what these structures are. But when you move into many areas of clinical medicine, I am always amazed by how much competent physicians have forgotten about all these things that were labelled ‘foundational’ and formed the basis of high-stakes exams. It is of course possible to be guided by schemata that structure your behaviour, yet be unable to recall them — schemata that experts implicitly know and novices don’t. But I suspect we need some sort of minimalism project, to work out how far we can go.