This was a quote from an article by an ex-lawyer who got into tech and writing about tech. Now some of my best friends are lawyers, but this chimed with something I came across by Benedict Evans on ‘why you must pay sales people commissions’. The article is here (the video no longer plays for me).
The opening quote poses a question:
I felt a little odd writing that title [ why you must pay sales people commissions]. It’s a little like asking “Why should you give engineers big monitors?” If you have to ask the question, then you probably won’t understand the answer. The short answer is: don’t, if you don’t want good engineers to work for you; and if they still do, they’ll be less productive. The same is true for sales people and commissions.
The argument is as follows:
Imagine that you are a great sales person who knows you can sell $10M worth of product in a year. Company A pays commissions and, if you do what you know you can do, you will earn $1M/year. Company B refuses to pay commissions for “cultural reasons” and offers $200K/year. Which job would you take? Now imagine that you are a horrible sales person who would be lucky to sell anything and will get fired in a performance-based commission culture, but may survive in a low-pressure, non-commission culture. Which job would you take?
But the key message for me is:
Speaking of culture, why should the sales culture be different from the engineering culture? To understand that, ask yourself the following: Do your engineers like programming? Might they even do a little programming on the side sometimes for fun? Great. I guarantee your sales people never sell enterprise software for fun. [emphasis mine].
Now why does all this matter? Well personally, it still matters a bit, but it matters less and less. I am towards the end of my career, and for the most part I have loved what I have done. Sure, the NHS is increasingly a nightmare place to work, but it has been in decline most of my life: I would not recommend it unreservedly to anybody. But I have loved my work in a university. Research was so much fun for so long, and the ability to think about how we teach and how we should teach still gives me enormous pleasure: it is, to use the cliche, still what I think about in the shower. The very idea of work-life balance was — when I was young and middle-aged at least — anathema. I viewed my job as a creative one, and building things and making things brought great pleasure. This did not mean that you had to work all the hours God made, although I often did. But it did mean that work brought so much pleasure that the boundary between my inner life and what I got paid to do was more apparent to others than to me. And in large part that is still true.
Now in one sense, this whole question matters less and less to me personally. In the clinical area, many if not most clinicians I know now feel that they resemble those on commission more than the engineers. Only they don’t get commission. Most of my med school year who became GPs will have bailed out. And I do not envy the working lives of those who follow me in many other medical specialties in hospital. Similarly, universities were once full of academics who you almost didn’t need to pay, such was their love for the job. But modern universities have become more closed and centrally managed, and less tolerant of independence of mind.
In one sense, this might go with the turf — I was 60 last week. Some introspection, perhaps. But I think there really is more going on. I think we will see more and more people bailing out as early as possible (no personal plans, here), and we will need to think and plan for the fact that many of our students will bail out of the front line of medical practice earlier than we are used to. I think you see the early stirrings of this all over: people want to work less than full-time; people limit their NHS work vis-à-vis private work; some seek administrative roles in order to minimise their face-to-face practice; and even young medics soon after graduation are looking for portfolio careers. And we need to think about how to educate our graduates for this: our obligations are to our students first and foremost.
I do not think any of these responses are necessarily bad. But working primarily in higher education has one advantage: there are lots of different institutions, and whilst in the UK there is a large degree of groupthink, there is still some diversity of approach. And if you are smart and you fall outwith the clinical guilds / extortion rackets, there is no reason to stay in the UK. Medics, especially recent graduates, need to think more strategically. The central dilemma is that depending on your specialty, your only choice might appear to be to work for a monopolist, one which seeks to control not so much the patients cradle-to-grave, but those staff who fall under its spell, cradle-to-grave. But there are those making other choices — just not enough, so far.
An aside. Of course, even those who have achieved the most in research do not always want to work for nothing, post retirement. I heard the following account first hand from one of Fred Sanger’s former post-docs. The onetime post-doc was now a senior Professor, charged with opening and celebrating a new research institution. Sanger — a double Laureate — would be a great catch as a speaker. All seemed well until the man who personally created much of modern biology realised the date chosen was a couple of days after he was due to retire from the LMB. He could not oblige: the [garden] roses need me more!
To some extent, many UK governments have trialled their plans for much of medicine on dentists and dentistry. To some extent, fear of what the ballot box may say has limited their wishes. But here is a well written piece about dentists and dentistry that should be read by many medics, both as a warning and a guide to action. Dermatology and dentistry share many genes.
There is perhaps too much machine learning/AI hype in this article — or at least too much for my taste — but it is well worth a read. The problem the authors are grappling with is perhaps the central intellectual problem modern medicine faces: how formal can our methods become? And how can we use a variety of cognitive prostheses to guide clinical behaviour?
Asking doctors to work harder or get smarter won’t help. Calls to reduce “unnecessary” care fall flat: we all know how difficult it’s become to identify what care is necessary. Changing incentives is an appealing lever for policymakers, but that alone will not make decisions any easier: we can reward physicians for delivering less care, but the end result may simply be less care, not better care.
Informatics — using the term in the broadest possible sense — and the management of expertise, is the central challenge facing medicine and medical care. It is where the action should be. The authors are right about this, but their caution is also true: “The state of our health care system offers little reason for optimism.” Amen. And spare me ‘realistic medicine’.
No, I could not run, lead or guide a pharma company. Tough business. But I just hope their executives do not really believe most of what they say. So, in an FT article about the new CEO of Novartis, we learn that Vas Narasimhan has vowed to slash drug development costs; there will be a productivity revolution; 10-25 per cent could be cut from the cost of trials if digital technology were used to carry them out more efficiently.
And of course:
(he) plans to partner with, or acquire, artificial intelligence and data analytics companies, to supplement Novartis’s strong but “scattered” data science capability.
I really think of our future as a medicines and data science company, centred on innovation and access (read that again, parenthesis mine)
And to add insult to injury:
Dr Narasimhan cites one inspiration as a visit to Disney World with his young children where he saw how efficiently people were moved around the park, constantly monitored by “an army of Massachusetts Institute of Technology-trained data scientists”.
And not that I am a lover of Excel…
No longer rely on Excel spreadsheets and PowerPoint slides, but instead “bring up a screen that has a predictive algorithm that in real time is recalculating what is the likelihood our trials enrol, what is the quality of our clinical trials”
Just recall that in 2000 it would have been genes / genetics / genomics rather than data / analytics / AI / ML etc.
So, looks to me like lots of cost cutting and optimisation. No place for a James Black, then.
For some companies, that can mean specifically focusing on young people, as Ahmet Bozer, president of Coca-Cola International, described to investors in 2014. “Half the world’s population has not had a Coke in the last 30 days,” he said. “There’s 600 million teenagers who have not had a Coke in the last week. So the opportunity for that is huge.”
“Physical exam skills are eroding fairly significantly. We see that year after year. The masters who taught us are gone, and we’re not teaching the people below us well enough, for all the reasons we talked about.
At the same time, we grossly overestimated the average clinician’s ability to do an extremely good physical exam and to make all of the relevant physical findings. It has been documented over and over again that the average person’s ability to use a stethoscope and document a murmur accurately is a coin flip. The ability of the average house officer to do volume assessment based on a physical exam is terribly low.”
“Just fun or a prejudice? – physician stereotypes in common jokes and their attribution to medical specialties by undergraduate medical students”
Paper here. Seriously.
Many many years ago I wrote a few papers about — amongst other things — the statistical naivety of the EBM gang. I enjoyed writing them but I doubt they changed things very much. EBM, as Bruce Charlton pointed out many years ago, has many of the characteristics of a cult (or was it a zombie? — you cannot kill it because it is already dead). Anyway, one of the reasons I disliked a lot of EBM advocates was that I think they do not understand what RCTs are, and of course they are often indifferent to science. Now, in one sense, these two topics are not linked. Science is meant to be about producing broad ranging theories that both predict how the world works and explain what goes on. Sure, there may be lots of detail on the way, but that is why our understanding of DNA and genetics today is so different from that of 30 years ago.
By contrast, RCTs are usually a form of A/B testing. Vital, in many instances, but an activity that is often a terminal side road rather than a crossroads on the path to understanding how the world works. That is not to say they are not important, nor worthy of serious intellectual endeavour. But the latter activity is for those who are capable of thinking hard about statistics and design. Instead, the current academic space makes running or enrolling people in RCTs into some sort of intellectual activity: it isn’t; rather, it is a part of professional practice, just as seeing patients is. Companies used to do it all themselves many decades ago, and they didn’t expect to get financial rewards from the RAE/REF for this sort of thing. There are optimal ways to stack shelves that maths geeks get excited about, but those who do the stacking do not share in the kudos of discovery.
Anyway, this is by way of highlighting a post I came to by Frank Harrell, with title:
Randomized Clinical Trials Do Not Mimic Clinical Practice, Thank Goodness
Harrell is the author of one of those classic books… But I think the post speaks to something basic. RCTs are not facsimiles of clinical practice, but some sort of bioassay to guide what might go on in the clinic. Metaphors if you will, but acts of persuasion, not brittle mandates. This all leaves aside worthy debates on the corruption that has overtaken many areas of clinical measurement, but others can speak better to that than me.
 I really couldn’t resist.
So, it is reported that the UK has the best health system in the developed world (according to the Commonwealth Fund). But when you read a little more, the UK comes third (of 11) on access, and 10th (of 11) on healthcare outcomes (mortality and morbidity). The UK also has the second highest rate of deaths amenable to healthcare after the US (Switzerland is lowest). The latter is no surprise if you know anything about Swiss healthcare or hospitals across Europe. Their doctors don’t look so scruffy as the UK ones, either. I guess you can make your composite measures in many ways, but I know what I would choose. (22 July 2017, BMJ 2017;358:j3442)
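The point about composite measures is easy to make concrete. A minimal sketch (the scores and weights below are invented for illustration, not taken from the Commonwealth Fund report): the same pair of domain scores can produce opposite rankings depending on how the composite is weighted.

```python
def composite(scores, weights):
    """Weighted average of per-domain scores (higher = better)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Invented scores on (access, outcomes), scaled 0-10.
system_a = (9, 3)  # strong on access, weak on outcomes
system_b = (6, 9)  # weaker on access, strong on outcomes

access_heavy = (3, 1)    # weight access 3x outcomes
outcomes_heavy = (1, 3)  # weight outcomes 3x access

# With access-heavy weights, system A comes out ahead...
print(composite(system_a, access_heavy), composite(system_b, access_heavy))
# ...with outcomes-heavy weights, system B does.
print(composite(system_a, outcomes_heavy), composite(system_b, outcomes_heavy))
```

So a first-place finish on the composite says as much about the weights chosen as about the health systems being compared — which is why the headline ranking and the outcomes ranking can sit ten places apart.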
My mother in law has two ‘Italian’ cats. One has worms, the other apparently not. Medication time. The diseased cat refuses the medication, whatever enticements are on offer. The undiseased (or cat at ease) scoffs not only his own medication, but that of his partner.
There are plenty of cats in South Wales, and I doubt that coal dust kills off the worms. Julian (Tudor-Hart), where did the idea come from? [ Yes, before you write in, not quite the inverse care as JTH meant it, but metaphors are the viruses of novelty, so who knows.]
I often have great difficulty understanding the way the legal system works. The way ‘learned’ is usually appended to ‘judge’, and the track record of corruption when legal officers do the dirty work of governments, suggests to me that a great deal of scepticism is warranted. That is before we get to the inability of judges to understand what sort of statements we can make about the empirical world. Then there is the problem of the interpretation of statistics. Physicists try to measure the speed of light, and they can only report the answer when they have worked out a way to do it. By contrast, law (like much economics) seems to just make it up — whether it is possible to answer a question or not — instead of remaining silent (“Wovon man nicht sprechen kann, darüber muss man schweigen”: whereof one cannot speak, thereof one must be silent).
Anyway the above little rant was provoked by what I thought were some bizarre statements in the BMJ in an article from the ethicist and barrister, Daniel Sokol (BMJ 2017;357:j2949). I am going to take some of the argument out of context, so beware.
This issue was about an adverse medical event, and to what extent the failure of a junior doctor to ask a question led to the event and whether the junior doctor should have asked the question. Sokol summarises an argument underpinning the judgment as follows:
“In short, the law expects history taking to be the same, whether it is by an inexperienced junior doctor or a senior consultant. Lord Justice Jackson said that history taking was a basic skill that hospital doctors at all levels should possess.”
As I said, I am not debating the particular case, but the generality. So, let me alter some words in the above:
“In short, the law expects surgical skills to be the same, whether it is by an inexperienced junior doctor or a senior consultant.”
Or, how about:
“In short, the law expects diagnostic skills to be the same, whether it is by an inexperienced junior doctor or a senior consultant.”
Now, my two statements are patently absurd. If they are not, then we can get rid of all consultants and save all this money. Indeed, I notice when you consult members of the legal profession they show you a scale of fees that reflects different levels of expertise — and you have to choose.
The mistake is to view history taking as a checklist of questions. It is not. Or at least in many situations it is not. Some doctors are much better at it than others and just like you can quickly objectively score surgical ability based on videos of surgeons operating, to me it is obvious that the ability to elicit a history is highly variable. But most doctors get better at it over time.
A long time ago, there were three consultant neurologists in the unit I worked at in Newcastle. They all — reasonably I think — were viewed as excellent clinicians (and BTW they were terrific teachers of medical students and junior doctors). Two of them remarked that the third was not so good at eliciting physical signs but was better at taking a history than either of them.
I believed this at the time and still do. But more importantly, the statement underpinning the judgment seems to me to be based on a failure to understand what a history is. It is an active process, and reality does not reveal itself without an interaction between doctor and patient, one that is dependent on the prior experience of both. It is not a checklist, and there is not a finite list of questions to regurgitate. Just as in science, it is experiment rather than observation that reveals the true nature of reality. And some are better at it than others — as we should expect. So the idea that doctors of widely different experience will be able to ‘interrogate’ the same way or with the same skill is mistaken.
A few years back at an ADA meeting in Napa I got to listen to a dermatologist who understood how to influence government. I had never heard anybody speak live who was so effective, so effortless in his command of his brief, and with such charm. Maybe JFK might have had this effect, too. Then there was Obama.
I do not have these skills, but more worryingly I do not think UK medicine does them that well either. I do not mean the ‘honours’ business, but meaningful attempts to balance the power and corruption of the state. We don’t seem to do activism well, either.
Some advice from last week’s NEJM, worth a read: ‘Effective Legislative Advocacy — Lessons from Successful Medical Trainee Campaigns’. Which, if nothing else, forced me to chase up the quote I (and others) have been misquoting for years from Rudolf Virchow.
Speaking on BBC Radio 4’s Midweek programme on 22 February 2006, Jonathon Kaplan quoted Virchow as saying that, “pathology is politics writ large”. He seems to have been misquoting the usual part‐quotation that, “Medicine is a social science and politics is nothing but medicine writ large”. In fact, what Virchow really said was that, “Medicine is a social science and politics is nothing else but medicine on a large scale. Medicine as a social science, as the science of human beings, has the obligation to point out problems and to attempt their theoretical solution; the politician, the practical anthropologist, must find the means for their actual solution”
Link here to J Epidemiol Community Health. 2006 Aug; 60(8): 671.
Interesting piece in today’s NEJM on data sharing and clinical trials and how meaningfully patients are involved.
Patients also said they wanted trial results to be shared with participants themselves, along with an explanation of what the results mean for them — something that generally doesn’t happen now. In addition, most participants said they had entered a trial not to advance knowledge, but to obtain what their doctor thought was the best treatment option for them. All of which raises some questions about how well patients understand consent forms.
Which reminded me of a powerful paper by the late David Horrobin in the Lancet in which, from the position of being a patient with a terminal illness, he challenged what many say:
The idea that altruism is an important consideration for most patients with cancer is a figment of the ethicist’s and statistician’s imagination. Of course, many people, when confronted with the certainty of their own death, will say that even if they cannot live they would like to contribute to understanding so that others will have a chance of survival. But the idea that this is a really important issue for most patients is nonsense. What they want is to survive, and to do that they want the best treatment.
I have always been suspicious of the ‘equipoise’ argument and terrified when I see participation rates in clinical trials used as an NHS performance measure. It is bad enough that doctors might end up acting as agents of the state. But this is worse than shilling for pharma.
The NEJM piece also draws attention to people’s reluctance to share with commercial entities. What this tells you is that many people view some corporations — pharma, in this instance — as pirates. Or worse. This topic is not going away. Nor is the need for (commercial) pharma to finance and develop new drugs.
When I covered Kasparov-Deep Blue match, I thought the drama came from a battle between computer and human. But it was really a story of people, with brutal capitalist impulse, teaming up with AI to destroy the confidence and dignity of the greatest champion the world had seen. That leads me to believe it’s not Skynet that should worry us about AI, but rather the homo sapiens who build, implement, and employ those systems.
Well, this is all about one of the great issues of our age. As I wrote in 2008:
Change in medicine is increasingly driven by the twin forces of specialization, and the desire to codify medical practice, i.e. to produce rules that can be followed by those from a range of educational and professional backgrounds. The battle is over the intellectual heartlands of clinical practice and how the knowledge that underpins clinical practice is acquired, distributed and validated.
The mistake is to believe that this process is not mired in political and financial assumptions about what good care means — and who can make money out of it.
During the 7-year period between the introduction of tacrolimus in preclinical studies in 1987 and the FDA approval of tacrolimus in 1994, the transplant program at the University of Pittsburgh produced one peer-reviewed article every 2.7 days, while transplanting an organ every 14.2 hours.
Always thought these surgeons needed to spend more time in theatre.
Science. Obituary of Thomas Starzl.
The use of Benzedrine by American athletes in the 1936 Berlin Olympics prompted the Temmler company on the edge of Berlin to focus on creating a more powerful version. By the autumn of 1937, its chief chemist, Dr. Fritz Hauschild (in postwar years the drug provider for East German athletes), created a synthesized version of methamphetamine. This was patented as Pervitin. It produced intense sensations of energy and self-confidence.
In pill form Pervitin was marketed as a general stimulant, equally useful for factory workers and housewives. It promised to overcome narcolepsy, depression, low energy levels, frigidity in women, and weak circulation. The assurance that it would increase performance attracted the Nazi Party’s approval, and amphetamine use was quietly omitted from any anti-drug propaganda. By 1938, large parts of the population were using Pervitin on an almost regular basis, including students preparing for exams, nurses on night duty, businessmen under pressure, and mothers dealing with the pressures of Kinder, Küche, Kirche (children, kitchen, church—to which the Nazis thought women should be relegated). Ohler quotes from letters written by the future Nobel laureate Heinrich Böll, then serving in the German army, begging his parents to send him more Pervitin. Its consumption came to be seen as entirely normal.
Lots I didn’t know, but any reader of David Healy will not be surprised. A dermatologist doesn’t come out of it too well, either.
Antony Beevor in the NYRB
Medicare, America’s public health scheme for the over-65s, has recently started paying doctors for in-depth conversations with terminally ill patients; other national health-care systems, and insurers, should follow.
The quote is from a reasonable article in the Economist (How to have a better death). But what screams at me is that the very incentive systems the Economist espouses are those that have led to the status quo. We already have behavioural code(s) that are misaligned, and now we add more and more buggy patches, layer upon layer. All because nobody talks to those on either side of the front line.
Nice letter in Academic Medicine. Not convinced by the exact details, but the author is on to something important. The first victim of insincerity is language (Orwell, if I remember correctly).
Medical professionalism is espoused as a necessity in health care, setting an important precedent of excellence and respect towards peers and patients. In many medical schools, a portion of the curriculum is dedicated to the intricacies of medical professionalism. Though typically taught through specific tenets and case studies, professionalism is still a general principle, resulting in varied definitions across institutions. This is, in fact, part of the beauty of professionalism—the lack of definition makes it a flexible concept, applicable in a wide variety of situations. However, the downside to this vagary is that it allows for the weaponization of professionalism, leaving space for “professionals” to reject certain approaches to health care.
I always recommend people to read David Healy’s Psychopharmacology 1, 2, and 3, together with Jack Scannell’s articles (here and here) to get a feel for exactly what drug discovery means. What is beyond doubt is that we are not as efficient at it as we once were. There is lots of blame to go around. The following gives a flavour of some of the issues ( or at least one take on the core issues).
From a review in ‘Health Affairs’ of A Prescription For Change: The Looming Crisis In Drug Development by Michael S. Kinch Chapel Hill (NC): University of North Carolina Press, 2016, by Christopher-Paul Milne.
He chronicles these industries’ long, strange trip from being the darling of the investor world and beneficiary of munificent government funding to standing on the brink of extinction, and he details the “slow-motion dismantlement” of their R&D capacity with cold, hard numbers because “the data will lead us to the truth.” There are many smaller truths, too: Overall, National Institutes of Health (NIH) funding has fallen by 25 percent in relative terms since a funding surge ended in 2003; venture capital is no longer willing to invest in product cycles that are eleven or twelve years long; and biotech companies may have to pay licensing fees on as many as forty patents for a decade before they even get to the point of animal testing and human trials….
In an effort to survive in such a costly and competitive environment, pharmaceutical companies have shed their high-maintenance R&D infrastructure, maintaining their pipelines instead by acquiring smaller (mostly biotech) companies, focusing on the less expensive development of me-too drugs, and buying the rights to promising products in late-stage development. As a consequence, biotech companies are disappearing (down from a peak of 140 in 2000 to about 60 in 2017), and the survivors must expend an increasing proportion of their resources on animal and human testing instead of the more innovative (and less costly) identification of promising leads and platform technologies. Similarly, some of academia’s R&D capacity, overbuilt in response to the NIH funding surge, now lies fallow, while seasoned experts and their promising protégés have moved on to other fields.
With many powerful academicians, lobbyists, professional societies, funding agencies, and perhaps even regulators shifting away from trials to observational data, even for licensing purposes, clinical medicine may be marching headlong to a massive suicide of its scientific evidence basis. We may experience a return to the 18th century, before the first controlled trial on scurvy. Yet, there is also a major difference compared with the 18th century: now we have more observational data, which means mostly that we can have many more misleading results.
I think the situation is even worse. Indeed, we can only grasp the nature of reality with action, not with contemplation (pace Ioannidis). But experiments (sic) as in RCT are also part of the problem: we only understand the world by testing of ideas that appear to bring coherency to the natural world. A/B testing is inadequate for this task — although it may well be all we have left.
G4S, the outsourcing company, has sold its US juvenile detention centres business for $57m. It said it had sold the business to BHSB, a US “behavioural health care services company” that provides services to troubled young people. FT.
We know where this ends up.
Comment on an FT article. How things have changed. Even I can remember a colleague — a few years my senior — who went for a Wellcome Training Fellowship, only to be interviewed by one person, with the opening question being, ‘Imagine I am an intelligent layperson: tell me what you want to do!’
I was a war baby, a small farmer’s son and in 1960, at 17, I had a chat with my most trusted teacher about what I should do to apply to become a doctor for which I had just acquired a good group of Scottish highers. He advised me that because I should have applied a number of months before, to write a letter to the University enclosing my qualifications. I was asked to come and have a chat with the Bursar and the only thing I remember him saying was that my qualifications were good but did I realise that I might be preventing somebody else from getting in. I am ashamed to say that I replied that I was not really too troubled about that. I was accepted, and was fine.
When you want to find your way around a city, you might memorise key streets or more likely use a simplified map as a guide as you travel. But when you know a city, you navigate by being able to recall how you get from A to B. In fact you may have difficulty drawing a map — certainly to scale — but your memory is made up of lots of instances of what lies around a particular corner. Much of what you learn about diseases is the map in this analogy. By contrast, what the experienced clinician knows are lots of instances of what lies round particular corners. Those instances have a name: they are called patients.
“We have a policy to help each ward—not just the acute admissions wards, but each ward in the hospital—decide who is the ‘least bad’ patient to approach to ask to sleep on a bed in the corridor. We have a plan for which nurse takes responsibility for taking observations—they are recorded in ‘the corridor folder.’”
Other realistic medicine comments on the same BMJ page here collated from a RCP report.
For a flavour:
Now if I had to pick, I would overcome my suckerproneness and take the butcher any minute.
Incerto here. Deadly serious.
“When an older person dies, it’s harder to know that they died because of poor care,” but he added that the “families know”.
This is an article about the entirely predictable crisis in care for vulnerable people. We used to do it better, but decided cheating was more profitable. “Most local authorities buy care piecemeal via an auction system where contractors bid to provide a care package for each elderly or disabled person.” You cannot run a service on this basis; you might or might not make money. I remember being told as a young medical student: the local authority homes are the best; they have professional standards, and their staff have pensions, unions, etc. Avoid private providers.
The lyrics are about a cognate issue but on this topic I am always reminded of Peter Gabriel’s words and sense of disgust.
More Americans now die from drug overdoses than from car crashes or gun violence, and more than 26m are being treated for addiction, according to CDC figures.
This is the title of a piece in Nature, discussing how much evidence is needed before you bring a drug to market, and how much information you should gather after the drug is available. The reality is possibly more nuanced than is made out here, and knowing how regulation has changed and has differed between countries is worth factoring in. But I think the key issue is that many people do not trust drug companies, and they are right not to do so. It was in the FT, of all places, that the comparison between public perceptions of pharma and ‘the bankers’ was made. Trust has a value, a value to both parties in any exchange. And really, why do you need advertisers to get involved?
Refreshing to read an article in which I can find something to disagree with in almost every sentence. And the title — in the print edition only — ‘Taking your research to the real world’ — was probably the work of the subeditor. But the tired trope of academia versus the real world is like a red rag to a bull (self-reference intended). Most of all, I find the belief that unless you are changing health care in the short term in some distant country, you are somehow deficient as an academic, conceited.
Capitalism has helped lift more people out of poverty than most public health researchers; economists’ basic work on how societies function may do the same; and I am not certain how Watson and Crick would have fared under this self-congratulatory humbug. The real danger is that we are forgetting that universities are some of the few places left to do genuinely transformative and generative work. There are plenty of alternatives for much other ‘close to market’ work: private corporations; NGOs; national health agencies; consultancies. Delivery and revolutionary science belong to different scales and cultures (mostly).
The report’s authors wanted to know what drove nurses and doctors to join agencies, presuming it was simply a case of higher pay. But the researchers found that agency workers in the NHS had similar pay. What they found interesting was that agency workers felt better off because they had escaped the internal politics, the bureaucracy and the stricture of rotas that rarely matched the demands of home. As agency workers they believed they had more control over their lives and put up with less bullying from their employer.
A report by the National Institute for Economic & Social Research (NIESR) quoted here.