Besides, “university league tables are like sausages: the more you know about how they are made, the less you want to [do with] them”.
“Research was structurally unprofitable even if you scored really well in the research excellence framework,” he claims. “It’s being financed by surpluses on taught master’s. I think that’s fine because part of the reason people came on the taught programmes was because the place was very highly ranked in research, and they thought they were going to be sitting at the feet of the best economists around. Academics had to understand the dynamic and deliver the teaching because that was what was paying for the research. Yet because of the history of underfunding [undergraduate] students [before the introduction of £9,000 tuition fees in 2012], a kind of mood gained ground in British universities that [all] students were an unprofitable activity.”
Science 21 June 2013: 1394-1399.
For most alumni, university fundraising may seem to be uncoordinated and lacking in focus—an assortment of phone calls, solicitous letters, and invitations to a class reunion. But for Steven Rum, it’s a science. And the goal is to carry out more research.
Rum is senior vice president for development and chief fundraiser for Johns Hopkins Medicine in Baltimore, Maryland. Last year, his team had a banner year, raising $318 million. Their approach places the physician scientists at Hopkins on the donor front lines. The goal is to turn the positive feelings of “grateful patients” into support for new research, faculty chairs, academic scholarships, bricks and mortar, or simply defraying the cost of running a multibillion-dollar medical center.
Rum has 65 full-time fundraisers on a staff of 165. Each one is responsible for meeting weekly with physicians—their “caseloads” range from a dozen to more than 30 docs—to discuss which of their patients might be potential donors. The conversation is designed to help them identify what Rum calls a donor’s “qualifying interest” and connect it to their “capacity,” that is, the ability to make a donation.
More often than not, Rum’s team finds that sweet spot…
“Ideally, I’d like to have one gift officer manage no more than six doctors,” he says.
Article in Nature. I largely agree, although my views are as much based on the hype-upon-hype that characterises so much of medical research, especially cancer. I do not have a reference, but whatever one’s views about the late David Horrobin, his Lancet article about cancer trials — written when he was dying from lymphoma — is worth a read. What a mess!
Key quotes from this article:
In 2017, my colleagues and I completed a study of all 48 cancer drugs approved by the European Medicines Agency between 2009 and 2013 (C. Davis et al. Br. Med. J. 359, j4530; 2017). Of the 68 clinical indications for these drugs (reasons to use a particular drug on a patient), only 24 (35%) demonstrated evidence of a survival benefit at the time of approval. Even fewer provided evidence of an improved quality of life for symptoms such as pain, tiredness and loss of appetite (7 trials; 10%). Most indications (36 of 68) still lacked such evidence three or more years after approval. Other groups in other regions have observed similar trends. For example, a 2015 study demonstrated that only a small proportion of cancer drugs approved by the FDA improved survival or quality of life (C. Kim and V. Prasad JAMA Intern. Med. 175, 1992–1994; 2015).
But the key point he makes is:
I believe that the low bar also undermines innovation and wastes money.
When assessments — whether in medicine or education — are flawed, the loss in value is not in short-term financial costs, but in what might have happened ten years down the road.
There are periodic evaluations, but a poor result means losing only a fraction of your funding, says Schuman, who previously held one of the plum positions in U.S. science: as an investigator funded by the Howard Hughes Medical Institute on a 5-year contract. “I did not realize how the renewal clock of 5 years dissuaded me from going for risky ideas until I became a Max Planck director,” she says.
Cue David Bowie:
The following is an excerpt from a review in press with Acta. You can see the full article with DOI 10.2340/00015555-2916 here.
From the solar constant to thong bikinis and all stops in between.
A review of: “Sun Protection: A risk management approach.” Brian Diffey. IOP Publishing, Bristol, UK. ISBN 978-0-7503-1377-3 (ebook), 978-0-7503-1378-0 (print), 978-0-7503-1379-7 (mobi).
Leo Szilard was one of half a dozen or so physical scientists who, having attended the same Budapest gymnasium, revolutionised twentieth-century physics. In 1934, whilst working in London, he realised that if one neutron hit an atom which then released two further neutrons, a chain reaction might ensue. Fearing the consequences, he tried to keep the discovery secret by assigning the patent to the British Admiralty. In 1939, he authored the letter that Einstein signed, warning the then US president of the coming impact of nuclear weapons.
After the war, in revulsion at the uses to which his physics had been applied, he swapped physics for biology. There was a drawback, however. Szilard liked to think in a hot bath, and he liked to think a lot. Once his interests had turned to biology, he remarked that he could no longer enjoy a long uninterrupted bath — he was forever having to leave it to check some factual detail (before returning to think some more). Biology seemed to lack the deep simplifying foundations of the Queen of Sciences.
But all of that media can’t really replace the socializing, networking, and simple fun that happened as part of (or sometimes despite) the conference formula.
I don’t know how to fix conferences, but the first place I’d start on that whiteboard is by getting rid of all of the talks, then trying to find different ways to bring people together — and far more of them than before.
I no longer go to many conferences, and that is a good thing. But fixing them is a problem, not least because many academic conferences are businesses that collect money that supports other activities. This is not always bad, but is often not good. ‘Getting rid of the talks’ is of course attractive. Leo Szilard once suggested that you should stand up, briefly report your conclusions, then sit down. Only if the audience were sceptical of your results would you have to speak for longer. As for size, there is no single right size. However, the best conferences I have ever attended were all small, with fewer than 40 people. But I wouldn’t have got to these small ones unless I had gone to the big ones.
The surge in open-access predatory journals is making it harder for contributors and readers to distinguish these from legitimate publications — a confusion that is fostered by the predatory-journal industry. One solution could be to deploy a variant of a well-established quality-control test. The scientific community could submit replicate test articles several times a year to a wide array of open-access journals, suspect and non-suspect.
From Steven N. Goodman, who, as ever, is worth reading. Of course, in one sense, it is a question of serial monogamy, or polygamy.
After earning his medical degree in 1951 he trained in hospitals in Montreal. “To my surprise I also found I enjoyed clinical medicine,” he wrote in his Nobel prize biography. Then he quipped, “It took three years of hospital training after graduation, a year of internship and two of residency in neurology, before that interest finally wore off.”
This is from the obituary of Ben Barres [link]
An utterly committed researcher, Professor Barres would regularly work until 2am or 3am. He “slept on the floor of my small office”, recalled Professor Raff. “Every morning when I arrived and opened the door, it would whack him in the head – he eventually learned to sleep facing the opposite direction.”
Somewhere (I cannot remember where), after one of his seminars, Ben Barres’s intellectual depth was judged more favourably than that of his ‘sister’. His sister was his former ‘self’, Barbara Barres. Such a neat experimental design to tease apart causality.
I too worked somewhere where people slept overnight in the lab, although I think the deciding factor there was an inability to find or pay for a suitable flat, rather than enthusiasm.
David Hubel, on statistics: “We could hardly get excited about an effect so feeble as to require statistics for its demonstration.”
I came across this (below) in my end of year clear out. And even if this was 2016, rather than 2017, it is as good a thought as any to open 2018 with. It is from a review of “Life’s Greatest Secret: The Race to Crack the Genetic Code”, by Matthew Cobb. The review, by H. Allen Orr, is in the NYRB.
Finally, and perhaps most important, Life’s Greatest Secret highlights the power of the beautiful experiment in science. Though Cobb pays less attention to this subject than he might have, the period of scientific history that he surveys was the golden age of the beautiful experiment in biology. Biologists of the time—including Nirenberg with his UUU, Crick and Brenner with their triplet code work, and others including Matthew Meselson, Franklin Stahl, and Joshua Lederberg—were masters of the sort of experiment that, through some breathtakingly simple manipulation, allowed a decisive or nearly decisive solution to what previously seemed a hopelessly complex problem. Such experiments represent a species of intellectual art that is little appreciated outside a narrow circle of scientists…
But the larger lesson of Life’s Greatest Secret is one that may be worth remembering. When scientists require definitive answers, not merely suggestive patterns, they require experiments that are decisive and, if all goes well, beautiful.
There is something about teaching that makes you a better researcher. I know this is very countercultural wisdom, but I believed it all along. Luria, Magasanik, and Levinthal all believed it. Levinthal and Luria both had a very strong influence on me in this regard.
An (old) interview with David Botstein, in PLoS Genetics. Link
At least we are spared the ‘research led teaching’ mission statements.
“For example, I studied Physics, so I learned about how physicists think… and it is not how most people think. They have these tricks which turn difficult problems into far easier problems. The main lesson I took away from Physics is that you can often take an impossibly hard problem and simply represent it differently. By doing so, you turn something that would take forever to solve into something that is accessible to smart teenagers.”
But the opposite is now much more common. I think there are whole swathes of modern institutional and corporate life that are designed to make the simple complicated. At best, simple may sometimes be wrong, but complicated is usually useless — or much worse. I seem to remember Paul Janssen, when asked why we do not seem to be able to discover revolutionary new drugs like we once did, respond: ‘in those days the idea of obviousness still existed’.
Yann LeCun of Facebook wrote:
In the history of science and technology, the engineering artefacts have almost always preceded the theoretical understanding: the lens and the telescope preceded optics theory, the steam engine preceded thermodynamics, the airplane preceded flight aerodynamics, radio and data communication preceded information theory, the computer preceded computer science.
This is so true for (much) medicine, too. The journal comes after the discovery.
Take salary: as Mrs. Neal told us during her crash course, you’ll carry your whole life the compound price of an un-negotiated first salary.
From Frederic Filloux in the Monday Note. A great article which, whilst focussed on the topic of journalism schools, has bags of relevance to future, and therefore present-day, medical schools. The professional schools have a lot in common.
This was a quote from an article by an ex-lawyer who got into tech and writing about tech. Now some of my best friends are lawyers, but this chimed with something I came across by Benedict Evans on ‘why you must pay sales people commissions’. The article is here (the video no longer plays for me).
The opening quote poses a question:
I felt a little odd writing that title [why you must pay sales people commissions]. It’s a little like asking “Why should you give engineers big monitors?” If you have to ask the question, then you probably won’t understand the answer. The short answer is: don’t, if you don’t want good engineers to work for you; and if they still do, they’ll be less productive. The same is true for sales people and commissions.
The argument is as follows:
Imagine that you are a great sales person who knows you can sell $10M worth of product in a year. Company A pays commissions and, if you do what you know you can do, you will earn $1M/year. Company B refuses to pay commissions for “cultural reasons” and offers $200K/year. Which job would you take? Now imagine that you are a horrible sales person who would be lucky to sell anything and will get fired in a performance-based commission culture, but may survive in a low-pressure, non-commission culture. Which job would you take?
But the key message for me is:
Speaking of culture, why should the sales culture be different from the engineering culture? To understand that, ask yourself the following: Do your engineers like programming? Might they even do a little programming on the side sometimes for fun? Great. I guarantee your sales people never sell enterprise software for fun. [emphasis mine].
Now why does all this matter? Well personally, it still matters a bit, but it matters less and less. I am towards the end of my career, and for the most part I have loved what I have done. Sure, the NHS is increasingly a nightmare place to work, but it has been in decline most of my life: I would not recommend it unreservedly to anybody. But I have loved my work in a university. Research was so much fun for so long, and the ability to think about how we teach and how we should teach still gives me enormous pleasure: it is, to use the cliche, still what I think about in the shower. The very idea of work-life balance was — when I was young and middle-aged at least — anathema. I viewed my job as a creative one, and building things and making things brought great pleasure. This did not mean that you had to work all the hours God made, although I often did. But it did mean that work brought so much pleasure that the boundary between my inner life and what I got paid to do was more apparent to others than to me. And in large part that is still true.
Now in one sense, this whole question matters less and less to me personally. In the clinical area, many if not most clinicians I know now feel that they resemble those on commission more than the engineers. Only they don’t get commission. Most of my med school year who became GPs will have bailed out. And I do not envy the working lives of those who follow me in many other medical specialties in hospital. Similarly, universities were once full of academics who you almost didn’t need to pay, such was their love for the job. But modern universities have become more closed and centrally managed, and less tolerant of independence of mind.
In one sense, this might go with the turf — I was 60 last week. Some introspection, perhaps. But I think there really is more going on. I think we will see more and more people bailing out as early as possible (no personal plans, here), and we will need to think and plan for the fact that many of our students will bail out of the front line of medical practice earlier than we are used to. I think you see the early stirrings of this all over: people want to work less than full-time; people limit their NHS work vis-à-vis private work; some seek administrative roles in order to minimise their face-to-face practice; and even young medics soon after graduation are looking for portfolio careers. And we need to think about how to educate our graduates for this: our obligations are to our students first and foremost.
I do not think any of these responses are necessarily bad. But working primarily in higher education has one advantage: there are lots of different institutions, and whilst in the UK there is a large degree of groupthink, there is still some diversity of approach. And if you are smart and you fall outwith the clinical guilds / extortion rackets, there is no reason to stay in the UK. Medics, and recent graduates especially, need to think more strategically. The central dilemma is that, depending on your specialty, your only choice might appear to be to work for a monopolist, one which seeks to control not so much the patients cradle-to-grave, but those staff who fall under its spell, cradle-to-grave. But there are those making other choices — just not enough, so far.
An aside. Of course, even those who have achieved the most in research do not always want to work for nothing, post retirement. I heard the following account first hand from one of Fred Sanger’s previous post-docs. The onetime post-doc was now a senior professor, charged with opening and celebrating a new research institution. Sanger — a double Laureate — would be a great catch as a speaker. All seemed well until the man who personally created much of modern biology realised the date chosen was a couple of days after he was due to retire from the LMB. He could not oblige: the [garden] roses need me more!
No, I could not run, lead or guide a pharma company. Tough business. But I just hope their executives do not really believe most of what they say. So, in an FT article, the new chief executive of Novartis, Vas Narasimhan, has vowed to slash drug development costs: there will be a productivity revolution; 10-25 per cent could be cut from the cost of trials if digital technology were used to carry them out more efficiently.
And of course:
(he) plans to partner with, or acquire, artificial intelligence and data analytics companies, to supplement Novartis’s strong but “scattered” data science capability.
I really think of our future as a medicines and data science company, centred on innovation and access (read that again, parenthesis mine)
And to add insult to injury:
Dr Narasimhan cites one inspiration as a visit to Disney World with his young children where he saw how efficiently people were moved around the park, constantly monitored by “an army of Massachusetts Institute of Technology-trained data scientists”.
And not that I am a lover of Excel…
No longer rely on Excel spreadsheets and PowerPoint slides, but instead “bring up a screen that has a predictive algorithm that in real time is recalculating what is the likelihood our trials enrol, what is the quality of our clinical trials”
Just recall that in 2000 it would have been genes / genetics / genomics rather than data / analytics / AI / ML etc.
So, looks to me like lots of cost cutting and optimisation. No place for a James Black, then.
This is from an article in Nature. And the problem is resolving differences in experimental results between labs.
But subtle disparities were endless. In one particularly painful teleconference, we spent an hour debating the proper procedure for picking up worms and placing them on new agar plates. Some batches of worms lived a full day longer with gentler technicians. Because a worm’s lifespan is only about 20 days, this is a big deal. Hundreds of e-mails and many teleconferences later, we converged on a technique but still had a stupendous three-day difference in lifespan between labs. The problem, it turned out, was notation — one lab determined age on the basis of when an egg hatched, others on when it was laid.
Now my title is from Blake:
He who would do good to another must do it in Minute Particulars: general Good is the plea of the scoundrel, hypocrite, and flatterer, for Art and Science cannot exist but in minutely organized Particulars.
And yet, I think I am using the quote in a way he would have strongly disagreed with. Some of the time ‘Minute particulars’ are not the place to be if you want to change the world. Especially in biology.
If you are interested in the history of science, or, put another way, in how science can work when it is allowed to, check out this wonderful site: ‘Molecular Tinkering: How Edinburgh changed the face of molecular biology’.
I first became aware of ‘Edinburgh’ and its role in molecular biology by accident. In the late 1970s I spent three months here doing a psychiatry elective, on the unit of Prof Bob Kendell. I stayed in self-catering University accommodation through the summer, knowing nobody, spending most of my free time tramping around the city on foot. Not always bored. There was a motley crew of people staying in the small hall, and one was a biologist from China. I had never met anybody from this faraway and mysterious place. Of course, mostly I asked about barefoot doctors and the like, and questions that you would now simply ask your mobile phone. No Alexa then. But I wondered why he was here. Molecular biology, he explained. Now, I could recite the story Jim Watson told about Watson and Crick, and even mention a few names who followed. But no more than a handful. ‘Recombinant DNA’ might have passed my lips, but please do not ask me to explain it. But from him, I learned that something special had awoken. And for me it was literally a couple more years before the waves of this second revolution broke on the shores of MRC Clinical Training Fellowship applications.
Throughout the 20th century film editors worked by identifying key scenes in their film reels, snipping precise frames and re-joining them to create a new narrative. This is exactly how the Murrays wanted to work with the long molecules of DNA. Restriction enzymes would be the editor’s scissors, making the cuts. As well as doing the cutting, the enzymes would be the editor’s eyes: recognising the precise frame at which to snip.
Or if you ever doubt that the space for discovery is infinite, listen to this from Noreen Murray:
“Looking back, it is amazing that Ken and I were the only people in the UK in the late 60s and early 70s to notice the potential of restriction enzymes.”
Or this, for a definition of what research leadership is all about:
Waddington had a real knack for selecting and recruiting interesting but often unproven biologists.
Little evidence is found for higher-order organization into 30- or 120-nm fibers, as would be expected from the classic textbook models based on in vitro visualization of non-native chromatin.
Well, chromatin structure might not be everybody’s cup of tea, but I once shared an ‘office’ with a couple of French / Polish researchers in Strasbourg. It was all above my head, so I had to make do with the textbooks, and I stuck to my simple cloning of upstream regulatory regions of a retinoid receptor. Now, it appears from this article in Science, the textbooks will need rewriting. Science works.
Many, many years ago I wrote a few papers about — amongst other things — the statistical naivety of the EBM gang. I enjoyed writing them but I doubt they changed things very much. EBM, as Bruce Charlton pointed out many years ago, has many of the characteristics of a cult (or was it a zombie? — you cannot kill it because it is already dead). Anyway, one of the reasons I disliked a lot of EBM advocates was that I think they do not understand what RCTs are, and of course they are often indifferent to science. Now, in one sense, these two topics are not linked. Science is meant to be about producing broad-ranging theories that both predict how the world works and explain what goes on. Sure, there may be lots of detail on the way, but that is why our understanding of DNA and genetics today is so different from that of 30 years ago.
By contrast, RCTs are usually a form of A/B testing. Vital, in many instances, but an activity that is often a terminal side road rather than a crossroads on the path to understanding how the world works. That is not to say they are not important, nor worthy of serious intellectual endeavour. But the latter activity is for those who are capable of thinking hard about statistics and design. Instead, the current academic space makes running or enrolling people in RCTs some sort of intellectual activity: it isn’t; rather, it is a part of professional practice, just as seeing patients is. Companies used to do it all themselves many decades ago, and they didn’t expect to get financial rewards from the RAE/REF for this sort of thing. There are optimal ways to stack shelves that maths geeks get excited about, but those who do the stacking do not share in the kudos — as in the CUDOS — of discovery.
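To make the A/B point concrete, here is a minimal sketch (in Python, with invented counts) of what a standard two-arm superiority trial boils down to once the logistics are stripped away: count responders in each arm and compare the proportions.

```python
# A two-arm RCT analysed as a plain A/B test: did arm A beat arm B?
# The counts are made up purely for illustration.
from scipy.stats import fisher_exact

responders_a, n_a = 30, 100   # hypothetical treatment arm
responders_b, n_b = 18, 100   # hypothetical control arm

table = [
    [responders_a, n_a - responders_a],   # responders vs non-responders
    [responders_b, n_b - responders_b],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.4f}")
```

Nothing in that comparison explains why a treatment works; it only tells you whether one arm did better than the other, which is rather the point.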
Anyway, this is by way of highlighting a post I came across by Frank Harrell, with the title:
Randomized Clinical Trials Do Not Mimic Clinical Practice, Thank Goodness
Harrell is the author of one of those classic books… But I think the post speaks to something basic. RCTs are not facsimiles of clinical practice, but some sort of bioassay to guide what might go on in the clinic. Metaphors if you will, but acts of persuasion, not brittle mandates. This all leaves aside worthy debates on the corruption that has overtaken many areas of clinical measurement, but others can speak better to that than me.
 I really couldn’t resist.
Hours later, with Blackburn’s approval, the institute issued comments on the scientific records of the two women. It had “invested millions of dollars” in each scientist, Salk stated, but a “rigorous analysis” showed each “consistently ranking below her peers in producing high quality research and attracting” grants. Neither has published in Cell, Nature, or Science in the last 10 years, it said. Lundblad’s salary “is well above the median for Salk full professors ($250,000) … yet her performance has long remained within the bottom quartile of her peers.” The institute wrote that Jones’s salary, in the low $200,000 range, “aligns” with salaries at top universities, although she “has long remained within the bottom quartile of her peers.”
This is from an article in Science (Gender discrimination lawsuit at Salk ignites controversy). The context is a sex discrimination case, but the account is about an astonishing lack of vision. The short-termism of stock markets is not the only way value is being destroyed by a cash-in-hand mentality. The best rugby coaches have rarely been the greatest players. Nobel laureates may not be the best leaders. No (bull) shit here…
Of all the titles submitted to the 2014 research excellence framework, only “around a half in most subjects achieved at least one retail sale in the UK in the years 2008-14”.
Interesting piece in today’s NEJM on data sharing and clinical trials and how meaningfully patients are involved.
Patients also said they wanted trial results to be shared with participants themselves, along with an explanation of what the results mean for them — something that generally doesn’t happen now. In addition, most participants said they had entered a trial not to advance knowledge, but to obtain what their doctor thought was the best treatment option for them. All of which raises some questions about how well patients understand consent forms.
Which reminded me of a powerful paper by the late David Horrobin in the Lancet in which, from the position of being a patient with a terminal illness, he challenged what many say:
The idea that altruism is an important consideration for most patients with cancer is a figment of the ethicist’s and statistician’s imagination. Of course, many people, when confronted with the certainty of their own death, will say that even if they cannot live they would like to contribute to understanding so that others will have a chance of survival. But the idea that this is a really important issue for most patients is nonsense. What they want is to survive, and to do that they want the best treatment.
I have always been suspicious of the ‘equipoise’ argument and terrified when I see participation rates in clinical trials used as an NHS performance measure. It is bad enough that doctors might end up acting as agents of the state. But this is worse than shilling for pharma.
The NEJM piece also draws attention to people’s reluctance to share with commercial entities. What this tells you is that many people view some corporations — pharma, in this instance — as pirates. Or worse. This topic is not going away. Nor is the need for (commercial) pharma to finance and develop new drugs.
Institutions with histories matter. It is just that in many instances innovation often comes from the periphery. I think this is often true in many fields: science, music, even medical education. It is not always this way, but often enough to make me suspicious of the ‘centre’. The centre of course gets to write the history books.
An article by Mark Mazower in the NYRB, praising Richard Evans, the historian of the Third Reich, caught my attention. It seems that nobody in the centre was too excited about understanding the event that changed much of the world forever. Mazower writes:
If you wanted to do research on Saint Anselm or Cromwell, there were numerous supervisors to choose from at leading universities; if you wanted to write about Erich Ludendorff or Hitler, there was almost no one. The study of modern Europe was a backwater, dominated by historians with good wartime records and helpful Whitehall connections—old Bletchley Park hands and former intelligence officials, some of whom had broken off university careers to take part in the war and then returned.
Forward-looking, encouraging of the social sciences, open to international scholarship from the moment of its establishment, St. Antony’s is the college famously written off by the snobbish Roddy Martindale in John le Carré’s Tinker, Tailor, Soldier, Spy as “redbrick.” The truth is that it was indeed the redbrick universities, the creations of the 1950s and 1960s, that gave Evans and others their chance and shaped historical consciousness as a result. The Evans generation, if we can call them that, men (and only a very few women) born between 1943 and 1950, came mostly from the English provinces and usually got their first jobs in the provinces, too.
It is interesting how academics who had had career breaks were important. And how you often need new institutions to change accepted practice. All those boffins whose careers were interrupted by the war led to the flowering of invention we saw after the Second World War. You have to continually recreate new types of ivory towers. But I see little of this today. Instead, we live in an age of optimisation, rather than of optimism that things can be different. The future is being captured by the present ever more than it once was. At least in much of the academy.
During the 7-year period between the introduction of tacrolimus in preclinical studies in 1987 and the FDA approval of tacrolimus in 1994, the transplant program at the University of Pittsburgh produced one peer-reviewed article every 2.7 days, while transplanting an organ every 14.2 hours.
Always thought these surgeons needed to spend more time in theatre.
Science. Obituary of Thomas Starzl.
I always recommend that people read David Healy’s Psychopharmacology 1, 2, and 3, together with Jack Scannell’s articles (here and here), to get a feel for exactly what drug discovery means. What is beyond doubt is that we are not as efficient at it as we once were. There is lots of blame to go around. The following gives a flavour of some of the issues (or at least one take on the core issues).
From a review by Christopher-Paul Milne in Health Affairs of A Prescription For Change: The Looming Crisis In Drug Development, by Michael S. Kinch (Chapel Hill, NC: University of North Carolina Press, 2016).
He chronicles these industries’ long, strange trip from being the darling of the investor world and beneficiary of munificent government funding to standing on the brink of extinction, and he details the “slow-motion dismantlement” of their R&D capacity with cold, hard numbers because “the data will lead us to the truth.” There are many smaller truths, too: Overall, National Institutes of Health (NIH) funding has fallen by 25 percent in relative terms since a funding surge ended in 2003; venture capital is no longer willing to invest in product cycles that are eleven or twelve years long; and biotech companies may have to pay licensing fees on as many as forty patents for a decade before they even get to the point of animal testing and human trials….
In an effort to survive in such a costly and competitive environment, pharmaceutical companies have shed their high-maintenance R&D infrastructure, maintaining their pipelines instead by acquiring smaller (mostly biotech) companies, focusing on the less expensive development of me-too drugs, and buying the rights to promising products in late-stage development. As a consequence, biotech companies are disappearing (down from a peak of 140 in 2000 to about 60 in 2017), and the survivors must expend an increasing proportion of their resources on animal and human testing instead of the more innovative (and less costly) identification of promising leads and platform technologies. Similarly, some of academia’s R&D capacity, overbuilt in response to the NIH funding surge, now lies fallow, while seasoned experts and their promising protégés have moved on to other fields.
With many powerful academicians, lobbyists, professional societies, funding agencies, and perhaps even regulators shifting away from trials to observational data, even for licensing purposes, clinical medicine may be marching headlong to a massive suicide of its scientific evidence basis. We may experience a return to the 18th century, before the first controlled trial on scurvy. Yet, there is also a major difference compared with the 18th century: now we have more observational data, which means mostly that we can have many more misleading results.
I think the situation is even worse. Indeed, we can only grasp the nature of reality with action, not with contemplation (pace Ioannidis). But experiments (sic) in the form of RCTs are also part of the problem: we only understand the world by testing ideas that appear to bring coherency to the natural world. A/B testing is inadequate for this task — although it may well be all we have left.
Q: What’s at stake when scientists fib?
A: Science is the last institution where being honest is a quintessential part of what you’re doing. You can do banking and cheat, and you’ll make more money, and that money will still buy you the fast cars and the yachts. If you cheat in science, you’re not making more facts, you’re producing nonfacts, and that is not science. Science still has this chance of giving a lead to democratic societies because scientific values overlap strongly with democratic values.
Interview with Harry Collins about his book Gravity’s Kiss: The Detection of Gravitational Waves (MIT Press, 2017, 414 pp.).
There was an interesting paper published in Nature recently on the topic of automated skin cancer diagnosis. Readers of my online work will know it is a topic close to my heart.
Here is the text of a guest editorial I wrote for Acta about the paper. Acta is a ‘legacy’ journal that made the leap to full OA under Anders Vahlquist’s supervision a few years back — it is therefore my favourite skin journal. This month’s edition is the first without a paper copy, existing online only. The link to the edited paper and references is here. I think this is the first paper in their first online-only edition :-). Software is indeed eating the world.
When I was a medical student close to graduation, Sam Shuster, then Professor of Dermatology in Newcastle, drew my attention to a paper that had just been published in Nature. The paper, from the laboratory of Robert Weinberg, described how DNA from human cancers could transform cells in culture (1). I tried reading the paper, but made little headway because the experimental methods were alien to me. Sam did better, because he could distinguish the underlying melody from the supporting orchestration. He told me that whilst there were often good papers in Nature, perhaps only once every ten years or so would you read a paper that would change both a field and the professional careers of many scientists. He was right. The paper by Weinberg was one of perhaps fewer than a dozen that defined an approach to the biology of human cancer that still resonates forty years later.
Revolutionary papers in science have one of two characteristics. They are either conceptual, offering a theory that is generative of future discovery — think DNA, and Watson and Crick. Or they are methodological, allowing what was once impossible to become almost trivial — think DNA sequencing or CRISPR technology. Revolutions in medicine are slightly different, however. Yes, of course, scientific advance changes medical practice, but to fully understand clinical medicine we need to add a third category of revolution. This third category comes from papers that change what doctors do in their everyday working lives and how they work. Examples would include fibreoptic instrumentation and modern imaging technology. To date, dermatology has escaped such revolutions, but a paper recently published in Nature suggests that our time may have come (2).
The core clinical skill of the dermatologist is categorising morphological states in a way that informs prognosis with, or without, a therapeutic intervention. Dermatologists are rightly proud of these perceptual skills, although we have little insight as to how this expertise is encoded in the human brain. Nor should we be smug about our abilities as, although the domains are different, the ability to classify objects in the natural world is shared by many animals, and often appears effortless. Formal systems of education may be human-specific, but the cortical machinery that allows such learning is widespread in nature.
There have been two broad approaches to trying to imitate these skills in silico. In the first, particular properties (shape, colour, texture etc.) are explicitly identified and, much as we might add variables to a linear regression equation, the information is used to try and discriminate between lesions in an explicit way. Think of the many papers using rule-based strategies such as the ABCD system (3). This is obviously not the way the human brain works: a moment’s reflection about how fast an expert can diagnose skin cancers, and how limited we are in being able to handle formal mathematics, tells us that human perceptual skills do not work like this.
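(To make the contrast concrete, a minimal sketch of this first, rule-based approach is given below. The weights and threshold are those of the published ABCD dermoscopy rule, but the example lesion, and the sketch itself, are purely illustrative.)

```python
# The explicit, rule-based approach: hand-picked features are scored and
# combined with fixed weights, much like terms in a regression equation.
# Weights and threshold follow the ABCD rule of dermoscopy; the example
# lesion scores are invented.

def total_dermoscopy_score(asymmetry, border, colours, structures):
    """ABCD rule: a weighted sum of four hand-crafted features."""
    return 1.3 * asymmetry + 0.1 * border + 0.5 * colours + 0.5 * structures

# A hypothetical lesion: asymmetric in both axes, abrupt border cut-off in
# 4 of 8 segments, 3 colours, 3 dermoscopic structures.
tds = total_dermoscopy_score(asymmetry=2, border=4, colours=3, structures=3)
print(f"TDS = {tds:.2f}: {'suspicious' if tds > 5.45 else 'less suspicious'}")
```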
There is an alternative approach, one that to some extent almost seems like magic. The underlying metaphor is as follows. When a young child learns to distinguish between cats and dogs, we know the language of explicit rules is not used: children cannot handle multidimensional mathematical space or complicated symbolic logic. But feedback on what the child thinks allows the child to build up his or her own model of the two categories (cats versus dogs). With time, and with positive and negative feedback, the accuracy of the perceptual skills increases — but without any formal rules that the child could write down or share. And of course, since it is a human being we are talking about, we know all of this process takes place within and between neurons.
Computing scientists started to model the way they believed collections of neurons worked over four decades ago. In particular, it became clear that groups of in silico neurons could order the world based on positive and negative feedback. The magic is that we do not have to explicitly program their behaviour; rather, they just learn, but — since this is not magic after all — we have got much better at building such self-learning machines. (I am skipping any detailed explanation of such ‘deep learning’ strategies here.) What gives this field its current immediacy is a combination of increases in computing power, previously unimaginably large data sets (for training), advances in how to encode such ‘deep learning’, and wide potential applicability — from email spam filtering, terrorist identification and online recommendation systems, to self-driving cars. And medical imaging along the way.
In the Nature paper by Thrun and colleagues (2), such ‘deep learning’ approaches were used to train computers on over 100,000 medical images of skin cancer or mimics of skin cancer. The inputs were therefore ‘pixels’ and the diagnostic category (only). If this last sentence does not shock you, you are either an expert in machine learning, or you are not paying attention. The ‘machine’ was then tested on a new sample of images and — since modesty is not a characteristic of a young science — its performance compared with that of over twenty board-certified dermatologists. Judged on standard receiver operating characteristic (ROC) curves, the machine equalled, if not out-performed, the humans.
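(For readers who want to see the shape of the thing, a toy sketch of the loop described above follows: labelled images in, error-driven weight updates out, performance summarised as an ROC area. Every detail, from the image size to the two diagnostic categories and the random stand-in data, is invented for illustration and bears no relation to the network reported in the paper.)

```python
# A toy sketch of supervised image classification: pixels and a diagnostic
# label are the only inputs, and the network's errors are fed back to
# adjust its weights. All data and dimensions here are invented.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

# Stand-in for a labelled training set: 64 small RGB "images", each
# tagged 0 (benign) or 1 (malignant). Real work would load photographs.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(                       # a very small convolutional net
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                         # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),               # scores for the two categories
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)    # how wrong were the guesses?
    loss.backward()                          # feedback: propagate the error
    optimiser.step()                         # nudge the weights accordingly

# Summarise performance the way the paper does, via an ROC curve, here
# reduced to its area (AUC) and, for brevity, computed on the training set.
with torch.no_grad():
    scores = model(images).softmax(dim=1)[:, 1]
print("ROC AUC:", roc_auc_score(labels.numpy(), scores.numpy()))
```

The real system differs chiefly in scale: vastly more images, a much deeper network, and testing on images the machine had never seen.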
There are of course some caveats. The dermatologists were only looking at single photographic images, not the patients (4); the images are possibly not representative of the real world; and some of us would like to know more about the exact comparisons used. However, I would argue that there are also many reasons for imagining that the paper may underestimate the power of this approach: it is striking that the machine was learning from images that were relatively unstandardised and perhaps noisy in many ways. And if 100,000 seems large, it is still only a fraction of the digital images that are acquired daily in clinical practice.
It is no surprise that the authors mention the possibilities of their approach when coupled with the most ubiquitous computing device on this planet — the mobile phone. Thinking about the impact this will have on dermatology and dermatologists would require a different sort of paper from the present one but, as Marc Andreessen once said (4), ‘software is eating the world’. Dermatology will survive, but dermatologists may be on the menu.
Full paper with references on Acta is here.