A couple of sentences from this article on the value of interdisciplinary research got me thinking — or at least pulling some memes off my dusty intellectual shelf of clutter. The article is about Ian Goldin, and some ideas I am sure he talks about in his new book, which I haven’t read.
He added that “one of the reasons” for the 2008 financial crisis was that “people lost their ethics, their judgement, and their wisdom” because of disciplinary silos.
I agree. I remember The Economist putting it more harshly: ‘…professors fixated on crawling along the frontiers of knowledge with a magnifying glass’ [Economist, December 10th 2011]. Economics, a bit like psychiatry in medicine, is the canary in the mine. Nor would teaching mandatory ethics courses (‘I am certified in ethics A+!’) do very much. Enron’s management were stars at HBS. This is one of the tragedies of many modern universities: so busy edging their way up largely meaningless ranking scales that they are unable to tackle the problems society faces.
Goldin was quoted as saying, ‘[there is] a “real pressure” on universities to be “thinking ahead” and teaching information that will remain relevant when current students “reach their mid-careers”’.
There are two aspects to this. One is that the whole idea of education is a way of hedging against a changing environment. If the world were constant, we could dispense with much (but not all) education — training would suffice. This is just another way of saying that advance comes when sons do not do what their fathers did (‘20th century physics was made by the sons of cobblers’. Substitute your gender, please). But from a teaching perspective there is another facet to think about. We cannot adequately judge how well we educate our students over the short term (alone). Yes, they can pass finals. Yes, they can take a history etc. But the test of education is how well they behave and think 20 years down the line. This is a large search space that we can only navigate using theories about what makes the world change, and what makes people push at the boundaries: do not cite Cronbach’s alpha at me. But in examinations and certification, like so much else in science and society, we are blinded by the apparent certitude of short term goals. And the allure of summary measures, rather than the messiness of the real world.
This comment (and phrase) from Bruce Schneier struck a chord with me.
NYU professor Helen Nissenbaum gave an excellent lecture at Brown University last month, where she rebutted those who think that we should not regulate data collection, only data use: something she calls “big data exceptionalism.” Basically, this is the idea that collecting the “haystack” isn’t the problem; it is what is done with it that is… Under this framework, the problem with wholesale data collection is not that it is used to curtail your freedom; the problem is that the collector has the power to curtail your freedom. Whether they use it or not, the fact that they have that power over us is itself a harm.
Of course, as Alan Kay said, we need ‘big ideas’ rather than assuming ‘big data’ will do our thinking for us. This is not to deny that large data sets are useful, or that they allow you to answer questions you might not have been able to answer before. But A/B testing only gets you so far. And beware technicians who want to mould nature to their method, not vice versa; or change what meaningful consent means.
Daniel Sarewitz in Nature
The quality problem has been widely recognized in cancer science, in which many cell lines used for research turn out to be contaminated. For example, a breast-cancer cell line used in more than 1,000 published studies actually turned out to have been a melanoma cell line. The average biomedical research paper gets cited between 10 and 20 times in 5 years, and as many as one-third of all cell lines used in research are thought to be contaminated, so the arithmetic is easy enough to do: by one estimate, 10,000 published papers a year cite work based on contaminated cancer cell lines. Metastasis has spread to the cancer literature… That problem is likely to be worse in policy-relevant fields such as nutrition, education, epidemiology and economics, in which the science is often uncertain and the societal stakes can be high.
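The ‘easy arithmetic’ in the quoted passage can be made explicit. The input figures below are illustrative assumptions chosen to be consistent with the numbers Sarewitz cites, not data taken from the article:

```python
# Back-of-envelope version of the contaminated cell-line estimate.
# All inputs are assumptions for illustration, not data from the article.
papers_using_cell_lines_per_year = 10_000  # assumed annual volume
contaminated_fraction = 1 / 3              # "as many as one-third ... contaminated"
citations_per_paper_5yr = 15               # "between 10 and 20 times in 5 years"

contaminated_papers = papers_using_cell_lines_per_year * contaminated_fraction
citing_papers_per_year = contaminated_papers * citations_per_paper_5yr / 5

print(round(citing_papers_per_year))  # → 10000 citing papers a year
```

The point of the sketch is how quickly a one-third contamination rate compounds through the citation network, whatever the exact inputs.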
See this great piece by Bruce Charlton
Professional science has arrived at this state in which the typical researcher feels free to indulge in unrestrained careerism, while blandly assuming that the ‘systems’ of science will somehow transmute the dross of his own contribution into the gold of truth. It does not: hence the preponderance of irreproducible publications.
More structural issues in higher-ed that appear hard to change (see previous post). From Science:
Unfortunately, it is not clear why Ph.D. students pursue postdoc positions and how their plans depend on individual-level factors, such as career goals or labor market perceptions.
And in Nature
The number of US faculty members who have tenure or are on the tenure track is falling, according to a report by the American Association of University Professors in Washington DC. Over the past 40 years, the proportion of the academic labour force that is in a full-time tenured position has shrunk by one-quarter, and the proportion in tenure-track posts has halved, reports Higher Education at a Crossroads.
Across large swathes of higher ed there is an enormous amount of cross-subsidy, much of it based on misinformation about ‘what you are buying’. Tech and data will start to unpick at much of this. The future for many institutions is uncertain.
Two articles about Sci-hub (here and here). No, I am not encouraging illegal downloading. But I hope we can look back in a few years with shame at the way journals, their publishers and those who have a vested interest in the mismeasure of science have hindered educational advance, and wasted public money. Some specialty journals do indeed pour money back into their subject, but it is a minority. All too often medical journals are a way of making money for publishers and specialist societies. There will be an iTunes moment (I hope).
This quote is from the ever quotable John Ioannidis, in a commentary on the disagreements about how to interpret the evidence linking salt intake and health.
“Sometimes I wonder whether published observational epidemiology is simply reflecting a power-weighted vote count of the opinions of epidemiologists. What does a risk ratio of 1.3 mean? Perhaps it means that those who believe in the risk factor have 1.3-fold more powerful opinions than those who don’t believe in the risk factor. In this (hypothetical) nightmare situation, risk ratios are accurate measures of epidemiologists’ net bias.
Systematic reviews cannot settle this conundrum after the fact. Even systematic reviews of randomized trials can reach almost any conclusion the reviewers believe in.”
Gary Taubes, talking about our understanding of obesity:
“Here’s another possibility: The 600,000 articles — along with several tens of thousands of diet books — are the noise generated by a dysfunctional research establishment.”
I think it was James Le Fanu who suggested that closing most UK departments of epidemiology and public health might result in a net gain to human health. So much research work is zombie science: you can’t kill it, because it is already dead (I owe this formulation to Bruce Charlton). But the problem is not just with observational research. Ironically, the fact that Doll and Hill were right may, in the long term, have been a harmful influence on discovery.
Some Neanderthals would — based on MC1R sequence — be expected to have red hair. What has always caused me confusion is the way that dates for everything to do with human paleohistory, and the various representations of our evolution, are revised based on n of 1 publications. No doubt the story will get easier, but I think silence for a while on the ‘greatest story ever told’ would be in order. At least from me.
Note added: And then…
Venki Ramakrishnan was on the radio the other day. I cannot remember his exact words but they were something to the effect that he wanted ‘not to generate lots of data, but instead, lots of understanding’. Says it all.
People always seem to get the story of invention wrong. This is about the birth of email and the death of Ray Tomlinson.
“It wasn’t an assignment at all, he was just fooling around; he was looking for something to do with ARPANET,” Raytheon spokeswoman Joyce Kuzman said in a statement about Tomlinson’s death.
When Tomlinson showed his early work on email to his coworker at Bolt Beranek and Newman (BBN), Jerry Burchfiel, he was initially warned that he shouldn’t show anyone what he was doing. “Don’t tell anyone!” Burchfiel reportedly said. “This isn’t what we’re supposed to be working on.”
Tomlinson’s death gives us a chance to look at how various innovations come to pass. They are rarely, if ever, the work of one person. And in the case of email, Tomlinson contributed greatly, along with people like Bob Clements of BBN, Dick Watson of SRI International, and Stephen Lukasik of ARPA (now known as Darpa). And they all managed to anger the Department of Defense for quite literally being too ahead of their time.
We owe the word ‘revolution’ — in the meaning of changing the world — to Galileo and the motion of the planets. You can almost define invention as that which disturbs: it is why Freeman Dyson titled one of his books about science ‘Disturbing the Universe’. But each generation wants to forget this, ours perhaps more than others. One of the great things about computing over the last half century is that sometimes the barriers to entry have been so low: biology and medicine are much harder. The other lesson: do not be too far ahead of your time.
This made me laugh. I have got used to MC1R mutations and red hair in Neanderthals, but this article (full research paper in Science here) brought a smile to my face, even if I am still a little hazy on the genetics.
JBS Haldane once commented ‘that God would appear to be inordinately fond of beetles’, based on the observation that the world was so full of different species of beetles.
I have long had similar thoughts about seborrhoeic keratoses. God must be inordinately fond of them. Seborrhoeic keratoses are benign skin tumours, some of which contain identified mutations: they generally attract little serious research interest (apart from yours truly, of course). However, their significance clinically is enormous. This is because they are incredibly common as people move into their fourth decades and beyond, and because they vary so much in their morphological appearance. They mimic everything, including melanoma. So, most things referred to as possible melanomas in many clinics will turn out to be harmless seborrhoeic keratoses. Of course, a more cynical view is that since seborrhoeic keratoses are such great mimics, they in effect create lots of work for dermatologists. I suppose I should say thank you, next time I bump into one of my distant cousins, but the basis of the link — if confirmed — also deserves some serious mechanistic thought.
Nice BBC radio programme. Impressed with the presenters, rather than yours truly.
Martin Wolf, the influential economist, wrote in the FT sometime back that the public might soon view pharma companies in the same way many viewed banks (or at least bankers). Here is a news item from today’s FT.
Dr Mikael Dolsten, president of worldwide research and development at Pfizer, said he was aware of the unease but that the combined company would occupy a “sweet spot” in R&D.
Brent Saunders, Allergan chief executive, has questioned the efficiency of discovery research conducted by big pharma groups. In an interview with the FT last year, he argued that smaller companies and academic centres were better suited to this sort of science.
Since then, he has modified his position somewhat, arguing that drug discovery has a role at big pharma companies, providing it is has a high chance of success and is targeted at illnesses where the company already has a strong selection of drugs.
This is all about the merger between Allergan and Pfizer. The arguments may be more nuanced than I want to believe, but this is what happens when ‘financialization’ becomes more important than invention (and long term value). Let’s just call it: the no James Black syndrome.
And can we please skip the dreadful ‘sweet spot’ terminology.
Remember those compare and contrast questions (UC versus Crohn’s; DLE versus LP etc.). Well, look at these two quotes from articles in the same edition of Nature.
The first from the tsunami of papers showing that ‘Something is rotten in the state of Denmark Science’ — essentially that the Mertonian norms for science have been well and truly trampled over.
Journals charge authors to correct others’ mistakes. For one article that we believed contained an invalidating error, our options were to post a comment in an online commenting system or pay a ‘discounted’ submission fee of US$1,716. With another journal from the same publisher, the fee was £1,470 (US$2,100) to publish a letter. Letters from the journal advised that “we are unable to take editorial considerations into account when assessing waiver requests, only the author’s documented ability to pay”.
Discrete Analysis’s [the journal] costs are only $10 per submitted paper, says Gowers; money required to make use of Scholastica, software that was developed at the University of Chicago in Illinois for managing peer review and for setting up journal websites. (The journal also relies on the continued existence of arXiv, whose running costs amount to less than $10 per paper). A grant from the University of Cambridge will cover the cost of the first 500 or so submissions, after which Gowers hopes to find additional funding or ask researchers for a submission fee.
Well done the Universities of Cambridge and Cornell (arXiv). For science, the way forward is clear. But for much clinical medicine, including much of my own field, we need to break down the barriers between publication and posting online information that others may find useful. This cannot happen until the financial costs approximate to zero.
Nor did Mann have to worry that a good idea would struggle for funding. Whereas many government research heads fret about budgets that don’t at least keep pace with inflation, past DARPA directors are surprisingly blasé about the agency’s finances. “I never really felt constrained by money,” Tether says. “I was more constrained by ideas.” In fact, aerospace engineer Verne (Larry) Lynn, DARPA’s director from 1995 to 1998, says he successfully lobbied Congress to shrink his budget after the Clinton administration had boosted it to “dangerous levels” to finance a short-lived technology reinvestment program. “When an organization becomes bigger, it becomes more bureaucratic,” Lynn told an interviewer in 2006.
A little about what makes DARPA tick. This reminds me of some of the comments from Xerox PARC (from John Seely Brown? — I can’t remember) about how important it was to keep the research budget (not the D, of R+D) below 1%. Any higher, and the bean counters would get interested, and start trying to manage the budget. [See my previous post from Alan Kay]. Not all increases in funding are a good idea (unless the success metric is money, rather than discovery).
After yesterday’s post I couldn’t resist one more slide from Alan Kay. This is about how we once knew how to fund and support real discovery and invention.
From Alan Kay. If it comes from a Turing award winner, maybe people might take notice. Perhaps not.
‘The world that allowed Brenner and Crick, or Hubel and Wiesel no longer exists, because the institutions no longer believe in allowing people to make mistakes: you are not allowed to declare bankruptcy in a scientific career. And yet we know that most people — whether in tech — or academia, need to pivot after failure and bankruptcy.’
“Besides inventing quantum theory, Planck had made another great contribution to science by welcoming and generously supporting the young Albert Einstein. In 1905, when Einstein, then an unknown employee of the Swiss patent office in Bern, sent five revolutionary papers to the physics journal that Planck edited in Berlin, Planck immediately recognized them as works of genius and published them quickly without sending them to referees. He did not agree with all of Einstein’s ideas, but he published all of them.”
Freeman Dyson in the NYRB. Real review by one’s peers, and not a checklist in sight.
There is a touching video of Marvin Minsky here. Steven Levy’s wonderful book on how some of this revolution took place is compelling reading (as in the Model Railroad club). You have to wonder how and why so much fundamental and successful (‘an important discovery every few days’) work was done in such a short period of time, with so little money. And across the pond, and elsewhere, the biological revolution that dominated the second half of the 20th century was being laid out with even less resource. Not so much ‘events, dear boy, events’ but ‘ideas, dear kids, ideas’.
A colleague says this is philosophy (not meant as a compliment). I would like to think it is an essay about some problems that face us. Abstract below, full text pdf on my publications page.
Successfully delivering medical care and acquiring and disseminating the new knowledge that underpins clinical advance requires dealing with a number of both theoretical and organizational issues that may impede progress. Firstly, we have to move beyond the idea that biology and medicine are synonymous, and realize that tropes such as ‘bench to bedside’ or ‘translational’ frequently do not capture the way medical advance occurs. Medicine is more engineering than science, and the constraints imposed by society and economics, as well as historical models of working, may all delay improvements in healthcare delivery. Secondly, the generation of new ideas is influenced by the social organization and financial underpinning of science. Comparisons with other areas of science and technology suggest that medical science is dysfunctional and lacking in genuine innovation, particularly when cost is factored in as a key denominator. There are reasons to believe that matters are getting worse, and that the climate for revolutionary discovery is less supportive in both academia and industry than it was in the mid-to-late twentieth century. Thirdly, healthcare delivery is subject to a number of factors that limit cheap and effective care. These include payment systems that encourage unnecessary care, self-interest by medical guilds and insurers, and regulators that seek to limit new ways of working. Finally, there is also a striking failure to study and understand medical competence, how we educate doctors and other clinicians, and how technology might help to reduce costs.
British Journal of Dermatology (2015) 173, pp547–551
My intercalated degree (1980) was centred around epidemiology, and analysis of what was then a large dataset. So, I had to learn some FORTRAN and file editing on an IBM 360 / find my way around an MTS system, all in the days of punchcards and line printers, with only one ugly green-on-black monitor to share across a unit. Mostly, I wasted large amounts of printout and got used to retrieving my batch-processed requests early morning, with the comment ‘run aborted’, ‘system error’ etc. I had to wait another 24 hours to find out if I had corrected the errors in my programs. I remember the first time I looked at the GLIM manuals, wondering if I was attempting to read them upside down.
Years later, my interest in statistics resurfaced, mainly as a response to the banality of much of the EBM crowd with their NNT and their apparent lack of understanding of what I disparagingly call ‘probability management’. Most EBM merchants are frustrated chartered accountants. Real science is involved with understanding reality by creating models of how the world works: if you don’t do that, you are not doing science, but rather the D of R+D, or ‘technology assessment’. The theory of how you should do the latter is a proper academic pursuit, but using these ‘products’ is not a matter for the academy — although it is important for many businesses or professions. Using what others invented is what doctors do in the clinic, but it counts not as research, but as honest professional toil (and very valuable work, at that, compared with say much business).
But statistics is hard, and frequently counterintuitive. We do not teach it well to medical students and for all the mantra about doctors’ communication skills, fluency with statistics is a core medical skill, and in many situations, the key communication skill doctors should possess. If you want your students to communicate well, do not stray far without mentioning binomial, poisson, Bayes etc.
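To make concrete why this fluency matters for communication, here is a minimal sketch of Bayes’ theorem applied to diagnostic testing, the classic counterintuitive case. The test characteristics and prevalences below are invented for illustration, not real data:

```python
# Positive predictive value via Bayes' theorem: how likely is disease,
# given a positive test? Illustrative numbers only.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence            # diseased and test-positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy but test-positive
    return true_pos / (true_pos + false_pos)

# The same "90% accurate" test reads very differently at different prevalences:
print(f"{ppv(0.9, 0.9, 0.10):.2f}")  # prevalence 10%: PPV 0.50
print(f"{ppv(0.9, 0.9, 0.01):.2f}")  # prevalence 1%:  PPV 0.08
```

A coin toss at 10% prevalence, and a roughly one-in-twelve chance at 1%: exactly the sort of result doctors need to be able to explain to a patient with a positive screening test.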
What follows is a comment from Sander Greenland, on Deborah Mayo’s excellent site, Error Statistics. I do not know Greenland (although we have emailed each other in the distant past), but I think he is somebody who is always worth listening to. There are a couple of points he makes that chime with me, and they relate to both teaching and the ‘crisis’ in much medical (and scientific) research. He writes:
“My view is that stats in soft sciences (medicine, health, social sciences among others) has been a massive educational and ergonomic failure, often self-blinded to the limits on time and capabilities of most teachers, students, and users. I suspect the reason may be that modern stats was developed and promulgated by a British elite whose students and colleagues were selected by their system to be the very best and brightest, a tiny fraction of a percent of the population. Furthermore, it was developed for fields where sequences of experiments leading to unambiguous answers could be carried out relatively quickly (over several years at most, not decades) so that the most serious errors could be detected and controlled, not left as part of the uncertainty surrounding the topic.”
First, we have to accept that we have failed. Second, all too often we are ‘self-blinded to the limits on time and capabilities of most teachers, students, and users’. This is a widespread problem in the undergraduate medical curriculum, more generally. Students must be researchers, teachers, scholars etc. All too often this is one giant GMC inspired delusion, fuelled by either the NHS (yes, I know there is no longer one NHS), or speciality groups (my organ is bigger than your organ….). Third, much of higher education has not caught up with its audience (pace, elite higher education).
Finally, the sort of experimental science he is talking about (final para) is exactly the sort I love. This is the sort of work Brenner and Crick described when they and a handful of others invented molecular biology. Or for an example from another field, look at how David Hubel and Torsten Wiesel described their work. But sadly, most medical research is no longer like this. It is much, much duller, and much less intellectually secure, because many built-in tests of veracity through experimental design and approach have been replaced by audit of process. Does it have to be this way? A silent bleak voice tells me yes. As for the teaching, that is a problem we can do something about.
[I like his use of ergonomic, too]
“Most universities misjudge their own brands, mistaking a longtime monopoly on access to top students for value.”
“Assessing the quality of UK medical schools: what is the validity of student satisfaction ratings as an outcome measure?” I have not read the paper, but you can guess the answer. This is all rather sad, and dangerous. I would just link back to some words from Harvard’s Larry Lessig I quoted earlier:
The best example of this, and I am sure many of you are familiar with it, is the tyranny of counting in the British educational system for academics, where everything is a function of how many pages you produce that get published by journals. So your whole scholarship is around this metric which is about counting something which is relatively easy to count. All of us have the sense that this can’t be right. That can’t be the way to think about what is contributing to good scholarship.
Well if you can wreck real research, you can certainly wreck good teaching.
And if you think I am complacent about the value proposition we offer to students, Rich DeMillo’s long awaited new book is out: ‘Revolution in Higher Education: How a Small Band of Innovators Will Make College Accessible and Affordable’. I have only just started reading it. There is bags to whinge about, but unlike what AJP Taylor said of some academics, ‘He is 90% right, and 100% wrong’, DeMillo is wrong about the little things, and right about the big things that matter. Some quotes:
American Élites had little incentive to change, and change without the active involvement of the top of the academic pyramid was impossible. But then the outsider theory started to crumble. Influential insiders began to say in public what critics had been reporting all along: unequal access was a threat to higher education. Stanford’s president John Hennessy said that the cost of maintaining a large faculty was not sustainable and predicted that in the future, there would be fewer professors but they would be using new technologies to teach more students. He then took a minisabbatical—a rarity for university presidents—to acquire a deep understanding of educational technology.
Universities are generally conceived as vertically integrated entities, but as soon as families realize that they do not have to pay for unadorned content, the natural question is: “Well, exactly what do I have to pay for?” This is the central concern in chapter 2. Affordable quality is a goal of the Revolution, but it is achieved through an unbundling of a university’s value proposition that allows students to pay only for the value they receive. This is an innocent-sounding although in fact dramatic shift in the landscape of higher education.
Most universities misjudge their own brands, mistaking a longtime monopoly on access to top students for value.
Universities ‘decline charity research grants due to fall in public funding’. Not certain about the examples, but you cannot solve the crisis in HigherEd unless you look hard at the money flows, and the various cross subsidies. In medicine — until I have seen hard data to the contrary — I will stick to my view that this is a bigger problem than for other parts of the university. As DeMillo points out (ibid): ‘In fact, the rates paid by research sponsors are kept artificially low by cross-platform subsidies. Who subsidizes low research prices?’ For the Ivy League, the subsidy is from endowments; for many of the rest, it is tuition fees.
Stanford Encyclopedia of Philosophy. This free online encyclopedia has achieved what Wikipedia can only dream of. Not entirely fair, but Wikipedia just isn’t sufficient for all, all of the time. But imagine how much poorer the world would be without it. And yes, Stanford is giving us this encyclopedia for nothing, but note the following:
“Our grant application days are over,” says Zalta. “We are practically self-sufficient as long as we don’t try to grow too much or too fast.”
Now this chimed with something I read in Nature about the re-financing of the Scripps Institute. The quote was:
“Looking forward, I think many scientists realize that NIH funding is a good thing if you have it, but it’s not sustainable,” says organic chemist Phil Baran, who was on the search committee that selected Kay and Schultz. “What is stable are endowments, which you build by having products that give you proceeds, and by philanthropy. You get philanthropy by doing the best science, so that’s why there is such frenzied competition for the brightest minds.”
Many years ago I gave one of the President’s Council Guest Lectures at Cold Spring Harbor (I forget the exact title). The audience comprised people who earned more by the day than I did in a lifetime, but I remember how nobody considered funding a PhD here or a project there; instead it was taken as given that meaningful funding had to be endowed, so as to ensure secure long term funding — funding to play with, in the best sense of the word. Subsistence societies do not produce great artefacts, or produce the sort of culture that is science (or most types of non-dogmatic learning, for that matter). Money flows matter. And our traditional funding streams are broken. And my University Chair is endowed, but I suspect the pot of money has been used to cross-subsidise something else.
Viewing Exemplars of Melanomas and Benign Mimics of Melanoma Modestly Improves Diagnostic Skills in Comparison with the ABCD Method and Other Image-based Methods for Lay Identification of Melanoma
Ella Cornell1#, Karen Robertson2#, Robert D. McIntosh1 and Jonathan L. Rees2, Departments of 1Human Cognitive Neuroscience, Psychology and 2Dermatology, University of Edinburgh, UK. #These authors contributed equally to this paper and should be considered as first authors.
Using an experimental task in which lay persons were asked to distinguish between 30 images of melanomas and common mimics of melanoma, we compared various training strategies including the ABC(D) method, use of images of both melanomas and mimics of melanoma, and alternative methods of choosing training image exemplars. Based on a sample size of 976 persons, and an online experimental task, we show that all the positive training approaches increased diagnostic sensitivity when compared with no training, but only the simultaneous use of melanoma and benign exemplars, as chosen by experts, increased specificity and diagnostic accuracy. The ABCD method and use of melanoma exemplar images chosen by laypersons decreased specificity in comparison with the control. The method of choosing exemplar images is important. The levels of change in performance are however very modest, with an increase in accuracy between control and best-performing strategy of only 9%. Key words: skin cancer; melanoma; melanocytic nevi; seborrheic keratosis; diagnosis.
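For readers less familiar with the metrics in the abstract, sensitivity, specificity and accuracy all fall out of a simple confusion-matrix calculation. The counts below are invented for illustration and are not taken from the study:

```python
# Confusion-matrix arithmetic for the metrics used in the abstract.
# Counts are invented for the example, not data from the study.
tp, fn = 12, 3   # melanomas correctly / incorrectly classified
tn, fp = 10, 5   # benign mimics correctly / incorrectly classified

sensitivity = tp / (tp + fn)                 # fraction of melanomas caught
specificity = tn / (tn + fp)                 # fraction of mimics correctly cleared
accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall fraction correct

print(f"{sensitivity:.2f} {specificity:.2f} {accuracy:.2f}")  # → 0.80 0.67 0.73
```

The study’s finding that some training raised sensitivity while lowering specificity is exactly the trade-off this arithmetic makes visible: catching more melanomas is easy if you are willing to flag more harmless mimics.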
I was going to ignore this, but I have just chanced upon some more comments about it, so writing therapy is required. The article with the above title was written by Richard Smith, and published in the THE a few weeks back. I always worry when I agree with Richard Smith’s arguments, but on this topic he is largely correct:
Yet peer review persists because of vested interests. Absurdly, academic credit is measured by where people publish, holding back scientists from simply posting their studies online rather than publishing in journals. Publishers of science journals, both commercial and society, are making returns of up to 30 per cent and journals employ thousands of people. As John Maynard Keynes observed, it is impossible to convince somebody of the value of an innovation if his or her job depends on maintaining the status quo.
Scrapping peer review may sound radical, but actually by doing so we would be returning to the origins of science. Before journals existed, scientists gathered together, presented their studies and critiqued them. The web allows us to do that on a global scale.
But I think he gets a few things dangerously wrong.
First, journals have taken over ‘peer review’, forgetting that its original meaning was judgment by one’s peers. It was always silly to imagine that 2–10 people judging a MS could provide robust scrutiny or recognition of excellence. Peer review is why we recognise Watson and Crick, or Wiesel and Hubel, a half century on. Journal editors use this intellectual sleight to make their organs appear more powerful, when in point of fact many journals work outside the real community of science.
Second, he states, ‘The most cited paper in Plos Medicine, which was written by Stanford University’s John Ioannidis, shows that most published research findings are false.’ This is simply untrue. Ioannidis’s study was limited to medicine and largely concerned papers that tried to define truth using ‘probability management’. One of the reasons we are in this mess is that we have confused what I would call academic A/B testing with science. The former is, at best, technology assessment, and however important it is for medicine, it is not science, or at least not science if we define science as the attempt to produce broad and deep conceptual understanding of the natural world. You cannot do science without an attempt to produce theories about how the world works. That is why most RCTs fail as science: they are not obsessed with the structure of the world. They are frequently expensive forms of A/B testing.
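To make the contrast concrete, here is a minimal sketch of what I mean by academic A/B testing: two arms, one outcome, one test statistic that tells you whether the arms differ but nothing about why. The counts are wholly hypothetical.

```python
from math import sqrt

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z statistic for comparing event rates in two arms.

    This answers 'did arm A differ from arm B?', not 'how does the
    world work?' -- which is the distinction drawn above.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical trial: 60/100 respond on treatment, 45/100 on control.
z = two_proportion_z(60, 100, 45, 100)
print(round(z, 2))  # 2.12
```

A z of about 2 clears the conventional 0.05 threshold, and in much of the literature that is where the enquiry stops; no theory of the world is required, or produced.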
Third, he states: ‘Cochrane reviews, which gather systematically all available evidence, are the highest form of scientific evidence.’ They are not the highest form of scientific evidence; nor do they systematically gather evidence (much of the evidence is in people’s heads). What they frequently are, are reasonable attempts to summarise domains of knowledge in which we lack critical insight, and have to rely on probability management (as is often the case in medicine). You may have to do this, but nothing in a Cochrane review compares with, say, the theory of evolution, or the base structure of DNA, or how we define fitness, or the law of the conservation of energy, and so on. Probably the greatest series of scientific breakthroughs over the last half century, the breakthroughs that allow the medium I write on to exist, owe nothing to Cochrane. The more you need Cochrane, the less you can be confident about what you are saying.
Rant over. I feel better already.
“We thought if you found the gene that caused a disease you would simply be able to ‘drug it’. The technology had great promise, but then we realised it would take decades.”
This is from an article in the FT about biotech. The quote has more to do with layering dishonesty on dishonesty, than insight. Nobody who knew anything about disease thought that finding genes was a quick way to improving care. Rather, what those who professed this belief were doing, was talking up a movement, only to cash in their shares before the bubble burst, which most of those on the inside were aware would happen. This was always an investor strategy, rather than a credible view about science or medicine. There will be more bubbles. Here is another.
I like this because it shows how discovery should and can work, and how it all relies on a generosity of spirit (just as a reader must give an author some slack). Also because, several years later, when I was first shown a browser while visiting NIH, I just didn’t ‘get it’. (The music is a bit loud, but persist…)
“After earning his medical degree in 1951 he trained in hospitals in Montreal. “To my surprise I also found I enjoyed clinical medicine,” he wrote in his Nobel prize biography. Then he quipped, “It took three years of hospital training after graduation, a year of internship and two of residency in neurology, before that interest finally wore off.”” From an obit of David Hubel. [direct link to this quote]
From an article in press, based on a talk I gave last year, in which I mull over some of the problems of modern medicine: a dysfunctional research system; a failure to consider what types of rationality underpin medicine; and a broadside on medical education. [Yes, modesty doesn’t figure.] Here is a flavour:
There is a phrase used by engineers, that goes roughly as follows: physics was made by God, but to engineer is human. Engineers have of course to be cognisant of the laws of nature, but that is only part of what they need to know in order to build artefacts. This applies to medicine too. Science may tell us about how the natural world works, but we want more than that, because we also have to know how science and society work. We want to intervene, and the test of our knowledge is how well we can do this in the social and economic systems we inhabit. UVR causes most skin cancer, but if the patient in front of us already has a thick nodular melanoma, we have to ask: was it inevitable that they would present when they did, and what can we do for them now? But the answers to these questions do not just depend on biology. Medical engineers not only have to talk the language of molecules, but also the language of health insurers. Furthermore, medical engineers also have to know how to engineer their own cultures of discovery, along the way.
Interesting aside in Science. To which I would add: arrogance before men, humility before your subject.
At the time, peer review by anonymous outside experts was beginning to take hold among journals in the United States. Einstein, however, wasn’t used to it: Until he left Germany 3 years earlier, he had regularly published in German journals without external peer review. He was indignant when he learned that his paper had received a critical review, and he withdrew it in a huff. “We (Mr. Rosen and I) … had not authorized you to show it to specialists before it is printed,” he wrote to the editor. “I see no reason to address the—in any case erroneous—comments of your anonymous expert.” He and Rosen submitted the paper to another journal, the Journal of the Franklin Institute, without change.
Yet before it was printed, Einstein revised the manuscript, retitling it “On Gravitational Waves.” It now came to the opposite conclusion: that gravitational waves were possible. The unidentified referee had pointed out a legitimate flaw in the original paper. Historians have recently confirmed that the referee was Howard Percy Robertson of Princeton University. After his anonymous criticisms were ignored, Robertson had delicately approached Einstein and convinced him of his error.
Even though peer review had helped Einstein save face, he stuck to his guns and never published another scientific paper in the Physical Review.
There are areas of medical research that are more legitimate than others. Finding genes is OK, as is finding drugs. But other bits of the health enterprise seem immune to rational scrutiny. One area that has bugged me for years is the way hospitals, charities, professional bodies and others plaster images of skin cancer around, on the assumption that this will ‘make things better’. It might, or it might not. So here is a recent paper of ours on this topic. Take-away message: if there is a fixed resource envelope, many if not most strategies may make matters worse. The abstract is below. The images will of course remain up; I am not certain the rationale is the obvious one.
Abstract: Using an experimental task in which lay persons were asked to distinguish between 30 images of melanomas and common mimics of melanoma, we compared various training strategies including the ABC(D) method, use of images of both melanomas and mimics of melanoma, and alternative methods of choosing training image exemplars. Based on a sample size of 976 persons, and an online experimental task, we show that all the positive training approaches increased diagnostic sensitivity when compared with no training, but only the simultaneous use of melanoma and benign exemplars, as chosen by experts, increased specificity and diagnostic accuracy. The ABCD method and use of melanoma exemplar images chosen by laypersons decreased specificity in comparison with the control. The method of choosing exemplar images is important. The levels of change in performance are however very modest, with an increase in accuracy between control and best-performing strategy of only 9%.
Authors: Ella Cornell, Karen Robertson, Robert D. McIntosh, Jonathan L. Rees
Full open access paper here.
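The pattern in the abstract, sensitivity up, specificity down, accuracy barely moved, is easy to see with a toy confusion matrix. A back-of-envelope sketch (mine, not the paper’s analysis), with wholly hypothetical counts:

```python
# All counts below are invented for illustration; nothing here comes
# from the paper's data.

def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sensitivity = tp / (tp + fn)                # melanomas correctly flagged
    specificity = tn / (tn + fp)                # benign mimics correctly passed over
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 15 melanomas and 15 mimics, before and after a hypothetical training:
before = diagnostic_metrics(tp=9, fn=6, tn=12, fp=3)
after = diagnostic_metrics(tp=13, fn=2, tn=9, fp=6)

print([round(x, 2) for x in before])  # [0.6, 0.8, 0.7]
print([round(x, 2) for x in after])   # [0.87, 0.6, 0.73]
```

Training that teaches people to call more lesions melanoma buys sensitivity at the cost of specificity, so overall accuracy moves very little; which is why the choice of exemplar images, not the enthusiasm for training, is where the action is.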