The (medical) future is here, just unevenly distributed
The lessons from Glybera, the first gene therapy to be sold in Europe, still loom large. It cures a genetic condition that causes a dangerously high amount of fat to build up in the blood system. Priced at $1m, the product has only been bought once since 2012 and stands out as a commercial disaster. Economist
“This is a terrific essay. The keystone of science’s power and the continued survival of a civilisation based on — and at the mercy of — science, is contained in the following:
‘As Jacob Bronowski (1956) said – in science truth is all-of-a-piece: either we are truthful always and about everything; or else the dishonesty ramifies, the rot spreads, and rapidly we are being honest about nothing.’
External audit, as we have seen over the last quarter century in many human domains, does not work. All too often it is merely a tool for rendering deceit invisible. Integrity is not a bolt on for our survival, but a bit of our biological machinery that is struggling against the loss of the ‘personal’.
If we look back to the writings of Merton, Lewis Thomas, Peter Medawar, John Ziman, and the like, it is clear we lack a coherent and deep view of what has happened to modern science and — because science is integral to the modern world — our civilisation. This essay sets the tone for what must follow.”
Derek Bok states that some of those who were found guilty of criminal acts in the recent waves of corporate malfeasance in the US scored very well on their ethics modules at Harvard. It is easy (and facile) to imagine that somehow doing a ‘course’ on a particular topic will produce a change in behaviour that is permanent and withstands countervailing forces (culture eats strategy, and culture eats morality, etc., I hear you say). Those in universities should of course know better — producing changes in behaviour in response to an environmental stimulus is a paraphrase of one definition of learning. But the message doesn’t get through, largely because the academy has increasingly chosen to turn its professional tools away from examination of its own purpose. It is deemed rude to ask for evidence when everybody knows the sun goes round the earth.
Nor, if we are to believe Timothy Wilson, should we go in with the ‘null’ hypothesis that courses wishing to eradicate ‘isms’ can only be beneficial. The evidence points in a different direction: they make some people’s behaviour worse. I sometimes wonder if anybody is really worried about whether these interventions work — they just want to tick boxes to comply with yet more rituals of verification (to use Michael Power’s phrase from The Audit Society).
Anyway, these ramblings were by way of introduction to what is for me one of the clearest expositions of morality and the human condition. I have no idea why I cannot keep it out of my mind, but maybe putting it down in writing might help. It comes from a short article by Jacob Bronowski, in a posthumous collection of his essays, ‘A Sense of the Future’. The article is “A moral for an age of plenty” and it includes an account of the death of the physicist Louis Slotin.
Louis Slotin was a physicist in his mid thirties, working at Los Alamos in 1946. Bronowski described him so: ‘Slotin was good with his hands; he liked using his head; he was bright and a little daring — in short, he was like any other man anywhere who is happy in his work’. Just so.
Slotin was moving bits of plutonium closer together, but for obvious reasons, not too close. And as experts are tempted to do, he was using a screwdriver. His hand slipped. The monitors went through the roof. He immediately pulled the pieces of plutonium apart, and asked everybody to mark their precise positions at the time of the accident. The former meant he would die (9 days later, as it turned out); the latter allowed him to prognosticate on what would happen to the others (they survived).
There are two things that make up morality. One is the sense that other people matter: the sense of common loyalty… The other is a clear judgement of what is at stake: a cold knowledge, without a trace of deception, of precisely what will happen to oneself and to others if one plays the hero or the coward. This is the highest morality: to combine human love with an unflinching, a scientific judgement.
I actually think we are more lacking in the second than the first. Worse still, we are less tolerant of evidence than we once were: we prefer to wallow smugly in our self-congratulatory goodness. We have been here before. Medicine only became useful when physicians learned this lesson.
[ And yes, people remarked that Slotin hadn’t followed protocol…]
A nice story in Nature about two giants: Jerome Bruner and the Turing award winner, Alan Kay.
Jerry made seminal contributions to an astonishing number of fields — each a stop on the road to finding out what makes us human. Beginning in the 1960s, computer simulations became the model of the human mind in cognitive psychology, with researchers trying to simulate how humans solve problems, form concepts, comprehend language and learn. But reducing humans to computers was antithetical to Jerry’s humanistic perspective.
Given this, it was surprising that computer scientist Alan Kay, the designer of what became the Macintosh graphical user interface, turned up more than 30 years ago on Bruner’s Manhattan doorstep with a gift of a Macintosh computer. Jerry’s ideas of representing information through actions, icons and symbols, central to his theory of cognitive development, had inspired Kay to get users (even children) to act (through a computer mouse) on icons, enabling the use of an abstract set of symbols (computer program). This was the foundation for what became the Macintosh interface.
One other line in the obituary by Patricia Marks Greenfield stood out:
In 1972, Bruner sailed his boat across the Atlantic to take up the first Watts Professorship of Psychology at the University of Oxford, UK.
I guess the removal expenses were as stingy then as now.
“I have never kept count of the many inventions I made but it must run into the hundreds. Most of them were trivial, such as a wax pencil that would write clearly on cold wet glassware straight from a refrigerator. It was published as one of my first letters to Nature in 1945.”
“A Rough Ride to the Future” by James Lovelock. Blake said it: it is all about ‘minute particulars’. Of a piece.
No, not ‘up North’, but a neat way to check whether people have been sloppy or dishonest. The following is from the Economist:
The GRIM test, short for granularity-related inconsistency of means, is a simple way of checking whether the results of small studies of the sort beloved of psychologists (those with fewer than 100 participants) could be correct, even in principle.
Full PeerJ reprint here.
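The arithmetic behind GRIM is simple enough to sketch. A mean of n integer-scored responses must be an integer sum divided by n, so a mean reported to a given number of decimal places either can or cannot arise from some integer sum. The function below is a minimal sketch of that check, not the authors’ own implementation; real-world use needs care over rounding conventions (Python’s round uses banker’s rounding on ties), multi-item scales, and transcription errors.

```python
import math

def grim_consistent(reported_mean: str, n: int) -> bool:
    """GRIM check: could `reported_mean` (as printed, e.g. "2.47")
    arise as the mean of n integer-valued responses?

    The true sum of n integer responses is itself an integer, so we
    test whether an integer sum near mean*n rounds back to the
    reported mean at the reported precision.
    """
    decimals = len(reported_mean.split(".")[1]) if "." in reported_mean else 0
    target = float(reported_mean) * n
    # Only the integers bracketing mean*n can round back correctly.
    for candidate_sum in (math.floor(target), math.ceil(target)):
        if round(candidate_sum / n, decimals) == float(reported_mean):
            return True
    return False

# A mean of 2.47 from 17 participants is possible (42/17 ≈ 2.47)...
print(grim_consistent("2.47", 17))   # True
# ...but 3.45 from 13 participants is not: no integer sum works.
print(grim_consistent("3.45", 13))   # False
```

One caveat worth noting: for scales where each participant answers several integer-scored items, n above should be replaced by participants × items, since that is the count of integers entering the sum.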
Daniel Sarewitz in Nature
The quality problem has been widely recognized in cancer science, in which many cell lines used for research turn out to be contaminated. For example, a breast-cancer cell line used in more than 1,000 published studies actually turned out to have been a melanoma cell line. The average biomedical research paper gets cited between 10 and 20 times in 5 years, and as many as one-third of all cell lines used in research are thought to be contaminated, so the arithmetic is easy enough to do: by one estimate, 10,000 published papers a year cite work based on contaminated cancer cell lines. Metastasis has spread to the cancer literature… That problem is likely to be worse in policy-relevant fields such as nutrition, education, epidemiology and economics, in which the science is often uncertain and the societal stakes can be high
See this great piece by Bruce Charlton
Professional science has arrived at this state in which the typical researcher feels free to indulge in unrestrained careerism, while blandly assuming that the ‘systems’ of science will somehow transmute the dross of his own contribution into the gold of truth. It does not: hence the preponderance of irreproducible publications.
Two articles about Sci-hub (here and here). No, I am not encouraging illegal downloading. But I hope we can look back in a few years with shame at the way journals, their publishers and those who have a vested interest in the mismeasure of science have hindered educational advance, and wasted public money. Some specialty journals do indeed pour money back into their subject, but it is a minority. All too often medical journals are a way of making money for publishers and specialist societies. There will be an iTunes moment (I hope).
I am not certain when I learned a little about sign language, probably from Steven Pinker’s, ‘The language instinct’. But it is absolutely fascinating, and its study — it seems to me — is yet another one of the almost endless arguments for letting academics play: the world in a grain of sand. There is a short article in this week’s Science:
The use of new parts also makes language more efficient: The youngest ISL signers can express themselves much faster than the oldest—153.2 signs per minute compared with 103.5 signs per minute.
The findings also show that social interaction is essential for language evolution. When a new generation establishes a system for signing, Sandler says, it stays more or less the same as its members age. Her work has shown that when young signers enter a community, they add complexity through experimentation with their peers in what she calls “a social game.” The more players, the more innovations.
You could say the same for science in general, or any branch of human culture. Which is why many of us worry.
Here is number 3. My favourite bit of skin biology.
Here is video number 2 in the ‘clinical’ skin biology series.
I have been busy producing and updating some videos. Here is the first in a series on skin biology.
Some Neanderthals would — based on MC1R sequence — be expected to have red hair. What has always caused me confusion is the way that dates for everything to do with human paleohistory, and the various representations of our evolution, are revised based on n of 1 publications. No doubt the story will get easier, but I think silence for a while on the ‘greatest story ever told’ would be in order. At least from me.
Note added: And then…..
Venki Ramakrishnan was on the radio the other day. I cannot remember his exact words but they were something to the effect that he wanted ‘not to generate lots of data, but instead, lots of understanding’. Says it all.
It’s no secret that therapies that look promising in mice rarely work in people. But too often, experimental treatments that succeed in one mouse population do not even work in other mice, suggesting that many rodent studies may be flawed from the start. Nature
As it says, ‘no secret’. Science is usually self-correcting, but the time period may vary. What has always puzzled me is how, in the areas of biology I know something about, mouse work has been so informative; whereas in others, all it seems to be good for is publication in high-impact journals. For those interested in pigmentation, mice have been wonderfully informative, whereas for those other bits of skin biology I am familiar with (ahem), like inflammation, mice have been less helpful. A part of me wonders whether some of this depends on whether you are trying to identify potential pathways, or to build interventions based on particular pathways. And finally, lest there be any confusion, I am not one of those who believes we haven’t learned a lot from animals.
Remember those compare-and-contrast questions (UC versus Crohn’s; DLE versus LP, etc.)? Well, look at these two quotes from articles in the same edition of Nature.
The first is from the tsunami of papers showing that ‘Something is rotten in the state of Denmark Science’ — essentially that the Mertonian norms of science have been well and truly trampled over.
Journals charge authors to correct others’ mistakes. For one article that we believed contained an invalidating error, our options were to post a comment in an online commenting system or pay a ‘discounted’ submission fee of US$1,716. With another journal from the same publisher, the fee was £1,470 (US$2,100) to publish a letter. Letters from the journal advised that “we are unable to take editorial considerations into account when assessing waiver requests, only the author’s documented ability to pay”.
Discrete Analysis’ [the journal] costs are only $10 per submitted paper, says Gowers; money required to make use of Scholastica, software that was developed at the University of Chicago in Illinois for managing peer review and for setting up journal websites. (The journal also relies on the continued existence of arXiv, whose running costs amount to less than $10 per paper.) A grant from the University of Cambridge will cover the cost of the first 500 or so submissions, after which Gowers hopes to find additional funding or ask researchers for a submission fee.
Well done the Universities of Cambridge and Cornell (arXiv). For science, the way forward is clear. But for much clinical medicine, including much of my own field, we need to break down the barriers between publication and posting online information that others may find useful. This cannot happen until the financial costs approximate to zero.
From Alan Kay. If it comes from a Turing award winner, maybe people might take notice. Perhaps not.
Patients are willing to pay, and pay dearly: the HeartSheet treatment costs nearly ¥15 million (US$122,000). Last month, the health ministry added it to the procedures covered by national health insurance, which will help. But patients still pay 10–30% of the cost for a drug that is not known to be effective. As they do so, they basically subsidize the company’s clinical trial.
Japan has turned the drug-discovery model on its head. Usually, the investment — and thus the risk — is borne by drug companies, because they stand to gain in the long run. Now the risk is being outsourced. By the time it is clear whether a treatment works or not, the companies will have already made revenue from it.
‘Now Weinberg has added another credential to his crowded vita: historian of science. In his past writings, he had mainly concerned himself with the modern era of physics and astronomy, from the late nineteenth century to the present—a time, he says, when “the goals and standards of physical science have not materially changed.” Yet to appreciate how those goals and standards took shape, he realized he would have to dig deeper into the history of science. So, “as is natural for an academic,” he volunteered to teach a course on the subject—in this case, to undergraduates with no special background in science or mathematics. Then he immersed himself in the primary and secondary literature. The result is To Explain the World, which takes us all the way from the first glimmerings of science in ancient Greece, through the medieval world, both Christian and Islamic, and down to the Newtonian revolution and beyond.’
In a review, by Jim Holt, of ‘To Explain the World: The Discovery of Modern Science’, by Steven Weinberg.
The problem is that this is no longer natural or even encouraged of an academic. And if the writings of Weinberg I have read are anything to go by, this course must have been something special. I can remember the late John Ziman telling me that, having been appointed to a lectureship in physics at Cambridge, he realised that there was no suitable text for his Cambridge undergraduates in the area that interested him. So, he spent two years writing such a text (which sold well for many years, he added). He observed that no UK university would now consider such behaviour appropriate: what about the REF! This tells us something about great thinkers, deep domain expertise, and how explanatory ability is the crux of great teaching. And about universities, and their troubled relation with teaching — and academics.
Wonderful retrospective in the Economist on one of the intellectual triumphs of the twentieth century. A few quotes give the flavour of real advance without an impact statement in sight.
General relativity was presented to the Prussian Academy of Sciences over the course of four lectures in November 1915; it was published on December 2nd that year. The theory explained, to begin with, remarkably little, and unlike quantum theory, the only comparable revolution in 20th-century physics, it offered no insights into the issues that physicists of the time cared about most.
Some people might be satisfied just to let each theory be used for what it is good for and to worry no further. But people like that do not become theoretical physicists.
A century ago general relativity answered no-one’s questions except its creator’s
There is a touching video of Marvin Minsky here. Steven Levy’s wonderful book on how some of this revolution took place is compelling reading (as in the Model Railroad club). You have to wonder how and why so much fundamental and successful (‘an important discovery every few days’) work was done in such a short period of time, with so little money. And across the pond, and elsewhere the biological revolution that dominated the second half of the 20th century was being laid out with even less resource. Not so much ‘events, dear boy, events’ but ‘ideas, dear kids, ideas’.
My intercalated degree (1980) was centred around epidemiology, and analysis of what was then a large dataset. So, I had to learn some FORTRAN and file editing on an IBM 360, and find my way around an MTS system, all in the days of punchcards and line printers, with only one ugly green-on-black monitor to share across a unit. Mostly, I wasted large amounts of printout and got used to retrieving my batch-processed requests early in the morning, with the comments ‘run aborted’, ‘system error’, etc. I had to wait another 24 hours to find out if I had corrected the errors in my programs. I remember the first time I looked at the GLIM manuals, wondering if I was attempting to read them upside down.
Years later, my interest in statistics resurfaced, mainly as a response to the banality of much of the EBM crowd, with their NNTs and their apparent lack of understanding of what I disparagingly call ‘probability management’. Most EBM merchants are frustrated chartered accountants. Real science is involved with understanding reality by creating models of how the world works: if you don’t do that, you are not doing science, but rather the D of R+D, or ‘technology assessment’. The theory of how you should do the latter is a proper academic pursuit, but using these ‘products’ is not a matter for the academy — although it is important for many businesses or professions. Using what others invented is what doctors do in the clinic, but it does not count as research; rather, it is honest professional toil (and very valuable work at that, compared with, say, much business).
But statistics is hard, and frequently counterintuitive. We do not teach it well to medical students and for all the mantra about doctors’ communication skills, fluency with statistics is a core medical skill, and in many situations, the key communication skill doctors should possess. If you want your students to communicate well, do not stray far without mentioning binomial, poisson, Bayes etc.
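As a concrete instance of the kind of fluency argued for here, consider explaining a positive screening test with Bayes’ theorem. The numbers below are hypothetical, chosen only to show how a test with impressive sensitivity and specificity can still leave most positive results false when the condition is rare; a minimal sketch, not advice about any real test.

```python
# Hypothetical diagnostic test applied to a low-prevalence condition.
sensitivity = 0.90   # P(test+ | disease)
specificity = 0.95   # P(test- | no disease)
prevalence = 0.01    # P(disease) before testing

# Total probability of a positive test:
# P(+) = P(+|D)P(D) + P(+|not D)P(not D)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem: probability of disease given a positive test.
ppv = sensitivity * prevalence / p_positive
print(f"P(disease | positive test) = {ppv:.0%}")
```

In this scenario only around 15 of every 100 positive tests reflect true disease, despite the ‘90% sensitive’ label — exactly the sort of statistic a doctor most needs to be able to communicate.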
What follows is a comment from Sander Greenland on Deborah Mayo‘s excellent site, Error Statistics. I do not know Greenland (although we have emailed each other in the distant past), but I think he is somebody who is always worth listening to. There are a couple of points he makes that chime with me, and they relate to both teaching and the ‘crisis’ in much medical (and scientific) research. He writes:
“My view is that stats in soft sciences (medicine, health, social sciences among others) has been a massive educational and ergonomic failure, often self-blinded to the limits on time and capabilities of most teachers, students, and users. I suspect the reason may be that modern stats was developed and promulgated by a British elite whose students and colleagues were selected by their system to be the very best and brightest, a tiny fraction of a percent of the population. Furthermore, it was developed for fields where sequences of experiments leading to unambiguous answers could be carried out relatively quickly (over several years at most, not decades) so that the most serious errors could be detected and controlled, not left as part of the uncertainty surrounding the topic.”
First, we have to accept that we have failed. Second, all too often we are ‘self-blinded to the limits on time and capabilities of most teachers, students, and users’. This is a widespread problem in the undergraduate medical curriculum, more generally. Students must be researchers, teachers, scholars etc. All too often this is one giant GMC inspired delusion, fuelled by either the NHS (yes, I know there is no longer one NHS), or speciality groups (my organ is bigger than your organ….). Third, much of higher education has not caught up with its audience (pace, elite higher education).
Finally, the sort of experimental science he is talking about (final para) is exactly the sort I love. This is the sort of work Brenner and Crick described when they and a handful of others invented molecular biology. Or, for an example from another field, look at how David Hubel and Torsten Wiesel described their work. But sadly, most medical research is no longer like this. It is much, much duller, and much less intellectually secure, because many built-in tests of veracity through experimental design and approach have been replaced by audit of process. Does it have to be this way? A silent, bleak voice tells me yes. As for the teaching, that is a problem we can do something about.
[I like his use of ergonomic, too]
I am at the ESDR 2015 in Rotterdam. Strange sort of place, I haven’t been here before, but I guess there was a lot of war damage (the exact details of which have subsequently been explained to me by a Dutch colleague, Marcel Jonkman). Many parts of the city look like a toy town with buildings that I first saw in comic books when I was a kid. I find it hard to imagine it being a home — but then I don’t live here. On the other hand, lots of nice bars and coffee shops; and bikes. And the Dutch always seem so friendly, and possess a great sense of humour (meaning I tend to laugh, when they laugh).
As for the meeting, well… Leo Szilard suggested many years ago that presenters should just stand up and announce their conclusions, in a sentence or two. If the audience think the conclusions reasonable, the presenter should quickly sit down again. On the other hand (and he was a physicist), if the conclusions sounded unreasonable, then they would need to spend time justifying their thinking (or their data). In biology, or medicine, this approach is problematic. Treatment X works better than treatment Y? Not much to say, except that this argument illustrates why so much of scientific conferences no longer consists of science. Second, even biology has such myriad pathways that it is hard to frame experiments as tests of hypotheses, rather than stamp collecting. Biology is just messy, and I remain amazed that we can discover anything useful at all. But we can, and we do.
There was a nice keynote by Cédric Blanpain on tumour heterogeneity. I once claimed to know something about this, but technology has moved on since I was last reading in the area. Nonetheless, at the risk of sounding like somebody who always says there is nothing new under the sun, I think many of the key questions that people were framing 25 years ago still remain unanswered: how do we deal with molecular heterogeneity (and the drivers of tumour heterogeneity), and how do we therapeutically get round this issue? Is there a bigger problem in cancer?
As for the Szilard approach, the ESDR have tried, and it works. Short one or two sentence statements, ending with ‘if you want to find out why, visit my poster’. Neat. And posters can be fun.
Everywhere I look I see stuff about how ‘unreliable’ much science might be. For instance see here, here and here. In truth most of this is about the recent paper on the lack of reproducibility of some outputs [irony intended] in psychology journals. John Ioannidis writes:
Multiple lines of evidence suggest this is a recipe for disaster, leading to a scientific literature littered with long chains of irreproducible results.
I do not have much original to add to this debate, but find none of it very surprising. Go back and read the ‘Double Helix’, or look at how David Hubel and Torsten Wiesel described how they changed the world. I think of it in the following terms.
If people use probability management as the sole marker of truth, you have left science — that is, science as the attempt to discover the deeper structures that underpin reality — behind. Science is about reliable knowledge, and if what you do is not reliable, you have not been doing science. There is a lot of activity going on that is not science. Much is cargo-cult science; much is testing; much is effectively the D of R+D, where we are playing with and abusing Type 1 and Type 2 errors (Neyman–Pearson errors, rather than Fisher); much is the sort of thing any large logistics or commercial company does, but they probably have more humility. And it is why I have argued that much medical research (sic) could be usefully outsourced to Amazon: it’s logistics, stupid. (See, for instance, Of supermarkets and medical science: getting the product lines right.) The Feynman quote that I recall: when, during the Challenger enquiry, Richard Feynman was told that the chance of an error in a certain process was 1 in 10^5, he retorted, “if a man tells me the chance of failure is 1 in 10^5, I know he is full of crap”.
Science relies on truth seeking; integrity; and honesty. It is not just a career. You have to have an attitude to your subject. Arrogance before men; humility before your subject (Bronowski). Most good science is iterative. You want to see the world honestly because your next bit of work depends on how you have interpreted what you did yesterday. In one sense, you publish for yourself. The hardest judge must be you and your reputation, not some anonymous reviewer. Your peers are not the people who review your papers, but those people you do not know who decide whether you are in the books 20 years later. Think of dietary observational epidemiology: how much will survive?
When publication is no longer a matter of record, but of career advancement, we are doomed to endlessly inventing ways to discover who has been debasing our currency. Institutions matter, not so much the bricks and mortar ones, but the way science is organised. John Ziman foresaw the mess we are getting into in his book Real Science, and elsewhere:
What is more, science is no longer what it was when Merton first wrote about it. The bureaucratic engine of policy is shattering the traditional normative frame. Big science has become a novel way of life, with its own conventions and practices. What price now those noble norms? Tied without tenure into a system of projects and proposals, budgets and assessments, how open, how disinterested, how self-critical, how riskily original can one afford to be?
He was of course talking about Robert Merton’s essay (J. Legal and Political Sociology 1, 115–262; 1942). He went on to say:
There is no going back to that world we have lost. Anyway, science has never had it so good as in this past half-century, and is still going great guns. But soon it will be offered a new contract with society. To renegotiate that contract with its eyes open, on even terms, science will need to understand itself much better. That understanding is going to require, not adherence to an obsolete ethos, but a sharp but sympathetic sociological self-analysis. That is the unfinished business that Merton’s little paper began.
This is where we are. We need to find such a new contract. Bruce Charlton had the right diagnosis a long time ago (Not even trying: the corruption of real science).
One of the dangers of focussing on learning natural science is that you may become oblivious to all the unnatural science that may shape human society to a greater degree. I am talking here of the rules and institutions that we create to organise our combined social existence: money, the rules of a formal legal system, the created cultures and belief systems that underpin how we think society works, and so on (as in the conventional lies that get us through the day).
The Economist has a couple of pieces on patents, recounting their origins and the longstanding differences in opinion over their utility: how they can retard — as well as advance — enquiry. And we will put to one side for the moment the stupidities of modern patents on iPhones and ‘one-clicking’ etc
That is why proponents of patents see it as reasonable to let Bristol-Myers Squibb have a temporary monopoly on Opdivo, its new melanoma drug, and to charge $120,000 per course of treatment in America. If the company could not do so, the argument goes, it would not have spent a fortune on getting the drug and its complex manufacturing process approved. However, the history of the industry raises doubts about such arguments.
I was surprised — and I learned this from David Healy only a few years back — that until 1967, German companies could only patent the process by which they made a drug, and not the drug itself. (And the Economist states the Germans invented more drugs than the British.) The article adds:
Another interesting case is Italy, which had no patent protection for drugs until 1978. One study showed it invented a larger proportion of the world’s new medicines before that date than afterwards.
A while back I wrote a piece in the Lancet (Rees J. Patents and intellectual property: a salvation for patient-oriented research. Lancet. 2000;356:849–850, pdf here), pointing out that one side-effect of IP rules for medicine was that they tended to favour approaches that afforded IP protection over those that did not. I still think that observation is correct, although I suspect my solution owes more to irony than to how we should move forward. Just as for copyright, we are seeing the shaping of the legal landscape to protect incumbents and to discourage innovation. The article ends with sentiments I can agree with:
But a top-to-bottom re-examination of whether patents and other forms of intellectual-property protection actually do their job, and even whether they deserve to exist, is long overdue. Simple abolition raises problems in terms of the ethics of property rights (see leader). But reductions in the duration of exclusive rights and differentiation between those rights for different sorts of innovation are possible, and could be introduced in steps over a number of years, allowing plenty of time for any ill effects to surface. Experiments with other forms of financing innovation could be run alongside the patent system. If defenders of the patent system really seek to foster innovation, they should be prepared to do so in their own backyard.
William Harvey, better known for describing the circulation of the blood, wondered in the 17th century what could explain why children’s skin colour was often a blend of their parents’, whereas they share a sex with only one, and can have an eye colour different from either.
The central problems of human genetics. From a book review in the Economist. It has always fascinated me why Mendel modelled on the basis of discrete states, rather than thinking of traits as ‘blended’. My guess is that for this monk, sex was the answer: male or female, but not a little bit of either. [direct link to this aside]
70 years ago today the atomic bomb was dropped on Hiroshima. On Sunday, it was to be Nagasaki’s turn. The above title is from Jacob Bronowski’s book ‘Science and Human Values‘. He was part of an official visit to Nagasaki later in 1945. Two quotes from this book have never left me.
The gravest indictment that can be made of our generalized culture is, in fact, that it erodes our sense of the context in which judgements must be made. Let me end with a practical example. When I returned from the physical shock of Nagasaki, which I have described in the first page of this book, I tried to persuade my colleagues in governments and in the United Nations that Nagasaki should be preserved exactly as it was then. I wanted all future conferences on disarmament, and on other issues which weigh the fates of nations, to be held in that ashy, clinical sea of rubble. I still think as I did then, that only in this forbidding context could statesmen make realistic judgements of the problems which they handle on our behalf. Alas, my official colleagues thought nothing of my scheme; on the contrary, they pointed out to me that delegates would be uncomfortable in Nagasaki…
Nothing happened in 1945 except that we changed the scale of our indifference to man; and conscience, in revenge, for an instant became immediate to us. Before this immediacy fades in a sequence of televised atomic tests, let us acknowledge our subject for what it is: civilization face to face with its own implications. The implications are both the industrial slum which Nagasaki was before it was bombed, and the ashy desolation which the bomb made of the slum.
Bronowski always wanted to build a natural philosophy fit for our age. It is our tragedy that he (and we) have failed.