The following is an excerpt from a review in press with Acta. You can see the full article with DOI 10.2340/00015555-2916 here
From the solar constant to thong bikinis and all stops in between.
A review of: “Sun Protection: A risk management approach.” Brian Diffey. IOP Publishing, Bristol, UK. ISBN 978-0-7503-1377-3 (ebook) ISBN 978-0-7503-1378-0 (print) ISBN 978-0-7503-1379-7 (mobi)
Leo Szilard was one of half a dozen or so physical scientists who, having attended the same Budapest gymnasium, revolutionised twentieth century physics. In 1934, whilst working in London, he realised that if one neutron hit an atom which then released two further neutrons, a chain reaction might ensue. Fearing the consequences, he tried to keep the discovery secret by assigning the patent to the British Admiralty. In 1939, he authored the letter that Einstein signed, warning the then US President of the coming impact of nuclear weapons.
After the war, in revulsion at the uses to which his physics had been applied, he swapped physics for biology. There was a drawback, however. Szilard liked to think in a hot bath, and he liked to think a lot. Once his interests had turned to biology he remarked that he could no longer enjoy a long uninterrupted bath — he was forever having to leave his bath, to check some factual detail (before returning to think some more). Biology seemed to lack the deep simplifying foundations of the Queen of Sciences.
Enrico Fermi was big on back-of-the-envelope calculations. I cannot match his brain, but I like playing with simple arithmetic. Here are some notes I made several years ago after reading a paper from Mistry et al in the British Journal of Cancer on cancer incidence projections for the UK.
For melanoma we will see a doubling between now (then) and 2030: half of this is an increase in age-specific incidence, and half is due to the changing age structure of the population. Numbers of cases for the UK:
If we assume we see 15 non-melanomas (mimics) for every melanoma, the number of OP visits with or without surgery is as follows.
This is for melanoma. The exponent for non-melanoma skin cancer is higher, so these numbers are an underestimate of the challenge we face. Once you add in ‘awareness campaigns’, things look even worse.
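The shape of this back-of-envelope arithmetic can be sketched in a few lines. The figures below are illustrative placeholders only, not the numbers from Mistry et al. or the tables omitted above:

```python
# Back-of-envelope sketch of the workload arithmetic above.
# All numbers are illustrative placeholders, NOT figures from
# Mistry et al. or the original post's (missing) tables.

melanoma_cases_now = 13_000               # placeholder: annual UK melanoma cases "now"
projected_2030 = melanoma_cases_now * 2   # the projected doubling by 2030

# Half of the increase from rising age-specific incidence,
# half from the changing age structure of the population.
increase = projected_2030 - melanoma_cases_now
from_incidence = increase / 2
from_age_structure = increase / 2

# If we see ~15 benign mimics (non-melanomas) per melanoma,
# each melanoma implies many more outpatient (OP) visits.
mimics_per_melanoma = 15
op_visits_2030 = projected_2030 * (1 + mimics_per_melanoma)

print(from_incidence, from_age_structure)  # 6500.0 6500.0
print(op_visits_2030)                      # 416000
```

The point is not the placeholder numbers but the multiplier: every projected melanoma drags a long tail of benign mimics through the clinic with it.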
At present perhaps 25% of consultant dermatology posts are empty (no applicants), and training numbers, and hence future staffing once working patterns are allowed for, are falling. Waiting times to see a dermatologist in parts of Wales are over a year. The only formal training many doctors receive in dermatology as undergraduates can be measured in days. Things are worse than at any time in my career. It is with relief that I say I am married to a dermatologist.
He is one of 10 case studies in Black Tudors, an enlightening and constantly surprising book about the men and women of African origin who found themselves on a cold island on the fringe of Europe amid a pale and pockmarked people.
One thing about trying to put the Internet and computing in context is that you are forced to look back at the history of other communication revolutions (pace Tim Wu, John Naughton etc). It is now a well-trodden path, but one I still find fascinating. Even down to the details of how the cost of distributing images or 3D moulages had an effect on my own specialty. The following caught my eye — or maybe my nose.
“When paper was embraced in Europe, it became arguably the continent’s earliest heavy industry. Fast-flowing streams (first in Fabriano, Italy, and then across the continent) powered massive drop-hammers that pounded cotton rags, which were being broken down by the ammonia from urine. The paper mills of Europe reeked, as dirty garments were pulped in a bath of human piss.”
No, still not finished but useable.
“On the business side, its advertising inventory, whether it is sold directly or via third parties, is made of different kinds of ads, ranging from high-value brands such as Rolex or Lexus to low-paying advertisers, like the toe fungus ads used to fill unsold spaces.”
You can’t sell news for what it costs to make – The Walkley Magazine – Medium
I am hard at work on a new version of skincancer909, so odd breaks from blogging will occur, as my deadline is fast approaching. Instead, to assuage my guilt, a few images will follow. They won’t compare with the quality of the masters (below), but the technology allows some of us to stand high on their shoulders.
I guess you could call this inverse dermatology.
My first publication was on eccrine sweating. I was known (for a while) as the ‘sweat guy’, too. Only at work, I note.
There was an interesting paper published in Nature recently on the topic of automated skin cancer diagnosis. Readers of my online work will know it is a topic close to my heart.
Here is the text of a guest editorial I wrote for Acta about the paper. Acta is a ‘legacy’ journal that made the leap to full OA under Anders Vahlquist’s supervision a few years back — it is therefore my favourite skin journal. This month’s edition is the first without a paper copy, existing online only. The link to the edited paper and references is here. I think this is the first paper in their first online-only edition :-). Software is indeed eating the world.
When I was a medical student close to graduation, Sam Shuster, then Professor of Dermatology in Newcastle, drew my attention to a paper that had just been published in Nature. The paper, from the laboratory of Robert Weinberg, described how DNA from human cancers could transform cells in culture (1). I tried reading the paper, but made little headway because the experimental methods were alien to me. Sam did better, because he could distinguish the underlying melody from the supporting orchestration. He told me that whilst there were often good papers in Nature, perhaps only once every ten years or so would you read a paper that would change both a field and the professional careers of many scientists. He was right. The paper by Weinberg was one of perhaps fewer than a dozen that defined an approach to the biology of human cancer that still resonates forty years later.
Revolutionary papers in science have one of two characteristics. They are either conceptual, offering a theory that is generative of future discovery — think DNA, and Watson and Crick. Or they are methodological, allowing what was once impossible to become almost trivial — think DNA sequencing or CRISPR technology. Revolutions in medicine are slightly different, however. Yes, of course, scientific advance changes medical practice, but to fully understand clinical medicine we need to add a third category of revolution. This third category comes from papers that change what doctors do every day and how they work. Examples would include fibreoptic instrumentation and modern imaging technology. To date, dermatology has escaped such revolutions, but a paper recently published in Nature suggests that our time may have come (2).
The core clinical skill of the dermatologist is categorising morphological states in a way that informs prognosis with, or without, a therapeutic intervention. Dermatologists are rightly proud of these perceptual skills, although we have little insight as to how this expertise is encoded in the human brain. Nor should we be smug about our abilities as, although the domains are different, the ability to classify objects in the natural world is shared by many animals, and often appears effortless. Formal systems of education may be human specific, but the cortical machinery that allows such learning, is widespread in nature.
There have been two broad approaches to trying to imitate these skills in silico. In the first, particular properties (shape, colour, texture etc.) are explicitly identified and, much as we might add variables to a linear regression equation, the information is used to discriminate between lesions in an explicit way. Think of the many papers using rule-based strategies such as the ABCD system (3). This is obviously not the way the human brain works: a moment’s reflection on how fast an expert can diagnose skin cancers, and how limited we are in handling formal mathematics, tells us that human perceptual skills do not work like this.
There is an alternative approach, one that to some extent almost seems like magic. The underlying metaphor is as follows. When a young child learns to distinguish between cats and dogs, we know the language of explicit rules is not used: children cannot handle multidimensional mathematical space or complicated symbolic logic. But feedback on what the child thinks allows the child to build up his or her own model of the two categories (cats versus dogs). With time, and with positive and negative feedback, the accuracy of the perceptual skills increases — but without any formal rules that the child could write down or share. And of course, since it is a human being we are talking about, we know all of this takes place within and between neurons.
Computing scientists started to model the way they believed collections of neurons worked over four decades ago. In particular, it became clear that groups of in silico neurons could order the world based on positive and negative feedback. The magic is that we do not have to explicitly program their behaviour, rather they just learn, and — since this is not magic after all — we have got much better at building such self-learning machines. (I am skipping any detailed explanation of such ‘deep learning’ strategies here.) What gives this field its current immediacy is a combination of increases in computing power, previously unimaginably large data sets (for training), advances in how to encode such ‘deep learning’, and wide potential applicability — from email spam filtering, terrorist identification and online recommendation systems, to self-driving cars. And medical imaging along the way.
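The feedback idea can be shown at toy scale. This is the classic perceptron rule on made-up data, nothing like the deep networks in the paper, but the principle of nudging weights after each right or wrong answer is the same:

```python
# A toy single "neuron" learning two categories from feedback alone
# (the classic perceptron rule). Deliberately minimal: deep networks
# stack millions of such units, but the idea of adjusting weights
# in response to positive and negative feedback is the same.

# Each example: (features, label). Purely illustrative 2-D points,
# label 1 for one category ("cat"), 0 for the other ("dog").
examples = [((2.0, 1.0), 1), ((1.5, 2.0), 1),
            ((-1.0, -0.5), 0), ((-2.0, -1.5), 0)]

w = [0.0, 0.0]  # weights, adjusted by feedback
b = 0.0         # bias

for _ in range(20):                  # repeated exposure, like the child
    for (x1, x2), label in examples:
        prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = label - prediction   # the feedback signal
        # nudge weights toward correct answers, away from wrong ones
        w[0] += error * x1
        w[1] += error * x2
        b += error

# After training, the neuron classifies all four examples correctly,
# yet no explicit rule was ever written down.
for (x1, x2), label in examples:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
```

No rules are programmed in: the category boundary emerges entirely from the feedback loop, which is the metaphor the cats-and-dogs story is pointing at.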
In the Nature paper by Thrun and colleagues (2) such ‘deep learning’ approaches were used to train computers on over 100,000 medical images of skin cancer or mimics of skin cancer. The inputs were therefore ‘pixels’ and the diagnostic category (only). If this last sentence does not shock you, you are either an expert in machine learning, or you are not paying attention. The ‘machine’ was then tested on a new sample of images and — since modesty is not a characteristic of a young science — its performance compared with that of over twenty board-certified dermatologists. If we use standard receiver operating characteristic (ROC) curves to assess performance, the machine equalled, if not out-performed, the humans.
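For readers unfamiliar with ROC analysis, here is a minimal sketch (with made-up scores, not the paper's data) of the summary statistic involved. The area under the ROC curve is the probability that a randomly chosen malignant lesion is scored higher than a randomly chosen benign one:

```python
# Toy illustration of ROC comparison. Scores are made up,
# NOT data from the Nature paper.

def auc(pos_scores, neg_scores):
    """Area under the ROC curve, computed by pairwise comparison:
    the fraction of (positive, negative) pairs where the positive
    case receives the higher score (ties count half)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier outputs: probability-of-melanoma scores
malignant = [0.9, 0.8, 0.75, 0.6]   # true melanomas
benign = [0.4, 0.3, 0.7, 0.2]       # mimics

print(auc(malignant, benign))  # 0.9375 for these toy numbers
```

An AUC of 0.5 is coin-tossing and 1.0 is perfect ranking; comparing the machine's curve against the dermatologists' sensitivity/specificity points is how the paper frames "equalled if not out-performed".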
There are of course some caveats. The dermatologists were only looking at single photographic images, not the patients (4); the images are possibly not representative of the real world; and some of us would like to know more about the exact comparisons used. However, I would argue that there are also many reasons for imagining that the paper may underestimate the power of this approach: it is striking that the machine was learning from images that were relatively unstandardised and perhaps noisy in many ways. And if 100,000 seems large, it is still only a fraction of the digital images that are acquired daily in clinical practice.
It is no surprise that the authors mention the possibilities of their approach when coupled with the most ubiquitous computing device on this planet — the mobile phone. Thinking about the impact this will have on dermatology and dermatologists would require a different sort of paper from the present one but, as Marc Andreessen once said (4), ‘software is eating the world’. Dermatology will survive, but dermatologists may be on the menu.
Full paper with references on Acta is here.
I have been busy updating some teaching stuff. It is never finished but there is time for a little pause. I have completed all the SoundCloud audio answers to the questions in ed.derm.101 (Part C) and there is a ‘completed’ version of ed.derm.101 Part C half way down the linked page. Not all the links have been checked, and a lot had to be redone because the superb New Zealand Dermnet site changed their design (the best source of dermatology images, IMHO). An example of the sort of audio material is below.
I was sat in a meeting recently. We were discussing teaching, amongst other things. And I pencilled out what we in dermatology deliver each year, every year (subtext: I doubted that people realise how much effort and resource we need to teach clinical medicine).
We provide clinical placements for 36 weeks per year, with 12-14 students attached for each two-week period, in 18 ‘cohorts’. Over the year we provide just under 400 hours of clinical seminars, in which patients appear but are there for teaching purposes only, with the students in groups of around ten, and with the teacher having no other responsibility (they are not managing patients). In addition we provide around 1500 hours of clinical experience — timetabled events in which a student attends a session in which they are not the focus of attention. These latter sessions are real: they are timetabled by person and time, start and finish on time, and are rarely cancelled or changed.
Students like what we do, they like the online stuff, too, and the staff are enthusiastic. But this system does not run itself and, in the long term, I fear might not be sustainable, even though the funding is said to be there. It is certainly not optimal, even though our students get a better deal than students at most other UK medical schools. We need to do something else, building on what we do well. Just thinking.
When I was a child, growing up in Wales, my father would express puzzlement that I didn’t seem to know how to pronounce certain words. He didn’t get that since Welsh was his equal first tongue — but not mine — knowing how you pronounce Welsh words was obvious to him, but not to me. For my part, it was only scores of years later that I realised some of his verbal mannerisms were not just odd idiosyncratic English or slang, but Welsh, although the meaning was clear to me. I had just not realised these were Welsh words or phrases, and of course I too would use them.
I have noted in the past that when students mispronounce some of these dreadful dermatological terms, it was a signal that they had read about a disease, but had never been taught about it. It signalled to me how much they were acquiring on their own. English is like that, certainly in comparison with German: until you hear a word spoken, guessing how you say it is tricky. More so when you chuck in the various languages that contribute to the dermatological lexicon — and when they are then spoken / bastardised by English speakers.
But today a student pointed out that it would be helpful to include how words are pronounced in our course material. I am not certain how to do this yet, but I can believe that not knowing how to pronounce a term might ‘inhibit’ thinking and ‘silent talking’ about the topic (I do not know whether there is any research to back this opinion up).
Nick Carr writes about e-textbooks, quoting research that students don’t like them, or at least that they prefer conventional textbooks. Seems reasonable to me. We know a lot more about the design of conventional textbooks: layout, indexing, interaction and so on. But for dermatology it seems to me e-textbooks offer a way forward. If you want to learn dermatology, you have to look at images, and to do this well, you need access to lots and lots of images. One of the conclusions of a paper we published several years ago was how few instances of a particular disease students are exposed to. Seeing only n of 1 for a particular lesion type is just not enough: imagine your sole idea of what a ‘dog’ is being based on seeing only one poodle. Current publishing models and norms mean that most dermatology textbooks are short on images — and often the images they contain are poor. E-textbooks are one way round this, and it is difficult to look at an iPad and not wonder what a good dermatology text would look like on it. What will be really interesting is what will happen to the legacy publishers, given the price sensitivity of undergraduate students and the lower barriers to entry.
Annotation and memory of position on the page are important issues, but I have no doubt invention will improve things. Just look at the way the ‘clunky’ Kindle allows you to highlight text, then retrieve it on the Amazon web site and go back to the text at the various bookmarks. A scholar’s dream for encouraging accurate referencing and citation.
^^ Skincancer909 is currently being rewritten and the future version will incorporate video with a new design.
I have added some more SoundCloud answers, and added and sorted the links in Part C Chapters 5, 6 and 7. Getting there.
I have posted some new audio SoundCloud answers to questions from the first three chapters of ed.derm.101 Part C.
There is sometimes a prejudice in medical education that somehow teaching at the bedside is always best. Of course most medical encounters are not at the bedside (any more) simply because most clinical encounters are not on wards, but in offices, whether the offices are in hospitals or elsewhere. The arguments for the bedside include tradition, but also reflect the fear that medical education will be expropriated from the clinical context. I have a lot of sympathy with the latter view, but it will sometimes lead to error.
Yesterday, I talked about the Dermofit App, to which I contributed. One of the rationales for this whole approach, almost a dozen years ago now, was my belated realisation that clinical exposure — however intense — in dermatology might not be as efficient as a learning environment in a virtual world. In dermatology, simulation is over one and a half centuries old, and the history of this simulation tracks the development of technology. It is just that this simulation relies on something we have got used to because it is all around us: high quality graphics. Pictures of lesions.
Several years later we published a paper, exploring this. We wrote:
“The overwhelming majority of students 82% (n = 41) did not see an example of each of the three major skin cancers (BCC, SCC, melanoma) and only a single student (2%) witnessed two examples of each. The percentage of students witnessing 1, >3 and >5 examples is given for each of the 16 lesions and demonstrates that there was not only a lack of breadth but also of depth to the students’ exposure.”
In one sense this is all very obvious. We know that (perceptual) classification tasks require practice, and that practice requires multiple training examples. The training signal-to-noise ratio can be higher in the virtual world, and it is easier to manipulate events there. If the quip is that technology is everything that gets invented after your teenage years, we don’t recognise the obvious technology here simply because it has been around so long. It is just that silicon really allows it to be done so much better. The caveat is whether the business model allows this.
Students will prefer the clinic, for reasons I understand. But they will often be wrong to do so.
The App version of Dermofit is on the App store. Here is a link to a link. I have only just started to play around with it. The App was a commercialisation of some work we did here in Edinburgh, between myself and Bob Fisher in Informatics. If you search my main site reestheskin.me using the keyword ‘dermofit’ you will find a little more about it and the work that led up to it. It is for iPad only. [Yes, I stand to benefit from any sales, but I do not think I will be giving up the day job anytime soon]. I will write another time about the ‘why’ and rationale behind this whole approach.
A short video on epidermal biology with an emphasis on barrier function and irritant dermatitis.