I am hard at work on a new version of skincancer909, so odd breaks from blogging will occur, as my deadline is fast approaching. Instead, a few images, to assuage my guilt, will follow. It won’t compare with the quality of the masters (below), but the technology allows some of us to stand high on their shoulders.
I guess you could call this inverse dermatology.
My first publication was on eccrine sweating. I was known (for a while) as the ‘sweat guy’, too. Only at work, I note.
There was an interesting paper published in Nature recently on the topic of automated skin cancer diagnosis. Readers of my online work will know it is a topic close to my heart.
Here is the text of a guest editorial I wrote for Acta about the paper. Acta is a ‘legacy’ journal that made the leap to full OA under Anders Vahlquist’s supervision a few years back — it is therefore my favourite skin journal. This month’s edition is the first to exist online only, without a paper copy. The link to the edited paper and references is here. I think this is the first paper in their first online-only edition :-). Software is indeed eating the world.
When I was a medical student close to graduation, Sam Shuster, then Professor of Dermatology in Newcastle, drew my attention to a paper that had just been published in Nature. The paper, from the laboratory of Robert Weinberg, described how DNA from human cancers could transform cells in culture (1). I tried reading the paper, but made little headway because the experimental methods were alien to me. Sam did better, because he could distinguish the underlying melody from the supporting orchestration. He told me that whilst there were often good papers in Nature, perhaps only once every ten years or so would you read a paper that would change both a field and the professional careers of many scientists. He was right. The paper by Weinberg was one of perhaps fewer than a dozen that defined an approach to the biology of human cancer that still resonates forty years later.
Revolutionary papers in science have one of two characteristics. They are either conceptual, offering a theory that is generative of future discovery — think DNA, and Watson and Crick. Or they are methodological, allowing what was once impossible to become almost trivial — think DNA sequencing or CRISPR technology. Revolutions in medicine are slightly different, however. Yes, of course, scientific advance changes medical practice, but to fully understand clinical medicine we need to add a third category of revolution. This third category comes from papers that change the everyday lives of what doctors do and how they work. Examples would include fibreoptic instrumentation and modern imaging technology. To date, dermatology has escaped such revolutions, but a paper recently published in Nature suggests that our time may have come (2).
The core clinical skill of the dermatologist is categorising morphological states in a way that informs prognosis with, or without, a therapeutic intervention. Dermatologists are rightly proud of these perceptual skills, although we have little insight as to how this expertise is encoded in the human brain. Nor should we be smug about our abilities as, although the domains are different, the ability to classify objects in the natural world is shared by many animals, and often appears effortless. Formal systems of education may be human specific, but the cortical machinery that allows such learning, is widespread in nature.
There have been two broad approaches to trying to imitate these skills in silico. In the first, particular properties (shape, colour, texture and so on) are explicitly identified and, much as we might add variables to a linear regression equation, the information is used to discriminate between lesions in an explicit way. Think of the many papers using rule-based strategies such as the ABCD system (3). This is obviously not the way the human brain works: a moment’s reflection on how fast an expert can diagnose skin cancers, and how limited we are at handling formal mathematics, tells us that human perceptual skills do not work like this.
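As a caricature of what such an explicit, rule-based strategy looks like in code — the features, weights and threshold below are loosely in the spirit of ABCD-style scoring but are invented here for illustration, not taken from any published scheme:

```python
# Toy sketch of a rule-based lesion score: explicit hand-crafted features,
# explicit weights, explicit decision threshold. All numbers invented.

def lesion_score(asymmetry, border_irregularity, colour_variation, diameter_mm):
    """Combine hand-crafted features into a single suspicion score."""
    return (1.3 * asymmetry               # asymmetry graded 0-2
            + 0.1 * border_irregularity   # border graded 0-8
            + 0.5 * colour_variation      # number of colours, 0-6
            + 0.5 * (diameter_mm > 6))    # binary size flag

def is_suspicious(score, threshold=4.75):
    """Flag the lesion if the weighted sum crosses the threshold."""
    return score > threshold
```

The point of the caricature is the contrast with what follows: every feature, weight and cut-off must be chosen and written down by a human.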
There is an alternative approach, one that to some extent almost seems like magic. The underlying metaphor is as follows. When a young child learns to distinguish between cats and dogs, we know the language of explicit rules is not used: children cannot handle multidimensional mathematical space or complicated symbolic logic. But feedback, in terms of what the child thinks, allows the child to build up his or her own model of the two categories (cats versus dogs). With time, and with positive and negative feedback, the accuracy of the perceptual skills increases — but without any formal rules that the child could write down or share. And of course, since it is a human being we are talking about, we know all of this process takes place within and between neurons.
Computing scientists started to model the way they believed collections of neurons worked over four decades ago. In particular, it became clear that groups of in silico neurons could order the world based on positive and negative feedback. The magic is that we do not have to explicitly program their behaviour; rather, they just learn — and since this is not magic after all, we have got much better at building such self-learning machines. (I am skipping any detailed explanation of such ‘deep learning’ strategies here.) What gives this field its current immediacy is a combination of increases in computing power, previously unimaginably large data sets (for training), advances in how to encode such ‘deep learning’, and wide potential applicability — from email spam filtering, terrorist identification and online recommendation systems to self-driving cars. And medical imaging along the way.
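The contrast with the rule-based approach can be made concrete with the simplest possible in silico neuron — a single unit that adjusts its weights from positive and negative feedback alone, with no classification rules written down anywhere. This is a minimal sketch (the two ‘pixel-like’ features and the cat/dog labels are invented); deep networks stack many layers of such units, but the feedback principle is the same:

```python
# A single artificial neuron learning from feedback (a perceptron).
# No rules are programmed: the weights emerge from error signals alone.

def train(examples, labels, epochs=50, lr=0.1):
    """Adjust weights by the error (feedback) on each example."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                                # the feedback signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Two made-up pixel-like features per image; label 1 = 'cat', 0 = 'dog'.
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

After training, the learned weights separate the two made-up categories even though no rule was ever stated — the in silico analogue of the child and the poodle.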
In the Nature paper by Thrun and colleagues (2), such ‘deep learning’ approaches were used to train computers on over 100,000 medical images of skin cancer or mimics of skin cancer. The inputs were therefore ‘pixels’ and the diagnostic category (only). If this last sentence does not shock you, you are either an expert in machine learning, or you are not paying attention. The ‘machine’ was then tested on a new sample of images and — since modesty is not a characteristic of a young science — its performance compared with that of over twenty board-certified dermatologists. Judged by standard receiver operating characteristic (ROC) curves, the machine equalled, if not outperformed, the humans.
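For readers unfamiliar with how such comparisons are scored: the area under the ROC curve is equivalent to the probability that a randomly chosen malignant case receives a higher suspicion score than a randomly chosen benign one. A toy illustration — the scores and labels below are invented, not taken from the paper:

```python
# The area under the ROC curve, computed via its probabilistic
# interpretation: the chance a random positive outscores a random negative
# (ties count half). Scores and labels below are invented for illustration.

def auc(scores, labels):
    positives = [s for s, l in zip(scores, labels) if l == 1]
    negatives = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n)
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical machine probabilities for six lesions (1 = melanoma).
labels  = [1, 1, 1, 0, 0, 0]
machine = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
```

An AUC of 1.0 means perfect ranking of malignant above benign; 0.5 is guessing. Comparing the machine’s curve with each dermatologist’s single sensitivity/specificity point is how the paper frames ‘equalled if not outperformed’.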
There are of course some caveats. The dermatologists were only looking at single photographic images, not the patients (4); the images are possibly not representative of the real world; and some of us would like to know more about the exact comparisons used. However, I would argue that there are also many reasons for imagining that the paper may underestimate the power of this approach: it is striking that the machine was learning from images that were relatively unstandardised and perhaps noisy in many ways. And if 100,000 seems large, it is still only a fraction of the digital images that are acquired daily in clinical practice.
It is no surprise that the authors mention the possibilities of their approach when coupled with the most ubiquitous computing device on this planet — the mobile phone. Thinking about the impact this will have on dermatology and dermatologists would require a different sort of paper from the present one but, as Marc Andreessen once said (4), ‘software is eating the world’. Dermatology will survive, but dermatologists may be on the menu.
Full paper with references on Acta is here.
I have been busy updating some teaching stuff. It is never finished but there is time for a little pause. I have completed all the SoundCloud audio answers to the questions in ed.derm.101 (Part C) and there is a ‘completed’ version of ed.derm.101 Part C halfway down the linked page. Not all the links have been checked, and a lot had to be redone because the superb New Zealand Dermnet site (the best source of dermatology images, IMHO) changed their design. An example of the sort of audio material is below.
I was sat in a meeting recently. We were discussing teaching, amongst other things. And I pencilled out what we in dermatology deliver each year, every year (subtext: I doubted that people realise how much effort and resource we need to teach clinical medicine).
We provide clinical placements for 36 weeks per year, with 12-14 students attached for each two week period, in 18 ‘cohorts’. Over the year we provide just under 400 hours of clinical seminars, in which patients appear, but are there for teaching purposes only, with the students in groups of around ten, and with the teacher having no other responsibility (they are not managing patients). In addition we provide around 1500 hours of clinical experience — timetabled events in which a student attends a session in which they are not the focus of attention. These latter sessions are real: they are timetabled by person and time, start and finish on time, and are rarely cancelled or changed.
Students like what we do, they like the online stuff, too, and the staff are enthusiastic. But this system does not run itself and, in the long term, I fear might not be sustainable, even though the funding is said to be there. It is certainly not optimal, even though our students get a better deal than students at most other UK medical schools. We need to do something else, building on what we do well. Just thinking.
When I was a child, growing up in Wales, my father would express puzzlement that I didn’t seem to know how to pronounce certain words. He didn’t get that since Welsh was his equal first tongue — but not mine — knowing how you pronounce Welsh words was obvious to him, but not to me. For my part, it was only scores of years later that I realised some of his verbal mannerisms were not just odd idiosyncratic English or slang, but Welsh, although the meaning was clear to me. I had just not realised these were Welsh words or phrases, and of course I too would use them.
I have noted in the past that when students mispronounce some of these dreadful dermatological terms, it was a signal that they had read about a disease, but had never been taught on it. It signalled to me how much they were acquiring on their own. English is like that, certainly in comparison with German: until you hear the word spoken, guessing how you say it is tricky. More so when you chuck in the various languages that contribute to the dermatological lexicon — and when they are then spoken / bastardised by English speakers.
But today a student pointed out that it would be helpful to include how words are pronounced in our course material. I am not certain how to do this yet, but I can believe that not knowing how to pronounce a term might ‘inhibit’ thinking and ‘silent talking’ about the topic (I do not know whether there is any research to back this opinion up).
Nick Carr writes about e-textbooks, quoting research that students don’t like them, or at least that they prefer conventional textbooks. Seems reasonable to me. We know a lot more about the design of conventional textbooks — layout, indexing, interaction and so on. But for dermatology it seems to me e-textbooks offer a way forward. If you want to learn dermatology, you have to look at images, and to do this well, you need access to lots and lots of images. One of the conclusions of a paper we published several years ago was how few instances of a particular disease students are exposed to. Seeing only n of 1 for a particular lesion type is just not enough: imagine if your sole idea of what a ‘dog’ is was based on seeing only one poodle. Current publishing models and norms mean that most dermatology textbooks are short on images — and often the images they contain are poor. E-textbooks are one way round this, and it is difficult to look at an iPad and not wonder what a good dermatology text would look like on it. What will be really interesting is what will happen to the legacy publishers, given the price sensitivity of undergraduate students and the lower barriers to entry.
Annotation and memory of position on the page are important issues, but I do not doubt that invention will improve things. Just look at the way the ‘clunky’ Kindle allows you to highlight text, then retrieve it on the Amazon web site and go back to the text at the various bookmarks. A scholar’s dream for encouraging accurate referencing and citation.
^^ Skincancer909 is currently being rewritten and the future version will incorporate video with a new design.
I have added some more SoundCloud answers and added and sorted the links in Part C Chapters 5,6,7. Getting there.
I have posted some new audio SoundCloud answers to questions from the first three chapters of ed.derm.101 Part C.
There is sometimes a prejudice in medical education that somehow teaching at the bedside is always best. Of course most medical encounters are not at the bedside (any more) simply because most clinical encounters are not on wards, but in offices, whether the offices are in hospitals or elsewhere. The arguments for the bedside include tradition, but also reflect the fear that medical education will be expropriated from the clinical context. I have a lot of sympathy with the latter view, but it will sometimes lead to error.
Yesterday, I talked about the Dermofit App, to which I contributed. One of the rationales for this whole approach, almost a dozen years ago now, was my belated realisation that clinical exposure — however intense — in dermatology might not be as efficient as a learning environment in a virtual world. In dermatology, simulation is over one and a half centuries old, and the history of this simulation tracks the development of technology. It is just that this simulation relies on something we have got used to because it is all around us: high quality graphics. Pictures of lesions.
Several years later we published a paper, exploring this. We wrote:
“The overwhelming majority of students 82% (n = 41) did not see an example of each of the three major skin cancers (BCC, SCC, melanoma) and only a single student (2%) witnessed two examples of each. The percentage of students witnessing 1, >3 and >5 examples is given for each of the 16 lesions and demonstrates that there was not only a lack of breadth but also of depth to the students’ exposure.”
In one sense this is all very obvious. We know that (perceptual) classification tasks require practice, and that practice requires multiple training examples. The training signal-to-noise ratio can be higher in the virtual world, and it is easier to manipulate events in the virtual world. If the quip is that technology is everything that gets invented after your teenage years, we don’t recognise the obvious technology here simply because it has been around so long. It is just that silicon really allows it to be done so much better. The caveat is whether the business model allows this.
Students will prefer the clinic, for reasons I understand. But they will often be wrong to do so.
The App version of Dermofit is on the App store. Here is a link to a link. I have only just started to play around with it. The App was a commercialisation of some work we did here in Edinburgh, between myself and Bob Fisher in Informatics. If you search my main site reestheskin.me using the keyword ‘dermofit’ you will find a little more about it and the work that led up to it. It is for iPad only. [Yes, I stand to benefit from any sales, but I do not think I will be giving up the day job anytime soon]. I will write another time about the ‘why’ and rationale behind this whole approach.
A short video on epidermal biology with an emphasis on barrier function and irritant dermatitis.
A basic introduction to clinical photobiology for our students.
I don’t tend to blog or write much about the nitty-gritty of some of the day job. And I try to avoid the partisanship that affects so much of medicine in terms of this disease or that disease being a priority…. can we have more research money or money for more doctors etc. But this post is perhaps an exception, and it is written out of utter frustration at the incompetence of the UK health services.
Some data from a paper in press in the JID looking at age-cohort models of melanoma, trying to predict what might happen over time. When I came into dermatology, crude incidence rates for melanoma were 5.8/100,000. When I retire (if I get that far) they are estimated to be 31.4/100,000. A timespan of one career.
Some context. Yes, this is incidence not mortality; diagnostic patterns have changed over time; and our knowledge of the natural history of melanoma remains incomplete. But, in the absence of a major scientific advance that changes the field (I see little evidence of this), incidence rates will be a key driver of health care needs. Given that melanoma diagnosis is clinical, and that we need to see 10–20 patients to pick up one melanoma, we are looking at a sixfold change in workload over one clinician’s career. And this ignores the more intensive nature of treatment and follow up, so workload will increase even more. Nor is melanoma the most common cancer we see. If cancer is ‘half our work’, melanoma rates are possibly an underestimate of the changes we will see, as the exponent relating incidence to UVR is even higher for the non-melanoma skin cancers (NMSC).
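The back-of-envelope arithmetic behind the ‘sixfold’ claim, using only the two incidence figures and the 10–20 patients-per-melanoma ratio quoted above (the assumption that the ratio holds at both time points is mine):

```python
# Back-of-envelope check: incidence rises from 5.8 to 31.4 per 100,000
# over one career, and each melanoma found means seeing roughly 10-20
# patients with suspicious lesions.

start, end = 5.8, 31.4            # crude incidence per 100,000
fold_change = end / start         # roughly sixfold

# Patients needing assessment per 100,000 population per year, assuming
# the see-10-to-20-per-melanoma ratio holds at both ends of the career.
referrals_then = (start * 10, start * 20)   # about 58 to 116
referrals_now  = (end * 10, end * 20)       # about 314 to 628
```

The fold change comes out at about 5.4, which is the basis for the ‘sixfold’ figure; the absolute referral numbers scale by the same factor, before any extra treatment and follow-up workload is counted.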
At present, perhaps 20% of UK consultant dermatologist posts are vacant, and the UK has the lowest number of dermatologists per unit population in the EU by an order of magnitude (certainly in comparison with the German speaking world). I will repeat that: by an order of magnitude. Primary care is in meltdown, too. And while there are lots of well qualified trainees who want to become dermatologists, training positions are essentially static, reflecting classic market corruption by a monopoly. And to cap it all, the Colleges (‘little Englanders’) have recently made the situation even worse, with bizarre changes to dermatology training that will worsen care in the UK rather than reflect the changes that will be necessary looking to the future: bracing forward with their eyes on the rear-view mirror.
This is not an easy problem to solve. Nor do I think disease rates have to drive doctor numbers in a 1 for 1 way (they don’t); we need to rethink how this profession works. But if you claim to provide a national health service, it is surely negligent not to at least realise that there is a problem — and tell people. My university produces plans. Yes, lots of padding, but also attempts to think about how we will function in 2020 and 2025. If you look at health boards, in any meaningful sense, I just see a void. And of course, this isn’t just about dermatology, as my GP politely pointed out to me last night: it is across the board.
A short video on urticaria and the biology of mast cells.
Here is number 3. My favourite bit of skin biology.
Here is video number 2 in the ‘clinical’ skin biology series.
I have been busy producing and updating some videos. Here is the first in a series on skin biology.
Hywel Williams gets lyrical about atopic dermatitis. Worth a cwtch, as we might say.
This made me laugh. I have got used to MC1R mutations and red hair in Neanderthals, but this article (full research paper in Science here) brought a smile to my face, even if I am still a little hazy on the genetics.
JBS Haldane once commented ‘that God would appear to be inordinately fond of beetles’, based on the observation that the world was so full of different species of beetles.
I have long had similar thoughts about seborrhoeic keratoses. God must be inordinately fond of them. Seborrhoeic keratoses are benign skin tumours, some of which contain identified mutations: they generally attract little serious research interest (apart from yours truly, of course). However, their clinical significance is enormous. This is because they are incredibly common as people move into their fourth decades and beyond, and because they vary so much in their morphological appearance. They mimic everything, including melanoma. So, most things referred as possible melanomas in many clinics will turn out to be harmless seborrhoeic keratoses. Of course, a more cynical view is that since seborrhoeic keratoses are such great mimics, they in effect create lots of work for dermatologists. I suppose I should say thank you next time I bump into one of my distant cousins, but the basis of the link — if confirmed — also deserves some serious mechanistic thought.
Remember those compare and contrast questions (UC versus Crohns; DLE versus LP etc.). Well, look at these two quotes from articles in the same edition of Nature.
The first is from the tsunami of papers showing that ‘Something is rotten in the state of Denmark Science’ — essentially that the Mertonian norms for science have been well and truly trampled over.
Journals charge authors to correct others’ mistakes. For one article that we believed contained an invalidating error, our options were to post a comment in an online commenting system or pay a ‘discounted’ submission fee of US$1,716. With another journal from the same publisher, the fee was £1,470 (US$2,100) to publish a letter. Letters from the journal advised that “we are unable to take editorial considerations into account when assessing waiver requests, only the author’s documented ability to pay”.
Discrete Analysis’ [the journal] costs are only $10 per submitted paper, says Gowers; money required to make use of Scholastica, software that was developed at the University of Chicago in Illinois for managing peer review and for setting up journal websites. (The journal also relies on the continued existence of arXiv, whose running costs amount to less than $10 per paper.) A grant from the University of Cambridge will cover the cost of the first 500 or so submissions, after which Gowers hopes to find additional funding or ask researchers for a submission fee.
Well done the Universities of Cambridge and Cornell (arXiv). For science, the way forward is clear. But for much clinical medicine, including much of my own field, we need to break down the barriers between publication and posting online information that others may find useful. This cannot happen until the financial costs approximate to zero.