Some areas of medical research are more legitimate than others. Finding genes is OK, as is finding drugs. But other bits of the health enterprise seem immune to rational scrutiny. One area that has bugged me for years is the way hospitals, charities, professional bodies and others plaster images of skin cancer around, on the assumption that this will ‘make things better’. It might, or it might not. So here is a recent paper of ours on this topic. Take-away message: if there is a fixed resource envelope, many if not most strategies may make matters worse. The abstract is below. The images will of course remain up; I am not certain the rationale is the obvious one.
Abstract: Using an experimental task in which lay persons were asked to distinguish between 30 images of melanomas and common mimics of melanoma, we compared various training strategies, including the ABC(D) method, use of images of both melanomas and mimics of melanoma, and alternative methods of choosing training image exemplars. Based on a sample size of 976 persons and an online experimental task, we show that all the positive training approaches increased diagnostic sensitivity when compared with no training, but only the simultaneous use of melanoma and benign exemplars, as chosen by experts, increased specificity and diagnostic accuracy. The ABCD method and the use of melanoma exemplar images chosen by laypersons decreased specificity in comparison with the control. The method of choosing exemplar images is therefore important. The changes in performance are, however, very modest: accuracy improved by only 9% between the control and the best-performing strategy.
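For readers less familiar with the metrics the abstract compares, here is a minimal sketch of how sensitivity, specificity and diagnostic accuracy are computed from confusion-matrix counts. The counts in the example are made up for illustration; they are not the study's data.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic metrics from confusion-matrix counts.

    tp: melanomas correctly flagged as melanoma
    fn: melanomas missed (called benign)
    tn: benign mimics correctly passed as benign
    fp: benign mimics wrongly flagged as melanoma
    """
    sensitivity = tp / (tp + fn)               # fraction of melanomas caught
    specificity = tn / (tn + fp)               # fraction of mimics correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp) # overall fraction correct
    return sensitivity, specificity, accuracy

# Hypothetical counts, not taken from the paper:
sens, spec, acc = diagnostic_metrics(tp=12, fn=3, tn=10, fp=5)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

The definitions make the paper's trade-off concrete: training can raise sensitivity (more melanomas flagged) while lowering specificity (more benign lesions wrongly flagged), so sensitivity gains alone do not imply better overall accuracy.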
Authors: Ella Cornell, Karen Robertson, Robert D. McIntosh, Jonathan L. Rees
Full open-access paper here.