I like Gerd Gigerenzer’s writings (see for instance The Empire of Chance and Simple Heuristics That Make Us Smart) and I am sure it is not his fault that the same stories keep coming round again and again. This story on the BBC web site treads over old ground but of course the lessons remain the same (even if the book is different). Doctors don’t like working with Bayes’ theorem in clinic, at least not if we have to use algebra rather than concrete natural frequencies (as Gigerenzer makes clear). And I still think we do a poor job of teaching medical students statistics. But something niggles me about his line of argument, and in part it is not a million miles away from some of Gigerenzer’s other work on heuristics and ‘quick and dirty’ computation.
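Gigerenzer’s point is easy to demonstrate with a small sketch. The screening numbers below (prevalence, sensitivity, false-positive rate) are purely illustrative, not from any real test: the algebraic Bayes route and the ‘imagine 1000 patients’ natural-frequency route give the same answer, but the second is the one people can actually follow.

```python
# Hypothetical screening-test numbers, chosen only to illustrate the point.
prevalence = 0.01        # 1% of patients have the disease
sensitivity = 0.90       # P(positive test | disease)
false_pos_rate = 0.09    # P(positive test | no disease)

# Route 1: Bayes' theorem with conditional probabilities (the algebra).
p_positive = sensitivity * prevalence + false_pos_rate * (1 - prevalence)
ppv_bayes = sensitivity * prevalence / p_positive

# Route 2: natural frequencies -- imagine 1000 patients.
patients = 1000
with_disease = patients * prevalence                     # 10 people
true_pos = with_disease * sensitivity                    # 9 of them test positive
false_pos = (patients - with_disease) * false_pos_rate   # ~89 healthy people also test positive
ppv_freq = true_pos / (true_pos + false_pos)             # 9 out of ~98 positives are real

print(round(ppv_bayes, 3), round(ppv_freq, 3))  # both 0.092
```

The arithmetic is identical either way; the natural-frequency framing simply makes the surprising answer (fewer than 1 in 10 positives is a true positive) visible without any algebra.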
One view of expertise is that doctors somehow work from ‘basic principles’ and then work out what to do. This used to be the dominant view of medical expertise: we had to understand the physiology, so that we had a live model in our brain of what was happening to the patient. This may well be true in some instances, but more often it seems to me that the burden of knowledge needed to do this is so great that we just follow simple shortcuts or heuristics, or read the answer off look-up charts. I actually think this is sensible. We don’t need to fret about the molecules, just as I don’t need to worry about machine code or C++ when I write this blog. What Gigerenzer is drawing attention to is the absence of the relevant cognitive prosthesis that takes care of the number crunching for us. Of course, if the prosthesis existed, we would play with it, and actually become more at ease with the algebra.
The situation was a familiar one. Some time back, I was gossiping with a medical student, and he began to talk about some research he had done, supervised by another member of faculty. I asked what he had found out: what did his data show? What followed, I have seen if not hundreds of times, then at least on several score occasions: a look of trouble and consternation, a shrug of embarrassment, and the predictable word-salad of ‘significance’, t values, p values, statistics and ‘dunno’. Such is the norm. There are exceptions, but even amongst postgraduates who have undertaken research, the picture is not wildly different. Rarely, without directed questioning, can I get the student to tell me about averages or proportions, using simple arithmetic. A reasonable starting point, surely. ‘What does it look like if you draw it?’ is met with a puzzled look. And yet, if I ask the same student how they would manage psoriasis, or why skin cancers are more common in some people than others, I get, to varying degrees, a reasoned response. I asked the student how much tuition in statistics they had received. A few lectures was the response, followed by a silence, and then: “They told us to buy a book”. More silence. So this is what you pay >30K a year for? The student just smiled in agreement. This was a good student.
Statistics is difficult. Much of statistics is counter-intuitive and, like certain other domains of expertise, learning the correct basics often results in a temporary (or in some cases a permanent) drop in objective performance.** That is, you can make people’s ability to interpret numerical data worse by trying to teach them statistics. On the other hand, statistics is beautiful, hard, and full of wonderful insights that debunk the often sloppy thinking that passes for everyday ‘common sense’. I am a big fan, but have always found the subject anything but easy. But, like a lot of formal disciplines, the pleasure comes from the struggle to achieve mastery. I also think the subject important, and for the medical ecosystem at least, it is critical that there is high-level expertise within the community. On the other hand, in my experience many of the very best clinicians are (relatively) statistically illiterate. The converse is also seen.