I love statistics, but I am just not very good at it, and find much of it extremely counterintuitive (which is why it is ‘fun’). The Monty Hall problem floored me, but then Paul Erdős got it wrong too (I am told), so I am in good — and numerate — company. During my intercalated degree, in addition to research methods tutorials (class size, n=2), we had one three-hour stats practical each week (class size, n=10). We each used a Texas calculator, and working out an SD demanded concentration. Never mind: given that during the rest of the week we were learning to use FORTRAN and SPSS on a mainframe, ‘slowing down’ the process was useful.
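(If you want to see why Monty Hall floored so many numerate people, it only takes a few lines to check by brute force. This is my own quick sketch, not anything from the sources discussed here; the function name and trial count are arbitrary.)

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of games won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # prize placed behind a random door
        choice = random.randrange(3)   # contestant's first pick
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Switch to the one remaining closed door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(monty_hall(switch=False))  # close to 1/3
print(monty_hall(switch=True))   # close to 2/3
```

Sticking wins about a third of the time, switching about two thirds — which still feels wrong, even when you have just watched it happen.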
Medicine has big problems with statistics, although they are often not so much to do with ‘mathematical’ statistics as with evidence in a broader sense. IMHO the biggest abusers are the epidemiologists and the EBM merchants with their clickbait NNTs and the like. But I do think this whole field deserves much greater attention in undergraduate education, and I cannot help but feel that you need much more small-group teaching over a considerable period of time. Otherwise, it just degenerates into the ‘What is this test for?’ style of exam-fodder learning.
The problems we have within both medicine and medical research have been talked about for a long while. Perhaps things are improving, but it is only more recently that this has been acknowledged as a problem amongst practising scientists (rather than medics). The topic certainly resurfaces with increasing frequency, and there have been letters on it in Nature recently. I like this one:
Too many practitioners who discuss the misuse of statistics in science propose technical remedies to a problem that is essentially social, cultural and ethical (see J. Leek et al. Nature 551, 557–559; 2017). In our view, technical fixes are doomed. As Steven Goodman writes in the article, there is nothing technically wrong with P values. But even when they are correct and appropriate, they can be misunderstood, misrepresented and misused — often in the haste to serve publication and career. P values should instead serve as a check on the quality of evidence.
I think you could argue with the final sentence of this (selected) quote, but they are right about the big picture: narrow technical fixes are not the answer here. Instead, we are looking at a predictable outcome of the corruption of what being a scientist means.