This article (‘Humans may not always grasp why AIs act’) in the Economist gets to the right answer, but by way of a silly example involving brain scanning. The issue is that people are alarmed that it may not be possible to understand how an AI might come to a certain decision. The article rightly points out that we have the same problem with humans. This issue looms large in medicine, where many clinicians believe they can always explain to students how they come to the correct answer. The following is one of my favourite Geoff Norman quotes:
Furthermore, diagnostic success may be a result of processes that can never be described by the clinician. If the right diagnosis arises from pattern recognition, clinicians are unlikely to be able to tell you why they thought the patient had gout, any more than we can say how we recognize that the person on the street corner is our son. Bowen claims that “strong diagnosticians can generally readily expand on their thinking”; I believe, instead, that strong diagnosticians can tell a credible story about how they might have been thinking, but no one, themselves included, can really be sure that it is an accurate depiction.
The article is about Germany, but I just wonder how relevant the rite of passage of moving out of the family home is here.
Second, apprentices in less prestigious positions are paid very poorly, she said. A trainee hairdresser might receive just €350-€400 (£311-£356) a month, not enough to allow them to move out of their parents’ house, Professor Solga explained, and sectors with shortages such as hotel work or food processing often involve shift and evening work. “For young people, they are not the best working conditions,” she said. THE
Frédéric Filloux in the ever-readable Monday Note. And just as big T went for the developing world, so with FB.
Mark Zuckerberg talking: “There was this Deloitte study that came out the other day, that said if you could connect everyone in emerging markets, you could create more than 100 million jobs and bring a lot of people out of poverty.”
The Deloitte study, which did indeed say this, was commissioned by Facebook, based on data provided by Facebook, and was about Facebook.
I love statistics, but I am just not very good at it, and find much of it extremely counterintuitive (which is why it is ‘fun’). The Monty Hall problem floored me, but then Paul Erdős got it wrong too (I am told), so I am in good — and numerate — company. During my intercalated degree, in addition to research methods tutorials (class size, n=2), we had one three-hour stats practical each week (class size, n=10). We each used a Texas calculator, and working out an SD demanded concentration. Never mind that during the rest of the week we were learning how to use FORTRAN and SPSS on a mainframe; ‘slowing’ down the process was useful.
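A short simulation (a sketch of my own, not anything from those stats practicals) shows why the Monty Hall answer is what it is: switching wins whenever your first pick was wrong, which is two times out of three.

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Move to the one remaining unopened door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stick:  {monty_hall(switch=False):.3f}")   # about 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # about 2/3
```

Running it a few times makes the point more persuasively than any verbal argument ever did for me.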
Medicine has big problems with statistics, although it is often not so much to do with ‘mathematical’ statistics as with evidence in a broader sense. IMHO the biggest abusers are the epidemiologists and the EBM merchants, with their clickbait NNTs and the like. But I do think this whole field deserves much greater attention in undergraduate education, and I cannot help but feel that you need much more small-group teaching over a considerable period of time. Otherwise, it just degenerates into a ‘What is this test for?’ exam-fodder style of learning.
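The NNT itself is only simple arithmetic — the reciprocal of the absolute risk reduction — which is part of why it travels so well as clickbait. A sketch, with hypothetical trial numbers of my own choosing:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("No absolute risk reduction: NNT is undefined")
    return 1 / arr

# Hypothetical trial: events fall from 4% (control) to 2% (treated).
# The relative risk halves, which sounds dramatic; but the ARR is only
# 2 percentage points, so ~50 patients must be treated for one to benefit.
print(round(nnt(0.04, 0.02)))  # 50
```

The gap between ‘risk halved’ and ‘one in fifty benefits’ is exactly the sort of framing problem that no amount of ‘What is this test for?’ teaching prepares students for.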
The problems we have within both medicine and medical research have been talked about for a long while. Perhaps things are improving, but it is only more recently that this topic has been acknowledged as a problem amongst practising scientists (rather than medics). This topic certainly resurfaces with increased frequency, and there have been letters on it in Nature recently. I like this one:
Too many practitioners who discuss the misuse of statistics in science propose technical remedies to a problem that is essentially social, cultural and ethical (see J. Leek et al. Nature 551, 557–559; 2017). In our view, technical fixes are doomed. As Steven Goodman writes in the article, there is nothing technically wrong with P values. But even when they are correct and appropriate, they can be misunderstood, misrepresented and misused — often in the haste to serve publication and career. P values should instead serve as a check on the quality of evidence.
I think you could argue with the final sentence of this (selected) quote, but they are right about the big picture: narrow technical solutions are not the answer here. Instead, we are looking at a predictable outcome of the corruption of what being a scientist means.
In the US, “belief in work is crumbling among people in their 20s and 30s”, says Benjamin Hunnicutt, a leading historian of work. “They are not looking to their job for satisfaction or social advancement.” (You can sense this every time a graduate with a faraway look makes you a latte.)
“What were your most memorable moments at university?”
“There was a man called Walter Ullmann who taught medieval critical philosophy at 10am – and there was standing room only. I went every week, regardless of how wasted I’d got the night before, because he was brilliant.” THE
Reminds me of people queueing to get in to listen to Isaiah Berlin. Some merit as a metric: standing room only. (Until H&S arrive.)
PS. And, for another example, see this from a recent book review of a biography of Enrico Fermi (The Last Man Who Knew Everything: The Life and Times of Enrico Fermi, Father of the Nuclear Age. By David Schwartz).
[The author] interviewed many of Fermi’s students and colleagues, shedding light also on Fermi the educator (his lectures were so renowned that even notes taken by his assistants were a bestseller).
The Osborne effect is a term for the unintended consequences of a company announcing a future product, unaware of the risks involved, or when the timing is misjudged, so that the announcement ends up having a negative impact on the sales of the current product. This is often the case when a product is announced too long before its actual availability. The immediate effect is that customers cancel or defer orders for the current product, knowing that it will soon be obsolete, and any unexpected delays often mean the new product comes to be perceived as vaporware, damaging the company’s credibility and profitability.
AI and associated technologies will have major effects in some areas of medicine. Think skin cancer diagnosis, for certain; or this weekend’s story in the FT on eye disease; and radiology and pathology. This raises the question of whether these skills are so central to expertise within a clinical domain that students should think hard about these areas as a career. Of course, diagnosis of skin lesions is not all a clinical expert in this domain does. Ditto, ophthalmologists do more than look at retinas. Automated ECG readers have not put cardiologists out of work, after all. And many technical advances increase — not reduce — workloads.
But at some stage, people might want to start wondering whether some areas of medicine are (not) going to be secure as long-term careers. The Osborne metaphor should be a warning about how messy all this could be. Hype has costs.
The surge in open-access predatory journals is making it harder for contributors and readers to distinguish these from legitimate publications — a confusion that is fostered by the predatory-journal industry. One solution could be to deploy a variant of a well-established quality-control test. The scientific community could submit replicate test articles several times a year to a wide array of open-access journals, suspect and non-suspect.
From Steven N Goodman who, as ever, is worth reading. Of course, in one sense, it is a question of serial monogamy, or polygamy.
That’s a question I just got at our most recent all-hands meeting. I’ve been reminding people that it’s Day 1 for a couple of decades. I work in an Amazon building named Day 1, and when I moved buildings, I took the name with me. I spend time thinking about this topic.
“Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”
After earning his medical degree in 1951 he trained in hospitals in Montreal. “To my surprise I also found I enjoyed clinical medicine,” he wrote in his Nobel prize biography. Then he quipped, “It took three years of hospital training after graduation, a year of internship and two of residency in neurology, before that interest finally wore off.”
This article in the NEJM gets to the kernel of one of the major problems in medicine: the increasing dysfunction of the doctor-patient interaction, fuelled — in part — by awful IT, and a systemic inability to admit that it is no longer possible to actually do what is required within the ‘allocated’ time. In many industries the goal is to match task with skill and, wherever possible, to reduce costs by allocating low-skill tasks to those who cost less: ‘right person at the right time’. There is a variation of this in medicine: those charged with ‘support’ or undertaking ‘low-skill tasks’ have simply been removed, meaning all tasks — both high and low — are done by the same practitioner, but without any change in the time allocated. This is akin to asking the pilot of a plane to serve you snacks and check you in, while keeping the schedules the same.
In medicine, that this happens is not so much a manifestation of a managerial view that places little value on ‘care’ (true), nor of one in which business innovation (sic) is viewed as synonymous with sacking people (true), but of a complete failure to understand their own business and what their own product is. In an ideal world, businesses like this would go bust. The problems arise when they are run by the government; when there are third-party payers; or when there is actively created informational asymmetry. Sometimes all three apply.
An utterly committed researcher, Professor Barres would regularly work until 2am or 3am. He “slept on the floor of my small office”, recalled Professor Raff. “Every morning when I arrived and opened the door, it would whack him in the head – he eventually learned to sleep facing the opposite direction.”
Somewhere (I cannot remember where), after one of his seminars, Ben Barres’s intellectual depth was judged more favourably than that of his ‘sister’. His sister was his former ‘self’, Barbara Barres. Such a neat experimental design to tease apart causality.
I too worked somewhere where people slept overnight in the lab, although I think the deciding factor there was an inability to find or pay for a suitable flat, rather than enthusiasm.
Not the usual Nobel laureate spiel, but take a look. An article in Quartz is here, and Wikipedia is useful on him.
I like the ‘biogibberish’ epithet. And I cannot help but suspect he would agree with David Hubel’s line that reading most papers now is like chewing sawdust. But you can see the fire still burns: you have to be dissatisfied with the state of the universe. How polite or angry you are is a question of personal style.
As each year passes, the once celebrated barriers between man and the other animals become less secure. Once we were the only toolmakers; once we were the only ones to discover drugs or use technology. This report is about how finches commandeer cigarette butts for a new purpose.
That idea has been around, though never proved, since 2012. This was when Dr Suárez-Rodríguez showed that nests which had butts woven into them were less likely to contain bloodsucking parasites than were nests that did not. What she was unable to show was whether the nest-builders were collecting discarded cigarettes deliberately for their parasite-repelling properties, or whether that parasite protection was an accidental consequence of butts being a reasonably abundant building material.
The value of your mountains of data is becoming obvious, especially as you continue to push into new areas that collect more information about consumers while binding them closer to you, such as the home microphones you are careful to call home speakers. Link
Remember your physics 101: when is a speaker a microphone? And of course this is not all new — see the Intercept article (with great graphics) here
I think this is just silly. There are lots of things universities do badly, and there are a lot of things they have done well. And it is true most seem to be unaware of how they need to change, and what they need to hang onto.
The problem in my neck of the woods is that many of the proposed solutions to these problems risk making things worse. A lot worse. And in any case, using Eben Moglen’s terminology, how long do you think the thug with the hoodie will be running things?
I am not a big fan of lectures. The single best piece of advice I received at medical school was not to attend. I therefore skipped lectures for three years (although I got the handouts). It is not that all lectures are bad; they are not. It is just that they are often used for ‘content delivery’, much as we think about delivery of a takeaway. They are ill suited to this role, now that we can write and distribute text cheaply. Good lectures serve a different purpose, but you don’t need too many of them and, in my experience of medicine, there are very few people who lecture well. Lecturing well means choosing those fragments of a domain that lend themselves to this medium. Lectures are (and should be) theatre, but the theatre of the mind needs more.
By chance, I came across the following thoughts from the preface to the Ascent of Man (the TV series and the book). Bronowski understood many things, and I still marvel at how prescient his ideas were.
If television is not used to make these thoughts concrete, it is wasted. The unravelling of ideas is, in any case, an intimate and personal endeavour, and here we come to the common ground between television and the printed book. Unlike a lecture or a cinema show, television is not directed to crowds. It is addressed to two or three people in a room, as a conversation face to face – a one-sided conversation for the most part, as the book is, but homely and Socratic nevertheless. To me, absorbed in the philosophic undercurrents of knowledge, this is the most attractive gift of television, by which it may yet become as persuasive an intellectual force as the book.
The printed book has one added freedom beyond this: it is not remorselessly bound to the forward direction of time, as any spoken discourse is. The reader can do what the viewer and the listener cannot, which is to pause and reflect, turn the pages back and the argument over, compare one fact with another and, in general, appreciate the detail of evidence without being distracted by it.
As Bryan Caplan points out in his new book, The Case Against Education, most of the earnings differential associated with college does not reflect stuff colleges teach their students, but rather the already-existing advantages that college graduates possess (more intelligence, greater discipline, more ambition, more prior learning, etc.) that a diploma reveals to employers. The Sheepskin Effect is real. We expend enormous resources in producing pieces of paper (diplomas) conveying labor market information. The move toward getting a master’s degree—more diplomas—aggravates an already hugely inefficient system. [link]
This, of course, is the debate about HBS: what is valuable is not what they teach you, but their ability to recognise those who are already likely to succeed. Many years ago, Paul Graham wrote a great essay touching on many key issues that Higher Ed wants to wish away.
Luis von Ahn, somebody who has changed the world on more than one occasion, has also been awarded a teaching award from his own university. Take his tips seriously 😉
Pick great TAs. Over the years, I’ve had the most amazing set of TAs, and this teaching award is more than 50% due to them. If you’ve TA’d for me, I would like to give each of you part of the cash prize associated with this award. Too bad there isn’t one.
When you don’t know the answer to a question say it’s outside the scope of the course.
Teaching evaluations are highly correlated with the grade the students think they will get at the time of filling out the surveys. Make your course easy, then crush them on the final.
Never, under any circumstances, disclose the exact grade cutoffs at the end of the semester. Somebody has to get the highest B, and they won’t be happy. “You’re lucky you got a B, dude.”
Finish lecture 10 minutes early every time – they love this (and they’ll never know you love it even more).
Easiest way to get rid of whiners without yielding: “I’ll take that into account when calculating your final grade, dude.”
The internet, as digital journalist and commentator Cory Doctorow has remarked, is “an ecosystem of interruption technologies”. Always imagine that your readers are looking for a reason to — in Tinder terms — swipe left on your prose.
‘On December 16, 2017, the staff of the Centers for Disease Control and Prevention (CDC) were instructed not to use 7 words in its 2019 budget appropriation request: diversity, transgender, vulnerable, fetus, entitlement, evidence-based, and science-based. These basic phrases are intrinsic to public health. The US Department of Health and Human Services (HHS) offered alternative word choices, such as by modifying “evidence-based” with “community standards and wishes” and using “unborn child” instead of “fetus.”’
The endless concern about stamps of approval and achievement distorts education and can even rob an interesting career of its joys. A professor friend introducing students at an East Coast college to Beethoven was greeted with a dead-eyed question from the back of the class: ‘Excuse me professor, will this be in the test?’
A new copy of Glenn Hubbard and Tony O’Brien’s widely used introductory economics textbook costs more than some smartphones. The phone can send you to any part of the web and holds access to the sum of human knowledge. The book is about 800 heavy pages of static text. Yet thousands of college students around the US are shelling out $250 for these books, each semester, wincing at the many hours ahead of trying to make sense of this attempt to distill the global economy into tiny widgets and graphs.