A couple of articles from the two different domains of my professional life made me riff on some old memes. The first was an article in (I think) the Times Higher about the fraud-detection software Turnitin. I do not have any firsthand experience with Turnitin (‘turn-it-in’), as most of our exams use either clinical assessments or MCQs. My understanding is that submitted summative work is uploaded to Turnitin and the text compared with the corpus of text already collected. If strong similarities are present, the work might be fraudulent. A numerical score is provided, but some interpretation is necessary, because in many domains there will be a lot of ‘stock phrases’ that are part of domain expertise, rather than evidence of cheating. How was the ‘corpus’ of text collected? Well, of course, from earlier student texts that had been uploaded.
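The idea of comparing a submission against a corpus can be sketched in a few lines. This is a toy illustration only (Turnitin's actual algorithm is proprietary and certainly more sophisticated): break each text into word n-gram ‘shingles’ and score the overlap with Jaccard similarity. All names and example texts below are mine, not Turnitin's.

```python
# Toy similarity scorer: word n-gram shingles + Jaccard overlap.
# Illustrative only; not Turnitin's actual (proprietary) method.

def shingles(text, n=3):
    """Return the set of word n-grams ('shingles') in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission, corpus_doc, n=3):
    """Jaccard similarity of shingle sets: 0.0 (no overlap) to 1.0 (identical)."""
    a, b = shingles(submission, n), shingles(corpus_doc, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

earlier = "the mitochondrion is the powerhouse of the cell and drives metabolism"
copied  = "the mitochondrion is the powerhouse of the cell and drives metabolism"
fresh   = "ribosomes translate messenger rna into protein within the cytoplasm"

assert similarity(copied, earlier) == 1.0   # verbatim copy scores maximally
assert similarity(fresh, earlier) == 0.0    # unrelated text scores zero
```

Note that even this toy version shows why interpretation is needed: a domain's stock phrases would push the score up without any cheating having occurred.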
Universities need to pay for this service, because in the age of massification, lecturers do not recognise the writing style of the students they teach. (BTW, as Graham Gibbs has pointed out, the move from formal supervised exams to course work has been a key driver of grade inflation in UK universities).
I do not know who owns the rights to the texts students submit, nor whether they are able to assert any property rights. There may be other companies out there apart from Turnitin, but you can easily see that the more data they collect, the more powerful their software becomes. If the substrate is free, then the costs relate to how powerful their algorithms are. It is easy to imagine how this becomes a monopoly. However, if copies of all the submitted texts are kept by universities, then collectively it would make it easier for a challenger to enter the field. But network effects will still operate.
The other example comes from medicine rather than education. The FT ran a story about the use of ‘machine learning’ to diagnose retinal scans. Many groups are working on this, but this report was about Moorfields in London. I think I read that, as the work was being commercialised, the hospital would have access to the commercial software free of charge. There are several issues here.
Although I have no expert knowledge in this particular domain, I know a little about skin cancer diagnosis using automated methods. First, the clinical material and annotation of clinical material is absolutely rate-limiting. Second, once the system is commercialised, the more subsequent images that can be uploaded, the better you would imagine the system will become. This of course requires further image annotation, but if we are interested in improving diagnosis, we should keep enlarging the database if the costs of annotation are acceptable. As in the Turnitin example, the danger is that the monopoly provider becomes ever more powerful. Again, if the image use remains non-exclusive, then there are lower barriers to entry.
In addition to its vulnerability to spoofing, for example, there is its gross inefficiency. “For a child to learn to recognize a cow,” says Hinton, “it’s not like their mother needs to say ‘cow’ 10,000 times”—a number that’s often required for deep-learning systems. Humans generally learn new concepts from just one or two examples.
There is a nice review on Deep Learning in PNAS. The spoofing referred to is an ‘adversarial patch’ — a patch comprising an image of something else. In the example here, a mini-image of a toaster confuses the AI such that a very large banana is seen as a toaster (the paper is here on arXiv — an image is worth more than a thousand of my words).
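The underlying trick can be shown in miniature. The sketch below is a hedged, made-up illustration — a two-feature logistic classifier rather than a deep network, with invented weights and inputs — but the principle (an FGSM-style step: nudge the input in the direction that increases the loss) is the same one adversarial attacks exploit.

```python
import math

# Toy adversarial example. Not the adversarial-patch method from the arXiv
# paper; just a minimal logistic classifier with invented weights, to show
# how a small, targeted input change can flip a confident prediction.

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def adversarial(w, b, x, y, eps):
    """One FGSM-style step: move each feature by eps in the sign of the
    gradient of the cross-entropy loss, which is (p - y) * w_i."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x = [0.6, 0.5]                       # classified as class 1 ("banana")
assert predict(w, b, x) > 0.5

x_adv = adversarial(w, b, x, y=1.0, eps=0.5)
assert predict(w, b, x_adv) < 0.5    # small nudge, prediction flipped
```

The point Hinton's quote makes is the contrast: a system this easy to steer is also one that needed thousands of examples to learn ‘banana’ in the first place.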
Hinton, one of the giants of this field, is of course referring to Plato’s problem: how can we know so much given so little (input). From the dermatology perspective, the humans may still be smarter than the current machines in the real world, but pace Hinton our training sets need not be so large. But they do need to be a lot larger than n=2. The great achievement of the 19th century clinician masters was to be able to create concepts that gathered together disparate appearances, under one ‘concept’. Remember the mantra: there is no one-to-one correspondence between diagnosis and appearance. The second problem with humans is that they need continued (and structured) practice: the natural state of clinical skills is to get worse in the absence of continued reinforcement. Entropy rules.
Will things change? Yes, but radiology will fall first, then ‘lesions’ (tumours), and then rashes — the latter I suspect after entropy has had its way with me.
Annual Review of the ‘business’ that is ed-tech by Audrey Watters.
Ed-tech is a confidence game. That’s why it’s so full of marketers and grifters and thugs. (The same goes for “tech” at large.)
“criticism and optimism are the same thing. When you criticize things, it’s because you think they can be improved. It’s the complacent person or the fanatic who’s the true pessimist, because they feel they already have the answer. It’s the people who think that things are open-ended, that things can still be changed through thought, through creativity—those are the true optimists. So I worry, sure, but it’s optimistic worry.” Jaron Lanier. We Need to Have an Honest Talk About Our Data
This is from an interview with Geoffrey Hinton who — to paraphrase Peter Medawar’s comments about Jim Watson — has something to be clever about. The article is worth reading in full, but here are a few snippets.
Now if you send in a paper that has a radically new idea, there’s no chance in hell it will get accepted, because it’s going to get some junior reviewer who doesn’t understand it. Or it’s going to get a senior reviewer who’s trying to review too many papers and doesn’t understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that’s really bad…
What we should be going for, particularly in the basic science conferences, is radically new ideas. Because we know a radically new idea in the long run is going to be much more influential than a tiny improvement. That’s I think the main downside of the fact that we’ve got this inversion now, where you’ve got a few senior guys and a gazillion young guys.
I would make a few comments:
All has been said before, I know, but no apology will be forthcoming.
In 2011, Beth Reeks, a 15-year-old Welsh schoolgirl studying for her GCSE exams, decided to write a teenage romantic novel. So she started tapping on her laptop with the kind of obsessive creative focus – and initial secrecy – that has been familiar to writers throughout history. “My parents assumed I was on Facebook or something when I was on my laptop – or I’d call up a document or internet page so it looked like I was doing homework,” she explained at a recent writers’ convention. “I wrote a lot in secret… and at night. I was obsessed.”
But Reeks took a different route: after penning eight chapters of her boy-meets-girl novel, The Kissing Booth, she posted three of them on Wattpad, an online story-sharing platform …. As comments poured in, Reeks turned to social media for more ideas. “I started a Tumblr blog and a Twitter account for my writing. I used them to promote the book…[and] respond to anyone who said they liked the story,” she explained in a recent blog post.
… while Reeks was at university studying physics, her work was turned into an ebook, then a paperback (she was offered a three-book deal by the mighty Random House) and, this year, Netflix released it as a film, which has become essential viewing for many teenage girls.
Maybe more of a theory than a law, but still:
Any eLearning tool, no matter how openly designed, will eventually become indistinguishable from a Learning Management System once a threshold of supported use-cases has been reached.
They start out small and open. Then, as more people adopt them and the tool is extended to meet the additional requirements of the growing community of users, eventually things like access management and digital rights start getting integrated. Boil the frog. Boom. LMS.
It is easy to make facile comparisons between universities, publishing, and the internet. But it is useful to explore the differences and similarities, even down to the mundane production of ‘content’.
This is from Frederic Filloux from the ever wonderful Monday Note.
The biggest mistake of news publishers is their belief that the presumed uniqueness of their content is sufficient to warrant a lifetime of customer loyalty.
The cost of news production is a justification for the price of the service; in-depth, value-added journalism is hugely expensive. I’m currently reading Bad Blood, John Carreyrou’s book about the Theranos scandal (also see Jean-Louis’s column about it last week). This investigation cost the Wall Street Journal well over a million dollars. Another example is The New York Times, which spends about $200m a year on its newsroom. The cost structure of news operations is the main reason why tech giants will never invest in this business: the economics of producing quality journalism are incompatible with the quantitative approach used in tech, which relies on Key Performance Indicators or Objectives and Key Results.
In France, marketers from the French paid-TV network Canal+ prided themselves on their subscription management: “Even death isn’t sufficient to cancel a subscription,” as one of them told me once.
Facebook accounts hacked? I thought that was the feature, not the bug.
Carrot weather — the weather app with attitude.
Two quotes from Bad Blood: Secrets and Lies in a Silicon Valley Startup, by John Carreyrou. Only without much silicon.
“Henry, you’re not a team player,” she said in an icy tone. “I think you should leave right now.” There was no mistaking what had just happened. Elizabeth wasn’t merely asking him to get out of her office. She was telling him to leave the company—immediately. Mosley had just been fired.
He also maintained that Holmes was a once-in-a-generation genius, comparing her to Newton, Einstein, Mozart, and Leonardo da Vinci.
The reality distortion field lived on. Medicine is indeed tricky.
This is a scary story. But the lesson is (yet again) our inability to understand what makes humans tick.
How Maersk was taken down by Russian malware, and how it recovered. The passage that got the attention is the bit about flying a domain controller backup in from Ghana (the only one that survived). The one that matters is that they were still running Windows 2000 on some servers and hadn’t carried out a proposed security revamp because it wasn’t in the IT managers’ KPIs and so wouldn’t help their bonuses. Link
Via Ben Evans
It is not only taxi drivers that are being “uberised” but radiologists, lawyers, contractors and accountants. All these services can now be accessed at cut rates via platforms.
The NHS became such a platform, for good and bad. That is the real lesson here. The tech is an amplifier, but the fundamentals were always about power.
One selling point of MOOCs (massive online open courses) has been that students can access courses from the world’s most famous universities. The assumption—especially in the marketing messages from major providers like Coursera and edX—is that the winners of traditional higher education will also end up the winners in the world of online courses.
But that isn’t always happening.
In fact, three of the 10 most popular courses on Coursera aren’t produced by a college or university at all, but by a company. That company—called Deeplearning.ai—is a unique provider of higher education. It is essentially built on the reputation of its founder, Andrew Ng, who teaches all five of the courses it offers so far. Link
The MOOC story is like so much of tech — or drug discovery for that matter. Finding a use for a drug invented for another reason often offers the biggest payback. This story has barely begun.
Hype is not fading, it is cracking.
I like the turn of phrase. It is from a post on the coming AI winter. Invest wisely.
A Magic Shield That Lets You Be An Assh*le? – NewCo Shift
The Internet of the 1990s was about choosing your own adventure. The Internet of the last 10 years is about somebody else choosing your adventure for you.
“They took all the trees, put ’em in a tree museum, and they charged the people a dollar and a half just to see ’em…”
“It’s quite obvious that we should stop training radiologists,” said Geoffrey Hinton, an AI luminary, in 2016. In November Andrew Ng, another superstar researcher, when discussing AI’s ability to diagnose pneumonia from chest X-rays, wondered whether “radiologists should be worried about their jobs”. Given how widely applicable machine learning seems to be, such pronouncements are bound to alarm white-collar workers, from engineers to lawyers.
The Economist’s view is (rightly) more nuanced than Hinton’s statement on this topic might suggest, but this is real. For my own branch of clinical medicine, too. The interesting thing for those concerned with medical education is whether we will see the equivalent of the Osborne effect (and I don’t mean that Osborne effect).
This is some text I recognise, but I had forgotten its source: Bruce Schneier.
Technology magnifies power in general, but the rates of adoption are different. Criminals, dissidents, the unorganized—all outliers—are more agile. They can make use of new technologies faster, and can magnify their collective power because of it. But when the already-powerful big institutions finally figured out how to use the Internet, they had more raw power to magnify.
This is true for both governments and corporations. We now know that governments all over the world are militarizing the Internet, using it for surveillance, censorship, and propaganda. Large corporations are using it to control what we can do and see, and the rise of winner-take-all distribution systems only exacerbates this.
This is the fundamental tension at the heart of the Internet, and information-based technology in general. The unempowered are more efficient at leveraging new technology, while the powerful have more raw power to leverage. These two trends lead to a battle between the quick and the strong: the quick who can make use of new power faster, and the strong who can make use of that same power more effectively.
Well, this was the modest description of a ‘new’ way to test blood. Except it wasn’t. The reality distortion field in hyperspace. If you don’t know the Theranos story — or doubt the importance of real journalism — have a look.
In this week’s privacy nightmare, an Oregon couple discovered their Amazon Echo smart speaker recorded their conversation and sent the audio to an acquaintance — without their knowledge.
The claim seemed improbable, until the company confirmed it really happened. Amazon said it was reviewing how its smart speakers work to avoid similar situations.
Exactly in the same way that Big Tobacco has been free to fill the lungs of Asian and African populations, with little interference from local health administrations, Facebook will have a free hand to lock up these markets. (If you find my comparison with the tobacco industry exaggerated, just ask the Rohingyas or people in the Philippines about the toxicity of Facebook to democracy — or read this Bloomberg Business Week piece, “What happens when the government uses Facebook as a weapon?”)
I used to think this whole topic was overblown. But then again, I once thought those who foresaw the obesity epidemic were selling something. Wrong on both counts.
Former Google Design Ethicist: Relying on Big Tech in Schools Is a ‘Race to the Bottom’ | EdSurge News
“I see this as game over unless we change course,” says Tristan Harris, a former ethicist at Google who founded the Center for Humane Technology. “Supercomputers play chess against your mind to extract the attention out of you. The stock price has to keep going up, so they point it at your kid and start extracting the attention out of them. You don’t want an extraction-based economy powered by AI, playing chess against people’s minds. We cannot win in that world.”
In an interview with EdSurge, Harris noted that the focus of their campaign started with children because they were the most vulnerable population. He says that particularly children in schools had little agency over whether they opted into or out of a technology platform because of pressure from both peers and educators handing out assignments.
Some nice turns of phrase and perspective from this article in the FT
In 1829, the great Scottish historian and essayist Thomas Carlyle wrote: “Were we required to characterise this age of ours by any single epithet, we should be tempted to call it . . . the Mechanical Age. It is the Age of Machinery . . . the age which, with its whole undivided might, forwards, teaches, and practices the greater art of adapting means to ends.”
He continued with a lament for older ways of doing and being: “On every hand, the artisan is driven from his workshop, to make room for a speedier, inanimate one. The shuttle drops from the fingers of the weaver and falls into iron fingers that ply it faster. The sailor furls his sail, and lays down his oar, and bids a strong unwearied servant . . . bear him through the waters.”
It is a measure of just how much speedier our age is that no one today will take the time to write or read such comparatively languorous prose. What is striking about Carlyle’s writing from today’s vantage point is how early in the industrial revolution he mounted a protest against it. By 1829, the steam engine was entering its heyday, but the explosion of iron, steel, coal and oil that we associate with the industrial age was visible only on the horizon.
Universities are certainly putting their courses online. The question is “why?” I talked last week with a University President whom I have known for many years and asked him why he was building online courses. His answer, unsurprisingly, was “fear.”
This is an old quote, but still redolent.
Well, just as I approached utter despair, it seems the authors of this Editorial in the Annals of Internal Medicine say no. Whew! Gee, they will soon wonder if texting patients their appointment times might occasionally be a good idea.
It is a truism that you never understand anything unless you can understand it more than one way. I like this one:
When he and his colleagues spun ClearMotion out of the Massachusetts Institute of Technology in 2008, their intention was to use bumps in the road to generate electricity. They had developed a device designed to be attached to the side of a standard shock absorber. As the suspension moved up and down, hydraulic fluid from the absorber would be forced through their device, turning a rotor that generated electricity. But, just as a generator and an electric motor are essentially the same, except that they run in opposite directions, so ClearMotion’s engineers realised that running their bump-powered generator backwards would turn it into an ideal form of suspension. And that seemed a much better line of business. They therefore designed a version in which the rotor is electrically powered and pumps hydraulic fluid rapidly into and out of the shock absorber. The effect is to level out a rough road by pushing the wheels down into dips and pulling them up over bumps.
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
Pedro Domingos, The Master Algorithm (2015)
via John Naughton
Frederic Filloux in the ever readable Monday Note. And just as Big Tobacco went for the developing world, so with Facebook.
Mark Zuckerberg talking: “There was this Deloitte study that came out the other day, that said if you could connect everyone in emerging markets, you could create more than 100 million jobs and bring a lot of people out of poverty.”
The Deloitte study, which did indeed say this, was commissioned by Facebook, based on data provided by Facebook, and was about Facebook.
I had just wanted one, but now….
The Osborne effect is described in Wikipedia as follows:
The Osborne effect is a term referring to the unintended consequences of a company announcing a future product, unaware of the risks involved or when the timing is misjudged, which ends up having a negative impact on the sales of the current product. This is often the case when a product is announced too long before its actual availability. This has the immediate effect of customers canceling or deferring orders for the current product, knowing that it will soon be obsolete, and any unexpected delays often means the new product comes to be perceived as vaporware, damaging the company’s credibility and profitability.
AI and associated technologies will have major effects in some areas of medicine. Think skin cancer diagnosis, for certain; or this weekend’s story in the FT on eye disease; and radiology and pathology. This then raises the question of whether these skills are so central to expertise within a clinical domain that students should think hard about these areas as a career. Of course, diagnosis of skin lesions is not all a clinical expert in this domain does. Ditto, ophthalmologists do more than look at retinas. Automated ECG readers have not put cardiologists out of work, after all. And many technical advances increase — not reduce — workloads.
But at some stage, people might want to start wondering if some areas of medicine are (not) going to be secure as long-term careers. The Osborne metaphor should be a warning about how messy all this could be. Hype has costs.