Universities are certainly putting their courses online. The question is “why?” I talked last week with a University President whom I have known for many years and asked him why he was building online courses. His answer, unsurprisingly, was “fear.”
This is an old quote, but it still resonates.
Well, just as I approached utter despair, it seems the authors of this editorial in the Annals of Internal Medicine say no. Whew! Gee, they will soon wonder whether texting patients their appointment times might occasionally be a good idea.
It is a truism that you never understand anything unless you can understand it more than one way. I like this one:
When he and his colleagues spun ClearMotion out of the Massachusetts Institute of Technology in 2008, their intention was to use bumps in the road to generate electricity. They had developed a device designed to be attached to the side of a standard shock absorber. As the suspension moved up and down, hydraulic fluid from the absorber would be forced through their device, turning a rotor that generated electricity. But, just as a generator and an electric motor are essentially the same, except that they run in opposite directions, so ClearMotion’s engineers realised that running their bump-powered generator backwards would turn it into an ideal form of suspension. And that seemed a much better line of business. They therefore designed a version in which the rotor is electrically powered and pumps hydraulic fluid rapidly into and out of the shock absorber. The effect is to level out a rough road by pushing the wheels down into dips and pulling them up over bumps.
“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
Pedro Domingos, The Master Algorithm (2015)
via John Naughton
Frédéric Filloux in the ever-readable Monday Note. And just as big T went for the developing world, so with FB.
Mark Zuckerberg talking: “There was this Deloitte study that came out the other day, that said if you could connect everyone in emerging markets, you could create more than 100 million jobs and bring a lot of people out of poverty.”
The Deloitte study, which did indeed say this, was commissioned by Facebook, based on data provided by Facebook, and was about Facebook.
I had just wanted one, but now….
The Osborne effect is described in Wikipedia as follows:
The Osborne effect is a term referring to the unintended consequences of a company announcing a future product, unaware of the risks involved or when the timing is misjudged, which ends up having a negative impact on the sales of the current product. This is often the case when a product is announced too long before its actual availability. This has the immediate effect of customers canceling or deferring orders for the current product, knowing that it will soon be obsolete, and any unexpected delays often mean the new product comes to be perceived as vaporware, damaging the company’s credibility and profitability.
AI and associated technologies will have major effects in some areas of medicine. Think skin cancer diagnosis, for certain; or this weekend’s story in the FT on eye disease; and radiology and pathology. This raises the question of whether these skills are so central to expertise within a clinical domain that students should think hard about these areas as a career. Of course, diagnosis of skin lesions is not all a clinical expert in this domain does. Ditto, ophthalmologists do more than look at retinas. Automated ECG readers have not put cardiologists out of work, after all. And many technical advances increase — not reduce — workloads.
But at some stage, people might want to start wondering whether some areas of medicine are (not) going to be secure as long-term careers. The Osborne metaphor should be a warning about how messy all this could be. Hype has costs.
The value of your mountains of data is becoming obvious, especially as you continue to push into new areas that collect more information about consumers while binding them closer to you, such as the home microphones you are careful to call home speakers. Link
Remember your physics 101: when is a speaker a microphone? And of course this is not all new — see the Intercept article (with great graphics) here
The internet, as digital journalist and commentator Cory Doctorow has remarked, is “an ecosystem of interruption technologies”. Always imagine that your readers are looking for a reason to — in Tinder terms — swipe left on your prose
In my professional area of medical education there is clear market failure. Publishers are simply unable, or choose not, to develop what we need. And the institutions — universities — despite their origins have simply looked the other way, ignoring the needs (and wants) of their students. And do not get me started on “lecture recording”.
The quote below is from a great article on some of the economics of book publishing: it is about a cookbook. And why not?
One of the most opaque industries around is publishing, not here online, but good old-fashioned print-books and their digital and audio spin-offs. Poke around and try to find some hard sales numbers and you’ll quickly find that it’s near impossible to do so. You can find bestseller lists from reputable sources like the NYTimes, Amazon and others but tying those rankings to an actual number of books sold at retail is simply not doable. Publishing costs, deals, and profit lines are even harder to shake loose.
“Why We Are Self Publishing the Aviary Cookbook – Lessons From the Alinea Book”. Real numbers from the opaque world of cookbook publishing. [Link]
And if you want insights into the research publishing business, here is a link to a great article on this topic by Stephen Buranyi that appeared in 2017. Mind you, I almost have a sneaking admiration for some of the crooks: foxes in the henhouse.
I like computers (see previous post), but despair of them in the clinical context of keeping medical records. By contrast, nobody sane doubts that computers are advantageous in other medical contexts: imaging, radiotherapy, or even using an insulin pump. We don’t have problems with the latter instances because, self-evidently, the computers work, and they are the result of a culture of improvement. Not so with electronic medical records, where a neutral observer might think that the purpose is to save money in one budget at the expense of diminishing clinical care in another. The economists might talk about externalities, but essentially many electronic record-keeping systems are a form of pollution of the clinical workspace.
The following quote caught my eye because, while I was in Scandinavia recently, a dermatologist from Denmark was expressing frustration with how bad their computer systems are, and how older physicians choose to ignore them by retiring early. I heard a similar tale from the US in the summer, from a dermatologist who takes a financial hit because he has not implemented electronic records. He says he can either manage patients or do IT (and yes, he is planning to get out early).
Electronic medical records (EMRs) have resulted in increased documentation burden, with physicians spending up to 2 hours on EMR-related tasks for every 1 patient-care hour. Although EMRs offer care delivery integration, they have decreased physician job satisfaction and increased physician burnout across multiple fields, including dermatology.
I would add that I have read that the average ER doc on a shift in the US clicks his mouse 4,000 times.
A long time ago, Richard Doll wrote an article pointing out that hospital record systems such as hospital activity analysis were perhaps useful to managers, but not much use for doctors or researchers. He was right, and I even published a paper saying similar things. My experience of electronic records in hospitals is that they are designed for the purpose of ‘management’ not clinical care. Contrary to what many say, these two activities have little in common, and share few goals. Our care system is not designed for care or caring, and our software is not designed for clinicians or patients. As for EMR, we are still waiting for our VisiCalc or Photoshop. If somebody can pull it off, it would be worth a Nobel.
Today (Oct. 17) was International Spreadsheet Day, marking the day back in 1979 that VisiCalc first shipped for the Apple II. Creator Dan Bricklin devised the program originally to help him crunch numbers for an assignment at Harvard Business School. [Link]
I dislike spreadsheets, and think the world will end not in fire, but in one giant bloody spreadsheet (or as a result of one). I also think they are a great metaphor for what is often wrong in medicine: an Excel spreadsheet can calculate a PASI (pissing awful psoriasis index, in lay terms) but it cannot tell you when somebody has bad psoriasis. People get confused about the epistemology here.
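Since the point is that the arithmetic is the easy bit, here is a minimal sketch of the standard PASI calculation (regional erythema, induration and desquamation scores, weighted by area and body-region fraction). The example scores are invented purely for illustration, and nothing in the calculation can tell you whether the person in front of you actually has bad psoriasis.

```python
# Minimal sketch of the standard PASI formula (not clinical code).
# For each region: severity = erythema + induration + desquamation (each 0-4),
# multiplied by an area score (0-6) and a fixed regional weight; sum over
# the four regions gives the PASI (maximum 72).

REGION_WEIGHTS = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}

def pasi(scores):
    """scores maps region -> (erythema, induration, desquamation, area_score)."""
    total = 0.0
    for region, (e, i, d, area) in scores.items():
        total += REGION_WEIGHTS[region] * (e + i + d) * area
    return round(total, 1)

# Invented example scores for a hypothetical patient.
example = {
    "head":        (2, 1, 1, 2),
    "upper_limbs": (3, 2, 2, 3),
    "trunk":       (2, 2, 1, 2),
    "lower_limbs": (3, 3, 2, 4),
}

print(pasi(example))  # 20.8 — arithmetic any spreadsheet handles happily
```

The number drops out easily enough; whether that number means the psoriasis is “bad” for this particular patient is exactly the judgement the spreadsheet cannot make.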
But these comments are a little sour. I have never really had to use spreadsheets, instead preferring to use something like R when I have need of matrices or, when I was really young, FORTRAN. And to be fair, even then I would (now) need to go via a spreadsheet / csv file to enter the data. And this ignores the fact that mostly spreadsheets are used as static tools to present multicoloured tables rather than to do calculations. But spreadsheets were, and are, revolutionary. I knew of Dan Bricklin, their inventor, but not all of the following story about how he invented them because he needed them to carry out a set assignment at Harvard Business School.
Bricklin knew all this, but he also knew that spreadsheets were needed for the exercise; he wanted an easier way to do them. It occurred to him: why not create the spreadsheets on a microcomputer? Why not design a program that would produce on a computer screen a green, glowing ledger, so that the calculations, as well as the final tabulations, would be visible to the person “crunching” the numbers?
Why not make an electronic spreadsheet, a word processor for figures?
Bricklin’s teachers at Harvard thought he was wasting his time: why would a manager want to do a spreadsheet on one of those “toy” computers? What were secretaries and accountants and the people down in DP for? But Bricklin could not be dissuaded. With a computer programmer friend from MIT named Bob Frankston, he set to work developing the first electronic spreadsheet program. It would be contained on a floppy disk and run on the then brand-new Apple personal computer. Bricklin and Frankston released VisiCalc (the name was derived from Visible Calculation) in late 1979.
There are some general points. Advances are often made by tool makers; and the best fillip for great software is a problem you personally need to solve (a point Paul Graham makes repeatedly). And of course, people who know better will not think your efforts worthwhile. Little of this is true of hospital information systems.
This is from an article discussing the difficulties of recommending content that people might like. The bigger picture is the dismal state of online journalism / news and polluters not paying. But Netflix’s understanding of how fine-grained a taxonomy has to be struck a chord with me. This is exactly the problem of diagnosis in some areas of medicine.
The latter is my favorite. Four years ago, I realized the size and scope of Netflix’s secret weapon, its suggestion system, when reading this seminal Alexis Madrigal piece in The Atlantic. Madrigal was first in revealing the number of genres, sub-genres, micro-genres used by Netflix’s descriptors for its film library: 76,897! This entails the incredible task of manually tagging every movie and generating a vast set of metadata ranging from “forbidden-love dramas” to heroes with a prominent mustache.
This reminds me of the old story about how impressed somebody was, after being shown some small computing device that could ‘think’ using powerful algorithms. The observer did, however, ask about the aircraft-hangar-sized machine that came with it: that was necessary to implement all the code for the exceptions to this universal reasoning machine, he was told.
Lots of room too, for fake news and fake diagnoses.
I dislike LMS (learning management systems). There are lots of reasons for this, but chief among them is that the ones I have seen are ugly and don’t entice. Universities are increasingly employee-facing rather than student-facing (they claim the opposite). LMS are ‘management’ tools, not tools to help you learn. Fit for widgets, not humans. When you go back and look at Gutenberg’s bible or the great illuminated manuscripts, you feel the power and pleasure of what the authors intended, transmitted via the scribe. The monks understood this — they shared the passion. The web and the nascent industry of informal learning for autodidacts is also full of great design (here is an example from Highbrow), even if it is usually designed as part of a ‘pop culture’. But not the dismal corporate LMS.
If you don’t like how careless Equifax was with your data, don’t waste your breath complaining to Equifax. Complain to your government.

Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You’re secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don’t have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations — just in case I ever decide to join. I also don’t have a Gmail account, because I don’t want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have accounts. I can’t even avoid it by choosing not to write to gmail.com addresses, because I have no way of knowing if email@example.com is hosted at Gmail.
Just last week, when faced with a report that its advertising numbers promised an American audience that, in certain demographics, well exceeded the number of such humans in existence, judging by U.S. Census Bureau numbers, Facebook told the Wall Street Journal that its numbers “are not designed to match population or census estimates. We are always working to improve our estimates.” Facebook’s intercourse with the public need not adhere to the so-called norms of so-called reality.
“It’s almost vanishingly rare that we pick a new device that we always have with us,” the historian of mobile technology Jon Agar says. “Clothes—a Paleolithic thing? Glasses? And a phone. The list is tiny.”
In 2014, Wall Street analysts attempted to identify the world’s most profitable product, and the iPhone landed in the top slot—right above Marlboro cigarettes. The iPhone is more profitable than a relentlessly marketed drug that physically addicts its customers.
The One Device, Brian Merchant
(As an aside, I was also playing with a new operating system called Linux, which Linus Torvalds had announced on the comp.os.minix newsgroup with one of those throwaway phrases that have gone down in history: “I’m doing a (free) operating system — just a hobby, won’t be big and professional…”.)
One thing about trying to put the Internet and computing in context is that you are forced to look back at the history of other communication revolutions (pace Tim Wu, John Naughton etc). It is now a well trodden path, but one I still find fascinating. Even down to the details of how the cost of distributing images or 3D moulages had an effect on my own specialty. The following caught my eye — or maybe my nose.
“When paper was embraced in Europe, it became arguably the continent’s earliest heavy industry. Fast-flowing streams (first in Fabriano, Italy, and then across the continent) powered massive drop-hammers that pounded cotton rags, which were being broken down by the ammonia from urine. The paper mills of Europe reeked, as dirty garments were pulped in a bath of human piss.”
A while back, the University of Edinburgh changed some of their guidance around passwords. In my 1Password app, I counted over 250 passwords. Some of these are old and no longer used, but the large number reflects the nature of academic life, in which information and knowledge flow more outside the institution than within it. My bugbear is of course the NHS and the practice of making people remember hard passwords and change these passwords every 3-4 weeks. This is just bad practice, and leads to people writing them down close to where they use them, or choosing more guessable passwords. Another example of bad practice is below.
Slashdot asks if password masking — replacing password characters with asterisks as you type them — is on the way out. I don’t know if that’s true, but I would be happy to see it go. Shoulder surfing, the threat it defends against, is largely nonexistent — especially with personal devices. And it is becoming harder to type in passwords on small screens and annoying interfaces. The IoT will only exacerbate this problem, and when passwords are harder to type in, users choose weaker ones.
“I’ve written extensively on the now famous Georgia Tech example of a tutorbot teaching assistant, where they swapped out one of their teaching assistants with a chatbot and none of the students noticed. In fact, they thought it was worthy of a teaching award.”
I keep reading this as ‘turbot’, and wondered what the fish thing was. I guess the tutorbot would have corrected me soon enough.
Via Bruce Schneier:
The trouble began last year when he noticed strange things happening: files went missing from his computer; his Facebook picture was changed; and texts from his daughter didn’t reach him or arrived changed. “Nobody believed me,” says Gary. “My wife and my brother thought I had lost my mind. They scheduled an appointment with a psychiatrist for me.”
But he built up a body of evidence and called in a professional cybersecurity firm. It found that his email addresses had been compromised, his phone records hacked and altered, and an entire virtual internet interface created. “All my communications were going through a man-in-the-middle unauthorised server,” he explains.
This can be read as typical Silicon Valley hype, but I think it is more right than wrong. Just as government thought computer education in schools was about using MS Office, too many in higher education think it is about copies of dismal Powerpoints online, lecture capture, or online surveillance of students and staff. The computer revolution hasn’t happened yet. Medical education is a good place to start.
What can we do to accelerate the revolution? From our observation, the computer revolution is intertwined with the education revolution (and vice versa). The next steps in both are also highly overlapped: the computer revolution needs a revolution in education, and the education revolution needs a revolution in computing.
We think that, for any topic, a good teacher and good books can provide an above threshold education. For computing, one problem is that there aren’t enough teachers who understand the subject deeply enough to teach effectively and to guide children. Perhaps we can utilize the power of the computer itself to make education better? We don’t hope to be able to replace good teachers, but can the computer be a better teacher than a bad teacher?
Interesting graphic from Audrey Watters on the bête noire that is Pearson (especially if you are an investor). But although I think I am in a minority, I think universities are wrong not to understand how the world of content will impact on their business models. What is your content like, and what do you add to it? Content is key. But it doesn’t cost 9K, at least not if you scale it right.
In the context of her research about the implications of information technology, she stated three laws:
Wikipedia. A lot of interesting links to her work. I never knew the origin.
“If we are really going to turn over our homes, our cars, our health and more to private tech companies, on a scale never imagined,” he wrote, “we need much, much stronger standards for security and privacy than now exist. Especially in the US, it’s time to stop dancing around the privacy and security issues and pass real, binding laws.
“And, if ambient technology is to become as integrated into our lives as previous technological revolutions like wood joists, steel beams and engine blocks, we need to subject it to the digital equivalent of enforceable building codes and auto safety standards. Nothing less will do. And health? The current medical device standards will have to be even tougher, while still allowing for innovation.”
Nice piece on Walt Mossberg from John Naughton.
Technology magnifies power in general, but the rates of adoption are different. Criminals, dissidents, the unorganized—all outliers—are more agile. They can make use of new technologies faster, and can magnify their collective power because of it. But when the already-powerful big institutions finally figured out how to use the Internet, they had more raw power to magnify.
This is true for both governments and corporations. We now know that governments all over the world are militarizing the Internet, using it for surveillance, censorship, and propaganda. Large corporations are using it to control what we can do and see, and the rise of winner-take-all distribution systems only exacerbates this.
This is the fundamental tension at the heart of the Internet, and information-based technology in general. The unempowered are more efficient at leveraging new technology, while the powerful have more raw power to leverage. These two trends lead to a battle between the quick and the strong: the quick who can make use of new power faster, and the strong who can make use of that same power more effectively.
Bruce Schneier, on Crooked Timber, talking around Cory Doctorow’s new novel ‘Walkaway’. Well worth reading in full, as usual.
Nice dissection of some of the issues by Ross Anderson, here.
And in today’s FT, we read
Microsoft held back from distributing a free repair for old versions of its software that could have slowed last week’s devastating ransomware attack, instead charging some customers $1,000 a year per device for protection against such threats.
Gee, a secure version of Windows? That’s extra.