A while back the University of Edinburgh changed some of its guidance around passwords. In my 1Password app, I counted over 250 passwords. Some of these are old and no longer used, but the large number reflects the nature of academic life, in which information and knowledge flow more outside the institution than within it. My bugbear is of course the NHS and the practice of making people remember hard passwords and change them every 3–4 weeks. This is just bad practice, and leads to people writing passwords down close to where they use them, or choosing more guessable ones. Another example of bad practice is below.
Slashdot asks if password masking — replacing password characters with asterisks as you type them — is on the way out. I don’t know if that’s true, but I would be happy to see it go. Shoulder surfing, the threat it defends against, is largely nonexistent — especially with personal devices. And it is becoming harder to type in passwords on small screens and annoying interfaces. The IoT will only exacerbate this problem, and when passwords are harder to type in, users choose weaker ones.
Nice paper by Erik Driessen, and much more nuanced than my quotes might suggest. Worth a read. But ‘reflection’ is still a phrase that I use as a bullshit detector.
Many students do not value reflection as a learning strategy, especially not when they are forced to use an artificial and fixed format and they have little opportunity to direct their learning as a result of these reflections.
“Few things seem to irritate doctors and medical students so much as mandatory reflection.” (Tomlinson 2015)
Twenty-five years of portfolio reveal a clear story: without mentoring, portfolios have no future and are nothing short of bureaucratic hurdles in our competency-based education program.
I do prefer the term ‘learner chart’; but why not just say ‘a record of what you did’, just like handing in a lab book? Once again, school teachers might be our guides.
A comment in Science
A well-stated hypothesis describes a state of nature. It is either true or not true, not subject to probability. The phrase “probability the hypothesis is true” is meaningless. One can only say, “likelihood that the observed data came from a population characterized by the hypothesis.”
I only post because I seem to spend my life trying to argue that the dismal null hypothesis is a tool for doing one type of statistics, and has a limited role in science. It has little to do with what we mean by a scientific hypothesis. There is not an infinite number of scientific hypotheses: there is not a probability distribution over them in the way we use that term in statistics.
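The distinction can be made concrete in a few lines. A minimal sketch in Python (the coin-flip numbers are invented for illustration): the likelihood of the data under each hypothesis is a computable quantity; ‘the probability that the hypothesis is true’ is not.

```python
from math import comb

def likelihood(k: int, n: int, p: float) -> float:
    """Likelihood of observing k successes in n trials
    if the population success rate is p — i.e. P(data | hypothesis)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Observed (invented) data: 60 heads in 100 flips.
# We can state the likelihood that these data came from a
# population characterised by p = 0.5...
l_fair = likelihood(60, 100, 0.5)

# ...and compare it with the likelihood under an alternative, p = 0.6.
l_biased = likelihood(60, 100, 0.6)

print(f"L(data | p=0.5) = {l_fair:.4f}")   # small
print(f"L(data | p=0.6) = {l_biased:.4f}")  # larger
# Neither number is 'the probability that the hypothesis is true';
# each is the probability of the observed data under a stated hypothesis.
```

Each hypothesis is simply true or not; what varies, and what we can compute, is how probable the data are under each.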
From Alan Kay. If it comes from a Turing award winner, maybe people might take notice. Perhaps not.
Everywhere I look I see stuff about how ‘unreliable’ much science might be. For instance see here, here and here. In truth most of this is about the recent paper on the lack of reproducibility of some outputs [irony intended] in psychology journals. John Ioannidis writes:
Multiple lines of evidence suggest this is a recipe for disaster, leading to a scientific literature littered with long chains of irreproducible results.
I do not have much original to add to this debate, but find none of it very surprising. Go back and read the ‘Double Helix’, or look at how David Hubel and Torsten Wiesel described how they changed the world. I think of it in the following terms.
If people use probability management as the sole marker of truth, you have left science — that is, science as the attempt to discover the deeper structures that underpin reality — behind. Science is about reliable knowledge, and if what you do is not reliable, you have not been doing science. A lot of the activity that is going on is not science. Much is cargo cult science; much is testing; much is effectively the D of R&D, where we are playing with, and abusing, Type 1 and Type 2 errors (Neyman–Pearson errors, rather than Fisher's); much is the sort of thing any large logistics or commercial company does, but they probably have more humility. And it is why I have argued that much medical research (sic) could usefully be outsourced to Amazon: it's logistics, stupid (see, for instance, Of supermarkets and medical science: getting the product lines right). The Feynman story I recall is from the Challenger enquiry: when told that the chance of an error in a certain process was 1 in 10^5, he retorted, “if a man tells me the chance of failure is 1 in 10^5, I know he is full of crap”.
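Feynman's arithmetic is easy to check. A rough sketch (the launch count here is purely illustrative; the 1-in-100,000 and roughly 1-in-100 figures are the ones discussed in the Rogers Commission material):

```python
def p_at_least_one_failure(p_per_launch: float, n_launches: int) -> float:
    """Probability of at least one failure in n independent launches,
    assuming the same per-launch failure probability each time."""
    return 1 - (1 - p_per_launch) ** n_launches

n = 100  # an illustrative number of launches

# Management's claimed odds of 1 in 100,000 per launch:
print(p_at_least_one_failure(1e-5, n))  # ~0.001: failure essentially never seen

# The engineers' estimate of roughly 1 in 100 per launch:
print(p_at_least_one_failure(1e-2, n))  # ~0.63: failure more likely than not
```

If the claimed odds were right, a whole programme of launches would almost never see a failure; the observed record tells against the claim, which is the force of Feynman's retort.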
Science relies on truth seeking; integrity; and honesty. It is not just a career. You have to have an attitude to your subject. Arrogance before men; humility before your subject (Bronowski). Most good science is iterative. You want to see the world honestly because your next bit of work depends on how you have interpreted what you did yesterday. In one sense, you publish for yourself. The hardest judge must be you and your reputation, not some anonymous reviewer. Your peers are not the people who review your papers, but those people you do not know who decide whether you are in the books 20 years later. Think of dietary observational epidemiology: how much will survive?
When publication is no longer a matter of record, but of career advancement, we are doomed to endlessly inventing ways to discover who has been debasing our currency. Institutions matter, not so much the bricks and mortar ones, but the way science is organised. John Ziman foresaw the mess we are getting into in his book Real Science, and elsewhere:
What is more, science is no longer what it was when Merton first wrote about it. The bureaucratic engine of policy is shattering the traditional normative frame. Big science has become a novel way of life, with its own conventions and practices. What price now those noble norms? Tied without tenure into a system of projects and proposals, budgets and assessments, how open, how disinterested, how self-critical, how riskily original can one afford to be?
He was of course talking about Robert Merton’s essay (J. Legal and Political Sociology 1, 115–262; 1942). He went on to say:
There is no going back to that world we have lost. Anyway, science has never had it so good as in this past half-century, and is still going great guns. But soon it will be offered a new contract with society. To renegotiate that contract with its eyes open, on even terms, science will need to understand itself much better. That understanding is going to require, not adherence to an obsolete ethos, but a sharp but sympathetic sociological self-analysis. That is the unfinished business that Merton’s little paper began.
This is where we are. We need to find such a new contract. Bruce Charlton had the right diagnosis a long time ago (Not even trying: the corruption of real science).
John Kay of all people is surely not getting things quite right. He starts the article with a complaint about the economics profession (well, who wouldn’t?) but then states:
“And I have learnt that such bias is almost as common in academia as among the viewers of Fox News: the work of John Ioannidis has shown how few scientific studies can be replicated successfully.”
What John Ioannidis studied was largely the field of RCTs and observational epidemiology in medicine — not science. RCTs are, at best, a form of technology assessment, usually with no scientific hypothesis in sight; and observational epidemiology is known to be largely bunk, because of confounding. Much pharma animal work is similarly devoid of a meaningful hypothesis. The real issue is why, in a world where technology is usually so effortless, and where much or most of that technology relies on science, so much medicine (and some other branches of natural science) is so unreliable and insecure. To understand why this is the case is the relevant question. The Mertonian norms have ended, and the currency is being actively debased. As John Kay knows, institutions matter, and it is to these we should look. A statistical hypothesis is not the same as a scientific hypothesis.
Once it seemed that people wanted to negate the hyperlink and confine us to walled gardens. Think AOL etc. Sadly, many of the organisations I am forced to deal with want us to give up what we have learned about UI, and instead spend more and more time learning how their portals work. So, if I want to record CPD, I have to use this portal; if I want to be appraised, I have to use that portal. Etc, etc. Of course, I don’t seem to have any choice: I am obliged as a condition of employment. No longer can I see students and make notes in a software system that is universal (as in a text file) and that is designed to encourage writing; instead I have to type into badly designed online data-management systems. All so that the users, the people who do the work, can be audited. And the audit gets in the way. I posted earlier about David Graeber, and he seems to have it right. I quoted Gillian Tett of the FT, herself once an anthropologist:
But even if you disagree with his politics, Graeber’s book should offer a challenge to us all. Should we just accept this bureaucracy as inevitable? Or is there a way to get rid of all those hours spent listening to bad call-centre music? Do policemen, academics, teachers and doctors really need to spend half their time filling in forms? Or can we imagine another world?
There are no easy answers. But the next time you see a bureaucratic form — and I have several sitting in my inbox right now — it is worth asking who really benefits from it? And, more importantly, who would suffer if we were to all suddenly rip them up? It is, perhaps, one of the more subtly revolutionary ideas of our age.
So what has upset me now? I gave a talk at an international meeting, one of those overview talks rather than a presentation of primary data. Kindly, the editor of a journal asked me whether I would turn it into an essay. I agreed: I had put a lot of work into the talk, and he and I both felt that some of my thoughts deserved a wider audience. All well and good, except the journal is not very OA, and my working prejudice is that I will not publish in, nor review articles for, journals unless they are predominantly OA, modestly priced, and not-for-profit (not a rule, a sort of bias). Anyway, I thought about it and carried on, knowing that at least I can put the essay on this site in its original spoken form.
Then the proofs arrive. Now, I am old enough to remember paper manuscripts double-spaced on a typewriter, then paper and PDFs, but now what do I have to wrestle with? I have to download Adobe Reader and go through a particular publisher's portal. I cannot simply amend a PDF and email it back. Nor am I allowed to use the PDF editor that sits on my machine, which is simpler and more robust than Adobe's; no, I have to alter my workflow and download software I don't like, to undertake all those tasks that publishers used to do. Finally, please, please can everybody do what my colleagues in computing science have been doing for so long: BibTeX and LaTeX, or similar, rather than all these ridiculous journal-specific font and reference styles. Maybe this is why the economists have commented that we see the ‘benefits of IT everywhere except in the productivity figures’. We have reinvented Babel. Rant over.
I was going to ignore this, but I have just chanced upon some more comments about it, so writing therapy is required. The article with the above title was written by Richard Smith and published in the THE a few weeks back. I always worry when I agree with Richard Smith's arguments, but on this topic he is largely correct:
Yet peer review persists because of vested interests. Absurdly, academic credit is measured by where people publish, holding back scientists from simply posting their studies online rather than publishing in journals. Publishers of science journals, both commercial and society, are making returns of up to 30 per cent and journals employ thousands of people. As John Maynard Keynes observed, it is impossible to convince somebody of the value of an innovation if his or her job depends on maintaining the status quo.
Scrapping peer review may sound radical, but actually by doing so we would be returning to the origins of science. Before journals existed, scientists gathered together, presented their studies and critiqued them. The web allows us to do that on a global scale.
But I think he gets a few things dangerously wrong.
First, journals have taken over ‘peer review’, forgetting that its original meaning was judgment by one's peers. It was always silly to imagine that 2–10 people judging a manuscript could provide robust scrutiny or recognition of excellence. Peer review, in the original sense, is why we recognise Watson and Crick, or Wiesel and Hubel, half a century on. Journal editors use this intellectual sleight to make their organs appear more powerful, when in point of fact many journals work outside the real community of science.
Second, he states: ‘The most cited paper in Plos Medicine, which was written by Stanford University’s John Ioannidis, shows that most published research findings are false.’ This is simply untrue. Ioannidis’s study was limited to medicine and largely concerned papers that tried to define truth using ‘probability management’. One of the reasons we are in this mess is that we have confused what I would call academic A/B testing with science. The former is, at best, technology assessment, and however important it is for medicine, it is not science — or at least not science if we define science as the attempt to produce broad and deep conceptual understanding of the natural world. You cannot do science without an attempt to produce theories about how the world works. That is why most RCTs fail as science: they are not obsessed with the structure of the world. They are frequently expensive forms of A/B testing.
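For what it is worth, the ‘A/B testing’ I have in mind is easily sketched: compare event rates in two arms and manage the error probabilities, with no model of mechanism anywhere in sight. A toy example in Python, using a normal approximation and invented numbers:

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference between two proportions
    (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Arm A: 30/200 respond; arm B: 45/200 respond (invented figures).
p = two_proportion_z(30, 200, 45, 200)
print(f"p = {p:.3f}")
# The test tells us the two arms differ; it says nothing about why.
```

Nothing here requires, or produces, a theory of how the world works; that is the point.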
Third, he states: ‘Cochrane reviews, which gather systematically all available evidence, are the highest form of scientific evidence.’ They are not the highest form of scientific evidence; nor do they systematically gather evidence (much of the evidence is in people's heads). What they frequently are is reasonable attempts to summarise domains of knowledge in which we lack critical insight and have to rely on probability management (as is often the case in medicine). You may have to do this, but nothing in a Cochrane review compares with, say, the theory of evolution, the base structure of DNA, how we define fitness, or the laws of conservation of energy. Probably the greatest series of scientific breakthroughs over the last half-century — the breakthroughs that allow the medium I write on to exist — owe nothing to Cochrane. The more you need Cochrane, the less confident you can be about what you are saying.
Rant over. I feel better already.
John Nash has died. The NYT says:
Dr. Nash was widely regarded as one of the great mathematicians of the 20th century, known for the originality of his thinking and for his fearlessness in wrestling down problems so difficult few others dared tackle them. A one-sentence letter written in support of his application to Princeton’s doctoral program in math said simply, “This man is a genius.”
Well, thank God he got through before the current tick-boxing charade crippled much of higher ed and the NHS.
Well yes, I am still moaning about having to spend all that time on a compulsory and badly designed health and safety e-learning package (why e-learning, anyway?). Well, here is a nice quote to cheer me up:
The firm that insisted staff complete an ergonomic checklist and declaration when they moved desks, then introduced ‘hot desking’ such that everyone spent 20 minutes a day filling out forms.
Also in this report was the following—more serious if you work in Higher Education:
…rules included regulations requiring universities – which already face a $280-million-a-year compliance burden – to report in detail to the federal government about how they use their lecture theatres, tutorial halls and academic offices.
The figures are for Australia and the report is here. Although I am somewhat puzzled, since it is authored by Deloitte, and I thought silly rules were what their business was all about?