Interesting interview in the FT with the African guitarist Lionel Loueke. If you like to think about learning and certification, it contains a couple of truths. The first is how technology can help. ‘Slow it down’ has helped many of us. Being able to record yourself, and then listen (a point Eric Clapton talks about), is an interesting example of how you blur the gap between private practice and the external ear provided by a teacher.
He first heard jazz when a friend played him cassettes by Wes Montgomery and George Benson. At first, Loueke didn’t even know that jazz was an improvised music. ‘I approached it like I was playing Afropop, and learnt it by ear,’ he says. ‘I slowed down the cassette by putting in weak batteries, then back to electricity to get the speed. That’s how I started jazz.’
And of course, certification has its limits, and the ‘place to learn’ is not always the classroom. Papert’s ‘mathland’, revisited.
When guitarist Lionel Loueke was a teenager in Benin, boiling precious guitar strings in vinegar to make them last, he didn’t think that one day he’d be auditioning in Los Angeles for a place at the Thelonious Monk Institute of Jazz Performance. Or that the panel of jazz professors would include Wayne Shorter, Terence Blanchard and Herbie Hancock. And certainly not that Hancock would exclaim, ‘How about we just forget about the school and I take you on the road right now?’
We deal with children as individuals when we “teach” them as children. What is wrong with MOOCs is the “massive” part. Education cannot be both massive and actual education. Learning starts with a goal followed by questions when you have trouble reaching your goal. We each have our own questions and our own goals.
Clark is wrong about MOOCs because the very concept of massive education is oxymoronic. Education is only massive because we have created a world of schools that include classrooms and not enough teachers to do one-on-one education. MOOCs are an extension of a very bad educational idea called lecturing. We have come to accept lecturing because it is everywhere and we all had to endure it.
Roger Schank. Worth reading in full.
One of my mantras is that unless we do the online better, we cannot make use of the offline opportunities. Online should allow us to make better use of the bedside. The following are some quotes from an FT article on MBA degrees.
“The great thing about a virtual classroom is that your students are already in a digital format, which means you can run algorithms that recognise patterns in facial expressions to assess understanding and identify students’ emotional state and levels of attention in your class,” says Prof Boehm. Analytics can be used in real time to address students whose attention is wandering or later to improve teaching plans or faculty performance, he adds.
Teaching staff also find students to be more engaged in the virtual classroom. “Because of the way students are positioned on the wall, a headshot from the chest up, it’s very difficult for them to text on their phones or work on their PCs,” says Liz Hess, managing director of HBX. “It’s very easy for faculty to see if people are distracted — they joke that there’s no back row any more.”
The technology looks terrific in the images shown. But there are other factors at play. Note the group sizes are small in comparison with what many undergraduates receive, and the investment in technology is focussed on those who pay most (upfront). If you look at the money apparently going into medical education, this should be the norm for most undergraduate medical students.
There is a good piece on Wonkhe by David Morris, dealing with the issue of how research and teaching are related, and the dearth of empirical support for any positive relation between the two. R & T are related at the highest level — some universities can do doctoral research and teaching well — and although I have little direct experience, the same can apply at Masters level. The problems arise at undergraduate level, the level at which most universities compete, and which accounts for the majority of teaching income. As ever, I think we have to think ecology, variation and the long now. What seems clear to me is that research is indeed often at the expense of teaching, and that the status quo needs to be changed if universities are to continue to attract public (and political) support. Cross subsidies and the empty rhetoric of ‘research led teaching’ do not address what are structural issues in Higher Ed, issues that have been getting worse, driven by poor leadership over many decades.
For many universities this is a pizza and / or pasta issue: some of us like both. Just because the two show little covariation in ecological data, does not mean that they shouldn’t inform each other much better than they have over the recent past. On the other hand, scale and education are unhappy bedfellows, and staff time and attention matter. Do you really think about teaching the same way you approach research? If T & R do not covary, then are your students in the best place, and why did you admit them? Honest answers please.
I have written before about the problems that learning outcomes fail to deal with (Jorge Luis Borges and learning outcomes), but there is another cognate issue that bugs me from time to time. The following is a quote from The Undercover Economist in the FT.
Should the rules and targets we set up be precise, clear and sophisticated? Or should they be vague, ambiguous and crude? I used to think that the answer was obvious — who would favour ambiguity over clarity? Now I am not so sure. Ponder the scandal that engulfed Volkswagen in late 2015, when it emerged that the company had been cheating on US emissions tests. What made such cheating possible was the fact that the tests were absurdly predictable — a series of pre-determined manoeuvres on a treadmill.
I think there is a very clear downside to making too precise what it is that students should learn. I actually think we need noise in the system, simply because assessment methods are imperfect and unnatural, and the more you seek particular psychometric qualities the greater the distance between what is important and what and how you test. This is not a popular position to take, and I confess I am one of the worst offenders in terms of producing tightly defined content. But if:
assessment drives learning
then I would add, to make a couplet
assessment wrecks learning
Nice piece in ‘Science’ with the title: ‘No easy answers: What does it mean to ask whether a prekindergarten math program “works”?’ Geoff Norman, many years ago, used the term RCT in the context of medical education to stand for Randomised, Confounded and Trivial. Research into what works and what does not work in education is hard, and most studies (IMHO) fail to inform. Education isn’t a product like a drug is, and gee it is hard to demonstrate when and where most drugs will work if you do not have an understanding of the biology and large effects to play with and outcomes that need to be measured over the long term.
I think about this a lot, but have no easy rules to guide action. Which is, of course, exactly the problem.
From Audrey Watters’ excellent round-up of the year that was:
I think it’s safe to say, for example, that venture capital investment has fallen off rather precipitously this year. True, 2015 was a record-breaking year for ed-tech funding – over $4 billion by my calculations. But it appears that the massive growth that the sector has experienced since 2010 stopped this year. Funding has shrunk. A lot. The total dollars invested in 2016 are off by about $2 billion from this time last year; the number of deals are down by a third; and the number of acquisitions are off by about 20%.
To the entrepreneur who wrote the Techcrunch op-ed in August that ed-tech is “2017’s big, untapped and safe investor opportunity.” You are a fool. A dangerous, exploitative one at that.
Lots of good reasons for this, but surely the main one is that the products are so awful. It is a big domain of human activity, although whether it is a market I will leave for the moment. But people may prefer to spend their money on something that works. And that is before we mention the LMS. Of course we can just sell our students’ user data….
Lots more good stuff from her here, although a stiff drink may be seasonally appropriate.
I am often accused of being too cynical. Events in 2016 have not persuaded me that my approach was the wrong one. But I like this quote, which is new to me:
As George Carlin said: “Scratch any cynic and you will find a disappointed idealist.” Guilty as charged: I was an idealist and remain one, well, sort of. Tim Wu interviewed by John Naughton
Tim Wu’s last book, “The Master Switch: The Rise and Fall of Information Empires”, was magisterial, and of interest to anybody interested in education, tech and the web. His latest, ‘The Attention Merchants’, has not been published in the UK yet, but tracks the relation between attention and advertising. In the interview Naughton reprises one of Herb Simon’s great insights:
The cue for his new book, The Attention Merchants, is an observation the Nobel prize-winning economist Herbert Simon made in 1971. “In an information-rich world,” Simon wrote, “the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”
Anybody who looks at how students want to learn, and the fractured landscape of content and ‘learning behaviour’, needs to think hard about this topic.
At the risk of raising the ire of many researchers, I should note that I am not basing my assessment on the rapid growth in educational neuroscience. You know, the kind of study where a subject is slid into an fMRI machine and asked to solve math puzzles. Those studies are valuable, but at the present stage they provide at most tentative clues about how people learn, and little specific in terms of how to help people learn. (A good analogy would be trying to diagnose an engine fault in a car by moving a thermometer over the hood.) One day, educational neuroscience may provide a solid basis for education the way, say, the modern theory of genetics advanced medical practice. But not yet.
Keith Devlin, talking sense — again. I want to believe the rest of the article but worry it may not be so. But it contains some gems:
Classroom studies invariably end up as studies of the teacher as much as of the students, and often measure the effect of the students’ home environment rather than what goes on in the classroom.
This just adds to the problem that Geoff Norman (DOI 10.1007/s10459-016-9705-6) and others have talked about in course evaluations, namely that many studies — even accepting the limitations outlined above — are riddled with pseudoreplication.
What is missing is any insight into what is actually going on in the student’s mind — something that can be very different from what the evidence shows, as was dramatically illustrated for mathematics learning several decades ago.
But, like many outwith medicine, I think he puts too much store by the robustness of the RCT approach — even with digital tools to allow large scale measurement. RCT: ‘randomised, confounded and trivial’, as has been said before (Norman).
His introduction is at 1:10 and his talk begins at 25:00. I would skip the Andreas Schleicher (aka Mr PISA) talk, although Roger has something to say about testing. His style of presentation may make you think he exaggerates.
A nice way to end the year.
Video is cool. Text isn’t.
Houston, we have a problem.
I went to the OEB meeting for the first time this year. I was not certain how much I would like it, but found it really enjoyable. Not a meeting I would go to each year but, if you are interested in teaching and learning in the broadest sense, it is well worth a visit. I would go again.
One of the sessions I enjoyed most was a fairly small concurrent session with the title ‘The value and the price: discussing Open Online Courses’, chaired by Brian Mulligan (IoT, Sligo), and with panellists Stephen Downes (NRC, Canada), Nina Huntermann (edX), Diana Laurillard (UCL), and Konstantin Scheller (European Commission). It was all wonderfully informal, with not too many people there and plenty of time for questions and discussion. I got involved too, rather than just listening. The discussion ranged widely over MOOCs (c or x), online learning, ‘conventional teaching and learning’ and other topics, but that is to be expected. You cannot discuss online learning without thinking about offline learning; you cannot discuss new tech without discussing old tech; you cannot discuss scale without discussing one-to-one; you cannot discuss value without talking about money and non-money.
I didn’t take notes but the thoughts going round in my head (prompted no doubt by the panel) were:
I am at the OEB16 meeting in Berlin. As ever, thoughts cross. If you go around the stands you have to ask about the relation between learning — a personal act — and all that you can sell to go with it. Software, hardware etc. And you have to wonder about the balance of goods and dreams.
Marcia Angell reviews Alison Gopnik’s ‘The Gardener and the Carpenter’ in the NYRB (November 2016).
‘Among the book’s strengths is that Gopnik leaves no doubt about where she stands on the peculiarly American way of leaving families on their own in an increasingly unequal society. “Middle-class parents are consumed by the pressure to acquire parenting expertise,” she writes.
“They spend literally billions of dollars on parenting advice and equipment. But at the same time, the social institutions of the US, the originator and epicenter of parenting, provide less support to children than those of any other developed country. The US, where all those parenting books are sold, also has the highest rates of infant mortality and child poverty in the developed world.”
Another great video of Alan Kay, explaining how intellectual revolutions occur (‘appoint people who are not amenable to management’).
“You have the MOOCs and bla-bla-bla – you can quote me on that,” he says, laughing, “but the real revolution that has happened is in YouTube, Wikipedia, Minecraft, and people publishing things on the internet.”
I think MOOCs are interesting, mainly because of the light they cast on dated and inadequate models of university mass education, but he is right.
Mark Surman of Mozilla.
Undergraduates frequently complain that they don’t have enough contact hours. But a major study in the UK suggests that students develop skills better out of – rather than in – the classroom.
I do not find this claim too surprising, but if you read the article and go to the HEA ‘engagement report’, I find it hard to know how they claim to have established this as a fact. Confounding and hidden variables all around. But as has been said before, classroom learning has the air of an oxymoron. Contact hours are both relevant and irrelevant. Context matters most.
There is a nice piece by Nassim Nicholas Taleb on Medium. It is from a foreword to a book (I think) on physical / strength training. If you have read Taleb you will know this is not too surprising.
You will never get an idea of the strength of a bridge by driving several hundred cars on it, making sure they are all of different colors and makes, which would correspond to representative traffic. No, an engineer would subject it instead to a few multi-ton vehicles. You may not thus map all the risks, as heavy trucks will not show material fatigue, but you can get a solid picture of the overall safety.
Likewise, to train pilots, we do not make them spend time on the tarmac flirting with flight attendants, then switch the autopilot on and start daydreaming about vacations, thinking about mortgages or meditating about corporate airline intrigues — which represent about the bulk of the life of a pilot. We make pilots learn from storms, difficult landings, and intricate situations — again, from the tails.
In one sense he is saying something that is easy to agree with. But if you delve a little deeper, it is not what we always do in medical education.
The structures we create to enable learning in a clinical discipline are not mirrors of what goes on in the real world. Pace the airline example. We shouldn’t expect teaching time to mirror disease prevalence; we don’t spend most of our time in dermatology teaching students about viral warts, or dandruff, or toxic erythema. When you try to recognise objects, you do not just study those particular objects. Rather, you have to study all the other objects. If you want to be able to ‘call out’ whenever you see a dog, you have to study cats. And chimps, and wolves and so on. This is one of the reasons why just learning about the top ten conditions makes little sense, if acts of recognition are involved. Most things are defined by what they are not. To think in the box, you have to know what is outside the box. This is what makes medical education a hard problem.
There are implications for clinical practice for the expert, too. Everyday practice appears to minimise the role of the statistical tails. Your learning about common conditions may be ‘everyday stuff’ requiring little formal study. But for rare conditions, or odd presentations of common conditions, everyday practice may not be sufficient — simply put, you do not see rare events frequently enough to consolidate and strengthen your memories. Everyday practice rarely provides enough critical mass, you might say. A practical example:
When I was a trainee in Newcastle, if we saw an ‘interesting patient’ or a patient in whom the diagnosis was unclear, we pressed a buzzer. The buzzer and flashing light went off in all the clinic rooms, the laboratories, the professor’s office and the seminar room. What happened then resembled the Stepford Wives. All descended on the particular clinic room, as though under some malign influence. There were times when this was quite funny, although some patients might have told this differently.
This simple tool was just an implementation of another one of Rees’s rules: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice. This seems like a paradox, but it isn’t. “A sailor gets to know the sea only after he has waded ashore.” Rather, I always view it as a solution to the forgetting curve that Ebbinghaus described (although I think there may be other justifications).
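The forgetting-curve point can be made concrete. A minimal sketch, assuming Ebbinghaus’s simple exponential form R = exp(−t/S); the stability value used here is purely illustrative, not an empirical estimate:

```python
import math

def retention(days_elapsed, stability):
    """Ebbinghaus-style exponential forgetting: R = exp(-t/S).

    'stability' (S) is how resistant the memory is to decay; it grows
    with repeated exposure. The value below is illustrative only.
    """
    return math.exp(-days_elapsed / stability)

# A common condition seen every week: memory is refreshed before it decays far.
common = retention(days_elapsed=7, stability=30)

# A rare condition seen perhaps once in two years: near-total decay between cases.
rare = retention(days_elapsed=730, stability=30)

print(f"retention after 1 week:  {common:.2f}")   # ~0.79
print(f"retention after 2 years: {rare:.2f}")     # ~0.00
```

On these assumptions, routine practice keeps common material topped up almost for free, while rare material decays to nothing between encounters — which is exactly the gap the buzzer was filling.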
There is a simple learning point here. The acquisition or maintenance of clinical competence requires much more than seeing patients (and by this, I do not just mean reading research papers). Software, and virtual worlds that we control, might help. But the Rees maxim remains: routine clinical practice is not sufficient to consolidate or acquire the skills you need to provide routine clinical practice.
“Imagine if we taught baseball the way we teach science. Until they were twelve, children would read about baseball technique and history, and occasionally hear inspirational stories of the great baseball players. They would fill out quizzes about baseball rules. College undergraduates might be allowed, under strict supervision, to reproduce famous historic baseball plays. But only in the second or third year of graduate school, would they, at last, actually get to play a game. If we taught baseball this way, we might expect about the same degree of success in the Little League World Series that we currently see in our children’s science scores.”
‘You Americans have the best high school education in the world. What a pity you have to go to college to get it.’ In Alan Kay
‘People pay a lot for a great education now, but you can become expert level on most things by looking at your phone.’
It is just nice to see it in black and white. So simple. BTW, the quote came via the wiser (not ‘smarter’) Nick Carr, who commented: ‘By “a really good life,” Altman means a virtual reality headset and an opioid prescription’.
I have been busy updating some teaching stuff. It is never finished but there is time for a little pause. I have completed all the SoundCloud audio answers to the questions in ed.derm.101 (Part C) and there is a ‘completed’ version of ed.derm.101 Part C half way down the linked page. Not all the links have been checked, and a lot had to be redone because the superb New Zealand Dermnet site changed their design (the best source of dermatology images, IMHO). An example of the sort of audio material is below.
Apart from money, that is.
HigherEd is awash with rankings. Governments like them, and so do publishers. Just look at the THE, with its myriad of bullshit scores. The allure of bogus numbers, over judgment. A feel-good frenzy of metrics. When rankings of US colleges first came in, the assessors used to actually live on campus for a while, go to lectures, and talk to students. There was an attempt at face validity. Not any more. All you need is GIGO data, and you can sell it, or use it to buy power and kickbacks like the politicians. There was even a time when the notion of common sense mattered, but then came the RAE/REF. Then the TEF. Larry Lessig’s comment is worth repeating, again.
The best example of this, and I am sure many of you are familiar with it, is the tyranny of counting in the British educational system for academics, where everything is a function of how many pages you produce that get published by journals. So your whole scholarship is around this metric which is about counting something which is relatively easy to count. All of us have the sense that this can’t be right. That can’t be the way to think about what is contributing to good scholarship.
Anyway, at the back of last week’s Economist I came across a single page advert in the ‘Courses’ section, about IMD (shown below).
I suspect I would have passed over it, except that I used to meet up from time to time with a Professor of finance who lived in Edinburgh and worked at IMD. We had many conversations about teaching, and what impressed me was the focus on ‘education and teaching’ and thinking hard how to do it better. There is a lot about MBA programs that I do not understand, and a fair bit I am suspicious of, but I have little doubt they offer something medicine could learn from.
It piqued my interest enough to follow it up, so here is some more text from the IMD page.
At the end of September, we were informed that IMD would be included in the 2016 MBA ranking, despite the Economist’s initial agreement. This is surprising as we had not supplied any information and our participants and alumni had not been surveyed for this ranking. This contradicts the paper’s statistical method, which requires a minimum 25% survey response rate to be ranked.
Needless to say, IMD has serious reservations regarding the Economist’s methodology and its outcomes. In 2015, relative to the previous year’s ranking, LBS & IESE fell 9 ranks, IMD fell 11, and ESMT fell 23. Meanwhile, IE, Warwick and Macquarie all jumped up 19 places. As a result, Queensland, Warwick, Henley were ranked better than Cornell, London Business School, Carnegie Mellon and IMD!
They then go on to argue that the Economist ranking is scale dependent, and will discriminate against small schools (and, as we know, small universities like Caltech are terrible…). They finish off with:
Again, IMD was not surveyed for the 2016 ranking and did not actively participate. Given this, we do not know which data the Economist will use to establish our position. What we do know is that the Economist ranking has just lost its last bit of credibility. Unfortunately, there is little IMD can do to stop the Economist from proceeding.
Says it all. Wake up. At least the Economist accepted the money (for the advert).
In my ‘online’ textbook of ‘rashes’ (ed.derm.101), that is the non-cancer bits of dermatology, I have a chapter called ragbag. I used to have two ragbag chapters, but now by combining them, the subject has been made simpler and easier, even though the content is unchanged. I am sure students agree. I put in this chapter all the things I do not put elsewhere. I thought I should now fess up a little, as Gödel would not have said.
In ‘John Wilkins’ Analytical Language’, Borges refers to the work of Franz Kuhn on the Chinese encyclopaedia, the ‘Heavenly Emporium of Benevolent Knowledge’ (Jorge Luis Borges, Selected Non-fictions, ISBN 978-0-14-029011-0). This work is a classification of the world. For instance, you can classify the world into various groupings or categories. For animals we have:
All very straightforward stuff, as any medical student will agree. I also like the system proposed by the Bibliographical Institute of Brussels (after Borges). It parcelled the universe into 1000 subdivisions with number 262 corresponding to the Pope, 268 to Sunday Schools, 298 to Mormonism, and 179 to cruelty to animals, duelling and suicide.
We have lots of similar systems in medicine, some of which seem less fit for purpose than those described above. My incomplete categorisation of categorisation (the grant was turned down) is as follows:
Dreyfus and Dreyfus, the US philosophers and students of AI, pointed out that although we like to use rule-based systems in teaching, experts quickly forget them, and do not appear to use them (From Socrates to Expert Systems: The Limits of Calculative Rationality, 1984). We just inflict them on the young either because that is the only way we know how to encourage learning, or because we repeat what happened to us earlier in our career without good reason. Quoting Dreyfus and Dreyfus:
The beginning student wants to do a good job, but lacking any coherent sense of the overall task, he judges his performance mainly by how well he follows his learned rules. After he has acquired more than just a few rules, so much concentration is required that his capacity to talk or listen to advice is severely limited.
They were not writing about medical students. But I recognise what is going on. I can remember it too. Classifications may or may not be useful in learning a subject, and in chunking, may provide a guide to action. But the more experience you get, the less the clinician uses them. Academics, of course, play by different rules, and get to write the rules.
Marvin Minsky once quipped “Every educational reform is doomed to succeed”. He meant “with some students”.
“If I had to reduce all of educational psychology to just one principle, I would say this: The most important single factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly.”(Ausubel, 1968 p. vi). Via Dylan Wiliam.
When I was a child, growing up in Wales, my father would express puzzlement that I didn’t seem to know how to pronounce certain words. He didn’t get that since Welsh was his equal first tongue — but not mine— knowing how you pronounce Welsh words was obvious to him, but not to me. For my part, it was only scores of years later that I realised some of his verbal mannerisms were not just odd idiosyncratic English or slang, but Welsh, although the meaning was clear to me. I had just not realised these were Welsh words or phrases, and of course I too would use them.
I have noted in the past that when students mispronounce some of these dreadful dermatological terms, it was a signal that they had read about a disease, but had never been taught on that disease. It signalled to me how much they were acquiring on their own. English is like that, certainly in comparison with German: until you hear a word spoken, guessing how you say it is tricky. More so when you chuck in the various languages that contribute to the dermatological lexicon — and when they are then spoken / bastardised by English speakers.
But today a student pointed out that it would be helpful to include how words are pronounced in our course material. I am not certain how to do this yet, but I can believe that not knowing how to pronounce a term might ‘inhibit’ thinking and ‘silent talking’ about the topic (I do not know whether there is any research to back this opinion up).
I wrote a post on this topic a while back, trying to map out the territory of funds going in and out of undergraduate medical education. It was a bit too rambling, but more thought out, I feel, than Jeremy Hunt’s latest slant on statistics that the Guardian (and others) reported. So here are some bullet-like points on this issue, together with some questions. The backdrop is the article I wrote before, and the issue of ‘we paid for their training so we can seize their passports’ (Phil Hammond’s ‘Hotel California clause’: ‘you can check in but you can never leave’). And because somebody asked me to spell things out a little more.
In England medical students pay 9K fees. HEFCE (Higher Education funding) add another 10K. Let’s round up and call it 20K. HEFCE also adds money beyond fees for other expensive degrees (engineering, for example), although I do not know if it is 10K or less. This 20K goes through the universities.
Medical students do not pay their final year fees in England, but must meet all their living costs, and 9K fees for the other 4 or 5 years. Government loans attract interest and, as others have commented, the government alters the conditions in a way that would be illegal for any bank (Gee! The government makes even the bankers look like saints). So say 40-45K fees, plus living costs. I doubt there would be much change from 100-120K. The money attracts interest and will be much larger by the time it is paid back, and will also feed into the debt of students who do not earn enough to pay back their fees.
The other funding stream is via the NHS. In England this is called SIFT, in Scotland it is called ACT. This is probably in the region of 20K per student per clinical year, and is designed to meet the costs of the ‘students on the wards’ and pay for all the NHS staff time for those involved in teaching. This money stays within the NHS, and the universities have essentially no access to it.
If you add these two streams together you are talking about close to 30K of ‘state funding’ plus 10K from the students. Living expenses are on top, and I will ignore the opportunity costs of what students defer from earning.
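The back-of-the-envelope arithmetic can be laid out explicitly. A rough sketch using the approximate per-year figures quoted above (these are this post’s estimates in thousands of pounds, not audited costings, and SIFT/ACT applies only to the clinical years):

```python
# Approximate per-student funding streams for an English medical degree,
# per year, in thousands of GBP. All figures are the rough estimates
# quoted in this post, not official numbers.
student_fees = 9      # paid by the student (via government loans)
hefce_top_up = 10     # HEFCE teaching grant, routed through the university
nhs_sift = 20         # SIFT (ACT in Scotland), clinical years, stays in the NHS

state_funding = hefce_top_up + nhs_sift        # ~30K of 'state funding'
total_per_year = state_funding + student_fees  # ~39K, call it 40K

print(f"state funding per clinical year: ~{state_funding}K")
print(f"total per clinical year:         ~{total_per_year}K")
```

That ~40K per clinical year is the figure that invites the comparison with Stanford’s fees: the headline sums are similar, the student experience is not.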
The problem with the 30K state funding figure is it fails the reality test. These sums add up to a figure (40K) close to what Stanford charges its small medical student cohort, and yet it is clear that our UK medical students get a much worse deal. Or just compare what this sort of money buys you at an expensive private school. There is a (fat) rabbit off somewhere. Nobody with any knowledge of medical education, and who isn’t playing politics, believes that is what we spend on each of our students.
Above, I said 20K goes through the universities, but I did not say that universities spend that 20K on delivering undergraduate teaching. The obvious issue is that medical research is big business, and most research in general loses an institution money — this is especially true of charity funded research, the main funder of medical research in the UK (although there is an attempt to make up this deficit from QR funds, it is grossly inadequate). Peacock’s tail, and all that. So, teaching fees are used to subsidise this loss. There are good costings for this in some US schools, but they use endowments to meet the costs; in the UK we get students to pay for this. To what extent? I do not know. Do not ask, is the mantra. This will run and run. And then unwind.
What about the NHS money? Well, nothing is transparent in the NHS, but we know most of this money is not used to support teaching, but siphoned off to pay for clinical care. What proportion? I would start by saying 70% (i.e. only 30% goes on what it is intended for). So I think 18K over the whole course. But I know of no convincing data in this area, just the sort of bumph Hansard repeats, which is not reality based. Do not ask, is again the mantra.
My previous post added in some complexities. And there are more that I have not mentioned.
The key points are:
Anyway you can still listen….
‘Innovative’ educational practice is more fashion-driven than those who attend the catwalks are.