This article (‘Humans may not always grasp why AIs act’) in the Economist gets to the right answer, but by way of a silly example involving brain scanning. The issue is that people are alarmed that it may not be possible to understand how an AI comes to a certain decision. The article rightly points out that we have the same problem with humans. This issue looms large in medicine, where many clinicians believe they can always explain to students how they arrive at the correct answer. The following is one of my favourite Geoff Norman quotes:
Furthermore, diagnostic success may be a result of processes that can never be described by the clinician. If the right diagnosis arises from pattern recognition, clinicians are unlikely to be able to tell you why they thought the patient had gout, any more than we can say how we recognize that the person on the street corner is our son. Bowen claims that “strong diagnosticians can generally readily expand on their thinking”; I believe, instead, that strong diagnosticians can tell a credible story about how they might have been thinking, but no one, themselves included, can really be sure that it is an accurate depiction.
We are Strangers to Ourselves, as Timothy Wilson put it.