Categories
Publications (EN)

Expertise, opacity, and trust in AI systems

Boisseau, É. Expertise, opacity, and trust in AI systems, Synthese 207, 104 (2026). https://doi.org/10.1007/s11229-026-05484-2

Read the full article here (or email me for the pdf version): https://rdcu.be/e5NrU

Abstract

This paper critically examines a family of arguments by analogy which suggest that the trust granted to an AI system should mirror the one usually granted by a layperson to a human expert. I particularly dispute the idea that, on the grounds that both share some degree of opacity, human experts and AI systems can be considered epistemic ‘black boxes’ and both be subject to the same blind trust on the part of non-experts. To uncover the problematic nature of this rather widespread analogy, I proceed by identifying a form of ambivalence in the notion of opacity mobilised, as well as a number of highly charged presuppositions concerning expertise (notably relating to a kind of obsession with what is sometimes dubbed its ‘veritistic’ character). I suggest that such a reductionist, monomaniacal conception of expertise is flawed, negligent or inadequate. The other forgotten facets of expertise are not merely cosmetic, but constitutive of it. I show that artificial systems cannot instantiate them, and that we cannot expect them to ever do so.


Imitation and Large Language Models

Boisseau, É. Imitation and Large Language Models. Minds and Machines 34, 42 (2024). https://doi.org/10.1007/s11023-024-09698-6

Read the full article here (or email me for the pdf version): https://rdcu.be/dWvcH

Abstract

The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular—for what is the focus of this paper—the cognitive capacities of large language models (LLMs). The question whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see this concept of imitation being mobilised here for sometimes contradictory purposes. After illustrating and discussing how this concept is being used in various ways in the context of conversational systems, I draw a sketch of the different associations that the term ‘imitation’ conveys and distinguish two main senses of the notion. The first one is what I call the ‘imitative behaviour’ and the second is what I call the ‘status of imitation’. I then highlight and untangle some conceptual difficulties with these two senses and conclude that neither of these applies to LLMs. Finally, I introduce an appropriate description that I call ‘imitation manufacturing’. All this ultimately helps me to explore a radical negative answer to the question of machine understanding.


The Metonymical Trap

Boisseau, É. ‘The Metonymical Trap’, in Alice C. Helliwell, Alessandro Rossi, and Brian Ball (eds), Wittgenstein and Artificial Intelligence, vol. 1: Mind and Language, Anthem Press, pp. 85–104, 2024.

Abstract

In this chapter, I discuss and evaluate the question of the attribution of predicates to machines. Specifically, I address the question of the literal or metonymic nature of such attributions. In order to do so, I distinguish between what I call ‘physical’ or ‘natural’ predicates on the one hand, and ‘intellectual’ or ‘cognitive’ predicates on the other hand. I argue that while the former can be ascribed indistinguishably and literally both to a human and a non-human agent, the latter can only be ascribed literally to human agents. I then suggest that there is a risk of falling into what I call the ‘metonymical trap’, which consists in forgetting that the ascription of cognitive predicates to machines is only a derivative one, and therefore in taking the machine used to perform an action for the literal subject of that action.

Publisher website