Categories
Publications (EN)

Imitation and Large Language Models

Boisseau, É. Imitation and Large Language Models. Minds & Machines 34, 42 (2024). https://doi.org/10.1007/s11023-024-09698-6
Read the full article here (or email me for the PDF version): https://rdcu.be/dWvcH

Abstract:

The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular—for what is the focus of this paper—the cognitive capacities of large language models (LLMs). The question whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see this concept of imitation being mobilised here for sometimes contradictory purposes. After illustrating and discussing how this concept is being used in various ways in the context of conversational systems, I draw a sketch of the different associations that the term ‘imitation’ conveys and distinguish two main senses of the notion. The first one is what I call the ‘imitative behaviour’ and the second is what I call the ‘status of imitation’. I then highlight and untangle some conceptual difficulties with these two senses and conclude that neither of these applies to LLMs. Finally, I introduce an appropriate description that I call ‘imitation manufacturing’. All this ultimately helps me to explore a radical negative answer to the question of machine understanding.


‘The Metonymical Trap’

É. Boisseau, ‘The Metonymical Trap’, in Alice C. Helliwell, Alessandro Rossi, Brian Ball (eds), Wittgenstein and Artificial Intelligence, vol. 1 Mind and Language, Anthem Press, pp. 85-104, 2024.

Abstract:

In this chapter, I discuss and evaluate the question of the attribution of predicates to machines. Specifically, I address the question of the literal or metonymic nature of such attributions. In order to do so, I distinguish between what I call ‘physical’ or ‘natural’ predicates on the one hand, and ‘intellectual’ or ‘cognitive’ predicates on the other hand. I argue that while the former can be ascribed indistinguishably and literally both to a human and a non-human agent, the latter can only be ascribed literally to human agents. I then suggest that there is a risk of falling into what I call the ‘metonymical trap’, which consists in forgetting that the ascription of cognitive predicates to machines is only a derivative one and of therefore taking the machine used to perform an action for the literal subject of that action.

Publisher website