
Imitation and Large Language Models

Boisseau, É. Imitation and Large Language Models. Minds & Machines 34, 42 (2024). https://doi.org/10.1007/s11023-024-09698-6
Read the full article here (or email me for the pdf version): https://rdcu.be/dWvcH

Abstract:

The concept of imitation is both ubiquitous and curiously under-analysed in theoretical discussions about the cognitive powers and capacities of machines, and in particular—for what is the focus of this paper—the cognitive capacities of large language models (LLMs). The question of whether LLMs understand what they say and what is said to them, for instance, is a disputed one, and it is striking to see the concept of imitation being mobilised here for sometimes contradictory purposes. After illustrating and discussing how this concept is used in various ways in the context of conversational systems, I sketch the different associations that the term ‘imitation’ conveys and distinguish two main senses of the notion: what I call ‘imitative behaviour’ and what I call the ‘status of imitation’. I then highlight and untangle some conceptual difficulties with these two senses and conclude that neither applies to LLMs. Finally, I introduce a more appropriate description, which I call ‘imitation manufacturing’. All this ultimately helps me to explore a radical negative answer to the question of machine understanding.