26.09.2024 - Institute Seminar — 12:00
Daniel Rothschild (University College London)
Abstract (author's own):
Daniel Dennett speculated in Kinds of Minds (1996):
“Perhaps the kind of mind you get when you add language to it is so different from the kind of mind you can have without language that calling them both minds is a mistake.”
Recent work in AI can be seen as testing Dennett’s thesis by comparing the performance of AI systems with and without linguistic training. I will argue that the success of Large Language Models at inferential reasoning, limited though it is, supports Dennett’s radical view about the effect of language on thought. I suggest that the abstractness and efficiency of linguistic encoding lie behind the capacity of LLMs to perform inferences across a wide range of domains. In a slogan: language makes inference computationally tractable. I will assess what these results in AI indicate about the role of language in the workings of our own biological minds.