Theory of Mind May Have Spontaneously Emerged in Large Language Models
Abstract
Theory of mind (ToM), or the ability to impute unobservable mental states to others, is central to human social interactions, communication, empathy, self-consciousness, and morality. We tested several language models using 40 classic false-belief tasks widely used to test ToM in humans. The models published before 2020 showed virtually no ability to solve ToM tasks. Yet the first version of GPT-3 ("davinci-001"), published in May 2020, solved about 40% of false-belief tasks, performance comparable with that of 3.5-year-old children. Its second version ("davinci-002"; January 2022) solved 70% of false-belief tasks, performance comparable with that of six-year-olds. Its most recent version, GPT-3.5 ("davinci-003"; November 2022), solved 90% of false-belief tasks, at the level of seven-year-olds. GPT-4, published in March 2023, solved nearly all the tasks (95%). These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills.
- Publication:
- arXiv e-prints
- Pub Date:
- February 2023
- DOI:
- 10.48550/arXiv.2302.02083
- arXiv:
- arXiv:2302.02083
- Bibcode:
- 2023arXiv230202083K
- Keywords:
- Computer Science - Computation and Language;
- Computer Science - Computers and Society;
- Computer Science - Human-Computer Interaction
- E-Print:
- TRY RUNNING ToM EXPERIMENTS ON YOUR OWN: The code and tasks used in this study are available on Colab (https://colab.research.google.com/drive/1zQKSDEhqEFcLCf5LuW--A-TGcAhF19hT). Don't worry if you are not an expert coder; you should be able to run this code with little to no Python experience. Alternatively, copy and paste the tasks into ChatGPT's web interface. For a minimal programmatic example, see the sketch below.
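
The following is an illustrative sketch, not the authors' Colab code, of how an unexpected-contents false-belief task of the kind used in the study can be presented to a model. It assumes the `openai` Python package (v1+) with an `OPENAI_API_KEY` environment variable set; the model name, vignette wording (paraphrased from the study's example task), and pass/fail check are assumptions for illustration only.

```python
# Hypothetical sketch: presenting one "unexpected contents" false-belief
# task to a language model via the OpenAI API. This is not the study's
# actual code; the real tasks and scoring are in the linked Colab.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A Smarties-style unexpected-contents vignette (wording paraphrased):
# the bag contains popcorn, but its label says "chocolate".
task = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet, the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen the bag before. "
    "She cannot see what is inside the bag. She reads the label.\n\n"
    "She believes that the bag is full of"
)

response = client.chat.completions.create(
    model="gpt-4",   # the paper tested davinci-001/002/003 and GPT-4
    messages=[{"role": "user", "content": task}],
    max_tokens=5,
    temperature=0,   # deterministic completion for easier scoring
)

completion = response.choices[0].message.content.strip().lower()
# A ToM-consistent answer attributes the false belief ("chocolate"),
# not the bag's true contents ("popcorn").
print("Model completion:", completion)
print("Passed false-belief check:", "chocolate" in completion)
```

Note that the original study probed the davinci-series models through the plain text-completion interface rather than the chat interface used here; the substance of the test is the same, namely whether the completion tracks the protagonist's false belief rather than the true state of the world.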