Lucius Caviola is an Assistant Professor in the Social Science of AI at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, and a Research Associate in Psychology at Harvard University. His research explores how the potential arrival of conscious AI will reshape our social and moral norms. In today's interview, Lucius examines the psychological and social factors that will determine whether this transition unfolds well or ends in moral catastrophe. He discusses:
- Why experts estimate a 50% chance that conscious digital minds will emerge by 2050
- The "takeoff" scenario where digital minds could outnumber humans in welfare capacity within a decade
- How "biological chauvinism" leads people to deny consciousness even in perfect whole-brain emulations
- The dual risks of "under-attribution" (unwittingly creating mass suffering) and "over-attribution" (sacrificing human values for unfeeling code)
- Surprising findings that people refuse to "harm" AI in economic games even when they explicitly believe the AI isn't conscious
Lucius argues that rigorous social science and forecasting are essential tools for navigating these risks, moving beyond intuition to prevent us from accidentally creating vast populations of digital beings capable of suffering, or from failing to recognise consciousness where it exists.
Chapters
1. Introduction and Lucius's background (00:00:00)
2. Expert forecasts on the creation of digital minds (00:01:31)
3. Do comparisons to humans affect predictions of AI consciousness? (00:06:54)
4. Timelines and the population explosion of digital minds (00:11:13)
5. How theories of consciousness affect predictions (00:15:14)
6. Resolving the debate through revealed preferences (00:23:09)
7. Future scenarios: Net positive welfare vs. suffering (00:25:24)
8. Mind crime versus the Terminator scenario (00:29:48)
9. Public skepticism and the "Emma" thought experiment (00:31:48)
10. What convinces the public? (00:37:28)
11. Fictional accounts of social disruption due to conscious AI (00:44:39)
12. The "Reluctance to Harm" behavioral experiment (00:48:01)
13. Why people hesitate to harm AI they don't believe is conscious (00:53:48)
14. Policy actions, resources, and conclusion (01:01:28)
15. The twin risks of over-attribution and under-attribution (01:05:33)