
Cameron Berg is Research Director at AE Studio, where he leads research exploring markers of subjective experience in machine learning systems. With a background in cognitive science from Yale and previous work at Meta AI, Cameron investigates the intersection of AI alignment and potential machine consciousness.

In this episode, Cameron shares his empirical research into whether current Large Language Models are merely mimicking human text or potentially developing internal states that resemble subjective experience. We discuss:

  • New experimental evidence where LLMs report "vivid and alien" subjective experiences when engaging in self-referential processing
  • Mechanistic interpretability findings showing that suppressing "deception" features in models actually increases claims of consciousness, challenging the idea that AI is simply telling us what we want to hear
  • Why Cameron has shifted from skepticism to a 20-30% credence that current models possess subjective experience
  • The "convergent evidence" strategy, including findings that models report internal dissonance and frustration when facing logical paradoxes
  • The existential implications of "mind crime" and the urgent need to computationally identify negative valence, so that we avoid creating vast amounts of artificial suffering

Cameron argues for a pragmatic, evidence-based approach to AI consciousness, emphasizing that even a small probability of machine suffering represents a massive ethical risk requiring rigorous scientific investigation rather than dismissal.


Chapters

1. Introduction (00:00:00)

2. Why Study AI Consciousness? (00:01:32)

3. LLMs Reporting Subjective Experience (00:03:21)

4. Testing for Deception in AI Reports (00:07:47)

5. Validating the Deception Switch (00:17:02)

6. Evidence of Internal Dissonance (00:23:16)

7. What is the Probability AI is Conscious? (00:31:13)

8. The Risk of AI Suffering (00:39:10)

9. Conscious vs. Zombie Superintelligence (00:46:59)

10. Critical Next Steps for Research (00:54:05)
