
In this episode, I talk with Peter Salib about his paper "AI Rights for Human Safety", arguing that giving AIs the right to contract, hold property, and sue people will reduce the risk of their trying to attack humanity and take over. He also tells me how law reviews work, in the face of my incredulity.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

Transcript: https://axrp.net/episode/2025/06/28/episode-44-peter-salib-ai-rights-human-safety.html

Topics we discuss, and timestamps:

0:00:40 Why AI rights

0:18:34 Why not reputation

0:27:10 Do AI rights lead to AI war?

0:36:42 Scope for human-AI trade

0:44:25 Concerns with comparative advantage

0:53:42 Proxy AI wars

0:57:56 Can companies profitably make AIs with rights?

1:09:43 Can we have AI rights and AI safety measures?

1:24:31 Liability for AIs with rights

1:38:29 Which AIs get rights?

1:43:36 AI rights and stochastic gradient descent

1:54:54 Individuating "AIs"

2:03:28 Social institutions for AI safety

2:08:20 Outer misalignment and trading with AIs

2:15:27 Why statutes of limitations should exist

2:18:39 Starting AI x-risk research in legal academia

2:24:18 How law reviews and AI conferences work

2:41:49 More on Peter moving to AI x-risk research

2:45:37 Reception of the paper

2:53:24 What publishing in law reviews does

3:04:48 Which parts of legal academia focus on AI

3:18:03 Following Peter's research

Links for Peter:

Personal website: https://www.peternsalib.com/

Writings at Lawfare: https://www.lawfaremedia.org/contributors/psalib

CLAIR: https://clair-ai.org/

Research we discuss:

AI Rights for Human Safety: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167

Will humans and AIs go to war? https://philpapers.org/rec/GOLWAA

Infrastructure for AI agents: https://arxiv.org/abs/2501.10114

Governing AI Agents: https://arxiv.org/abs/2501.07913

Episode art by Hamish Doodles: hamishdoodles.com
