
New @greenpillnet pod! Kevin chats with Joe Edelman, founder of the Meaning Alignment Institute, about his Full Stack Alignment paper. They dive into why current AI alignment methods fall short, explore richer “thick” models of value, lessons from social media, and four bold moonshots for AI and institutions that support human flourishing.

Links:
- https://meaningalignment.substack.com/p/introducing-full-stack-alignment
- https://meaninglabs.notion.site/The-Full-Stack-Alignment-Project-List-21cc5bada1d08016a496ca729476d970
- @edelwax @meaningaligned @greenpillnet @owocki

Timestamps:
00:00 – Introduction to Green Pill's new season and Joe Edelman
01:59 – Joe's background and the Meaning Alignment Institute
03:43 – Why alignment matters for AI and institutions
05:46 – Lessons from social media and the attention economy
09:06 – Critique of shallow AI alignment approaches (RLHF, values-as-text)
13:20 – Thick models of value: going deeper than abstract ideals
15:11 – Full stack alignment across models, metrics, and institutions
17:00 – Reconciling values with capitalist incentive structures
19:17 – Avoiding dystopian economies and building value-driven markets
21:32 – Four moonshots: super negotiators, public resource regulators, market intermediaries, value stewardship agents
27:32 – Intermediaries vs. value stewardship agents explained
29:09 – How builders and academics can get involved in full stack alignment projects
31:10 – Why cross-institutional collaboration is critical
32:46 – Joe's vision of the world in 10 years with full stack alignment
34:51 – Food system analogy: from "sugar" to nourishing AI
36:40 – Long-term vs. short-term incentives in markets
38:25 – Hopeful outlook: building integrity into AI and institutions
39:04 – Closing remarks and links to Joe's work
