
What happens when a field races forward faster than society can understand it, let alone shape it? And how do we balance the promise of superintelligence with the responsibility to ensure it reflects the values of the people it will eventually serve? In this episode of AI at Work, I sit down with Dr Craig Kaplan, a pioneer who has been building intelligent systems since the 1980s and one of the few voices urging a deliberate and safer path toward AGI. Craig brings decades of perspective to a debate often dominated by short-term thinking, sharing why speed without design can become a trap and why the next breakthroughs must be grounded in intention rather than chance.

Throughout our conversation, Craig explains why current alignment methods often rely on narrow viewpoints, which creates both ethical and technical blind spots. He shares his belief that the values guiding future intelligence should come from millions of people across cultures rather than a handful of researchers writing a constitution behind closed doors. Drawing on his work at Predict Wall Street, he illustrates how collective intelligence can outperform experts, why diverse viewpoints matter, and how these lessons shape the architecture he believes is needed for safe AGI and the superintelligent systems that follow. His clarity on the difference between tools and entities, and how quickly AI is shifting into the latter category, offers a grounding moment for anyone trying to navigate what comes next.

This episode moves beyond fear and hype. Craig talks openly about risk, but he also brings optimism about the potential for systems that are safer, faster to build, less costly, and more reflective of humanity. For leaders wondering how to prepare their organisations, he shares what signals to watch, why transparency and design matter, and how a more democratic approach to intelligence could shift the odds of a better outcome. If you want a clear, thoughtful look at the road ahead for AGI, superintelligence, and the role humans still play in shaping both, you will find a lot to chew on here.

Listeners wanting to learn more can explore superintelligence.com, where Craig and the iQ Company team share research, videos, papers, and ways to get involved. What part of this conversation sparks your own questions about the future we are building together?

Sponsored by NordLayer:

Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.
