Content provided by Stewart Alsop. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Stewart Alsop or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Player FM - Podcast App
Go offline with the Player FM app!
Go offline with the Player FM app!
Crazy Wisdom
Mark all (un)played …
Manage series 2497498
Content provided by Stewart Alsop. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Stewart Alsop or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.
…
continue reading
460 episodes
Mark all (un)played …
Manage series 2497498
Content provided by Stewart Alsop. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Stewart Alsop or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.
…
continue reading
460 episodes
All episodes
×
1 Episode #459: AI, Mate, and the End of Web Dev as We Knew It 51:12
51:12
Play Later
Play Later
Lists
Like
Liked51:12
In this episode of the Crazy Wisdom Podcast, I, Stewart Alsop, talk with AJ Beckner about how AI is reshaping the terrain of software development—from the quiet ritual of mate-fueled coding sessions to the radical shift in DevOps, tool use, and what it means to build software in a post-LLM world. AJ shares insights from his time in the Gauntlet AI program, reflecting on how platforms like Cursor, Lovable, and Supabase are changing what’s possible for both seasoned engineers and newcomers alike. We also explore the nuanced barbell dynamic of skill disruption, the philosophical limits of current AI tooling, and how rapid prototyping has morphed from a fringe craft into a mainstream practice. You can find more about AJ on Twitter at @thisistheaj . Check out this GPT we trained on the conversation! Timestamps 00:00 — Stewart and AJ kick off with mate culture , using it as a metaphor for vibe coding and discussing AJ’s caffeine stack of coffee and yerba mate . 05:00 — They explore how AI coding reshaped AJ’s perspective, from URBIT and functional languages to embracing JavaScript due to LLMs’ strength in common corpuses. 10:00 — AJ breaks down why DevOps remains difficult even as AI accelerates coding , comparing deployment friction across tools like Cursor, Replit, and Lovable . 15:00 — They outline the barbell effect of AI disruption—how seasoned engineers and non-technical users thrive while the middle gets squeezed—and highlight Supabase ’s role in streamlining backends. 20:00 — AJ dives into context windows , memory limits, and the UX framing of AI’s intelligence. Cursor becomes a metaphor for tooling that “gets it right” through interaction. 25:00 — Stewart reflects on metadata , chunking , and structuring his 450+ podcast archive for AI. AJ proposes strategic summary hierarchies and cascading summaries . 30:00 — The Gauntlet AI program emerges as a case study in training high-openness engineers for applied AI work, replacing skeptical Stanford CS grads with practical builders. 35:00 — AJ outlines his background in rapid prototyping and how AI has supercharged that capacity. 40:00 — The conversation shifts to microservices , scale , and why shipping a monolith is still the right first move. 45:00 — They close with reflections on sovereignty, URBIT , and how AI may have functionally solved the UX problems URBIT originally aimed to address. Key Insights AI reshapes the value of programming languages and stacks: AJ Beckner reflects on how large language models have flipped the script on what makes a programming language or stack valuable. In a pre-LLM world, developers working with niche, complex systems like URBIT sought to simplify infrastructure to make app development manageable. But now, the vast and chaotic ecosystem of mainstream tools like JavaScript becomes a feature rather than a bug—LLMs thrive on the density of tokens and the volume of shared patterns. What was once seen as a messy stack becomes a fertile ground for AI-assisted development. DevOps remains the bottleneck even in an AI-accelerated workflow: Despite the dramatic speedups in building features with AI tools, deployment still presents significant friction. Connecting authentication, deploying to cloud services, or managing infrastructure introduces real delays. Platforms like Replit and Lovable attempt to solve this by integrating backend services like Supabase directly into their stack, but even then, complexity lingers. DevOps hasn’t disappeared—it’s just more concentrated and increasingly a specialization distinct from traditional app development. The 'barbell effect' splits value between experts and DIY builders: AI tools are democratizing access to software creation, allowing non-technical users to build functional apps. Yet this doesn’t flatten the playing field—it polarizes it. There’s growing value in both high-skill engineers who can build and maintain scalable, AI-enhanced systems, and low-skill users leveraging AI to extend their reach. But the middle—average developers doing routine app-building—risks being squeezed out. Success now hinges on openness, adaptability, and knowing where to play on the spectrum. Context isn’t about memory—it’s about tooling: A key insight AJ offers is that context window size isn’t the real limiter for LLMs; it’s a UX issue. Tools like Cursor demonstrate that with the right infrastructure—error tracking, smart diffs, and contextual awareness—you don’t need infinite memory, just the right slices of relevance. It mirrors human memory: we don’t recall everything, just the crucial bits. Designing around that constraint is more impactful than waiting for models with bigger context windows. The future belongs to those who prototype with conviction: AJ emphasizes that the ability to spin up ideas rapidly with AI is more than a technical advantage—it’s a philosophical one. People coming from a culture of iteration and lightweight experimentation are uniquely positioned to capitalize on this moment. He contrasts this with legacy engineers and institutions still approaching AI with skepticism or nostalgia. The ones who win will be those willing to reimagine their workflows, not retrofit AI into old mental models. Sovereignty and convenience sit in quiet tension: While platforms like URBIT aim to give users sovereign control over their compute, AJ points out that ease-of-use platforms like Lovable and Replit are delivering on the practical dream of custom software for all. There’s a moral ideal to sovereignty, but in practice, many people just want software that adapts to their needs, not servers they can control. The deeper question isn’t just about owning your stack, but about where ownership and usability intersect. AI doesn’t eliminate engineering—it clarifies its edges: Far from replacing engineers wholesale, AI sharpens the distinction between development and engineering. The role of site reliability engineers (SREs), DevOps specialists, and system architects remains crucial—especially in scaling, monitoring, and maintaining complex systems. Meanwhile, traditional “web dev” becomes increasingly automated. The craft of engineering doesn’t vanish; it migrates to the edges, where problems are less about writing code and more about ensuring resilience, scalability, and coherence.…

1 Episode #458: How a Tiny Irish Startup Beat Amazon to the Air 1:03:18
1:03:18
Play Later
Play Later
Lists
Like
Liked1:03:18
In this episode of the Crazy Wisdom Podcast, Stewart Alsop III talks with Bobby Healy, CEO and co-founder of Manna Drone Delivery, about the evolving frontier where the digital meets the physical—specifically, the promise and challenges of autonomous drone logistics. They explore how regulatory landscapes are shaping the pace of drone delivery adoption globally, why Europe is ahead of the U.S., and what it takes to build scalable infrastructure for airborne logistics. The conversation also touches on the future of aerial mobility, the implications of automation for local commerce, and the philosophical impacts of deflationary technologies. For more about Bobby and Manna, visit mana.aero or follow Bobby on Twitter at @RealBobbyHealy . Check out this GPT we trained on the conversation! Timestamps 00:00 – Stewart Alsop introduces Bobby Healy and opens with the promise vs. reality of drone tech; Healy critiques early overpromising and sets the stage for today's tech maturity . 05:00 – Deep dive into FAA vs. EASA regulation, highlighting the regulatory bottleneck in the U.S. and the agility of the EU's centralized model. 10:00 – Comparison of airspace complexity between the U.S. and Europe; Healy explains why drone scaling is easier in the EU’s less crowded sky. 15:00 – Discussion of urban vs. suburban deployment , the ground risk challenge, and why automated (not fully autonomous) operations are still standard. 20:00 – Exploration of pilot oversight , the role of remote monitoring , and how the system is already profitable per flight . 25:00 – LLMs and vibe coding accelerate software iteration; Healy praises AI-powered development , calling it transformative for engineers and founders. 30:00 – Emphasis on local delivery revolution ; small businesses are beating Amazon with ultra-fast drone drop-offs. 35:00 – Touches on Latin America’s opportunity , Argentina’s regulatory climate , and localized drone startups . 40:00 – Clarifies noise and privacy concerns; drone presence is minimal and often unnoticed, especially in suburbs . 45:00 – Final thoughts on airspace utilization , ground robots , and the deflationary effect of drone logistics on global commerce. Key Insights Drone Delivery’s Real Bottleneck is Regulation, Not Technology: While drone delivery technology has matured significantly—with off-the-shelf components now industrial-grade and reliable—the real constraint is regulatory. Bobby Healy emphasizes that in the U.S., drone delivery is several years behind Europe, not due to a lack of technological readiness, but because of a slower-moving and more complex regulatory environment governed by the FAA. In contrast, Europe benefits from a nimble, centralized aviation regulator (EASA), which has enabled faster deployment by treating regulation as the foundational "product" that allows the industry to launch. The U.S. Airspace is Inherently More Complex: Healy draws attention to the density and fragmentation of U.S. airspace as a major challenge. From private planes to hobbyist aircraft and military operations, the sheer volume and variety of stakeholders complicate the regulatory path. Even though the FAA has created a solid framework (e.g., Part 108), implementing and scaling it across such a vast and fragmented system is slow. This puts the U.S. at a disadvantage, even though it holds the largest market potential for drone delivery. Drone Logistics is Already Economically Viable at a Small Scale: Unlike many emerging technologies, drone delivery is already profitable on a per-flight basis. Healy notes that Manna’s drones, operating primarily in suburban areas, achieve unit economics that allow them to scale without needing to replace human pilots yet. These remote pilots still play a role for oversight and legal compliance, but full autonomy is technically ready and likely to be adopted within a few years. This puts Manna ahead of competitors, including some well-funded giants. Suburban and Rural Areas Will Benefit Most from Drone Delivery First: The initial commercial impact of drone delivery is strongest in high-density suburban regions where traditional logistics are inefficient. These environments allow for easy takeoff and landing without the spatial constraints of dense urban cores. Healy explains that rooftops, parking lots, and small-scale launch zones can already support dozens of flights per hour. Over time, this infrastructure could rebalance urban and rural economies by enabling local producers and retailers to compete effectively with large logistics giants. Drone Logistics Will Redefine Local Commerce: One of the most compelling outcomes discussed is how drone delivery changes the playing field for small, local businesses. Healy shares an example of a local Irish bookstore now beating Amazon on delivery speed thanks to Manna’s platform. With a six-minute turnaround from purchase to backyard delivery, drone logistics could dramatically lower barriers to entry for small businesses, giving them access to modern fulfillment without needing massive infrastructure. Massive Deflation in Logistics Could Lead to Broader Economic Shifts: Healy argues that drone delivery, like AI, will drive a deflationary wave across sectors. By reducing the marginal cost of transportation to near zero, this technology could increase consumption and economic activity while also creating new jobs and opportunities in non-urban areas. This shift resembles the broad societal transformation brought on by the spread of electricity in the early 20th century—ubiquitous, enabling, and invisible. Drones Could Transform Defense Strategy Through “Mutually Assured Defense”: In a thought-provoking segment, Healy discusses how cheap, scalable drone technology might shift the geopolitical landscape. Instead of focusing solely on destruction, drones could enable countries to build robust “defense clouds” over their borders—creating a deterrent similar to nuclear weapons but more accessible and less catastrophic. He proposes that wide-scale deployment of autonomous defensive drones could prevent conflicts by making invasion logistically impossible.…

1 Episode #457: Surviving the Dark Forest: Cryptography, Tribes, and the End of Institutions 1:01:56
1:01:56
Play Later
Play Later
Lists
Like
Liked1:01:56
On this episode of Crazy Wisdom, I, Stewart Alsop, spoke with Neil Davies, creator of the Extelligencer project, about survival strategies in what he calls the “Dark Forest” of modern civilization — a world shaped by cryptographic trust, intelligence-immune system fusion, and the crumbling authority of legacy institutions. We explored how concepts like zero-knowledge proofs could defend against deepening informational warfare, the shift toward tribal "patchwork" societies, and the challenge of building a post-institutional framework for truth-seeking. Listeners can find Neil on Twitter as @sigilante and explore more about his work in the Extelligencer substack. Check out this GPT we trained on the conversation! Timestamps 00:00 Introduction of Neil Davies and the Extelligencer project, setting the stage with Dark Forest theory and operational survival concepts. 05:00 Expansion on Dark Forest as a metaphor for Internet-age exposure, with examples like scam evolution, parasites, and the vulnerability of modern systems. 10:00 Discussion of immune-intelligence fusion, how organisms like anthills and the Portuguese Man o’ War blend cognition and defense, leading into memetic immune systems online. 15:00 Introduction of cryptographic solutions, the role of signed communications, and the growing importance of cryptographic attestation against sophisticated scams. 20:00 Zero-knowledge proofs explained through real-world analogies like buying alcohol, emphasizing minimal information exposure and future-proofing identity verification. 25:00 Transition into post-institutional society, collapse of legacy trust structures, exploration of patchwork tribes, DAOs, and portable digital organizations. 30:00 Reflection on association vs. hierarchy, the persistence of oligarchies, and the shift from aristocratic governance to manipulated mass democracy. 35:00 AI risks discussed, including trapdoored LLMs, epistemic hygiene challenges, and historical examples like gold fulminate booby-traps in alchemical texts. 40:00 Controlled information flows, secular religion collapse, questioning sources of authority in a fragmented information landscape. 45:00 Origins and evolution of universities, from medieval student-driven models to Humboldt's research-focused institutions, and the absorption by the nation-state. 50:00 Financialization of universities, decay of independent scholarship, and imagining future knowledge structures outside corrupted legacy frameworks. Key Insights The "Dark Forest" is not just a cosmological metaphor, but a description of modern civilization's hidden dangers. Neil Davies explains that today's world operates like a Dark Forest where exposure — making oneself legible or visible — invites predation. This framework reshapes how individuals and groups must think about security, trust, and survival, particularly in an environment thick with scams, misinformation, and parasitic actors accelerated by the Internet. Immune function and intelligence function have fused in both biological and societal contexts. Davies draws a parallel between decentralized organisms like anthills and modern human society, suggesting that intelligence and immunity are inseparable functions in highly interconnected systems. This fusion means that detecting threats, maintaining identity, and deciding what to incorporate or reject is now an active, continuous cognitive and social process. Cryptographic tools are becoming essential for basic trust and survival. With the rise of scams that mimic legitimate authority figures and institutions, Davies highlights how cryptographic attestation — and eventually more sophisticated tools like zero-knowledge proofs — will become fundamental. Without cryptographically verifiable communication, distinguishing real demands from predatory scams may soon become impossible, especially as AI-generated deception grows more convincing. Institutions are hollowing out, but will not disappear entirely. Rather than a sudden collapse, Davies envisions a future where legacy institutions like universities, corporations, and governments persist as "zombie" entities — still exerting influence but increasingly irrelevant to new forms of social organization. Meanwhile, smaller, nimble "patchwork" tribes and digital-first associations will become more central to human coordination and identity. Modern universities have drifted far from their original purpose and structure. Tracing the history from medieval student guilds to Humboldt’s 19th-century research universities, Davies notes that today’s universities are heavily compromised by state agendas, mass democracy, and financialization. True inquiry and intellectual aloofness — once core to the ideal of the university — now require entirely new, post-institutional structures to be viable. Artificial intelligence amplifies both opportunity and epistemic risk. Davies warns that large language models (LLMs) mainly recombine existing information rather than generate truly novel insights. Moreover, they can be trapdoored or poisoned at the data level, introducing dangerous, invisible vulnerabilities. This creates a new kind of "Dark Forest" risk: users must assume that any received information may carry unseen threats or distortions. There is no longer a reliable central authority for epistemic trust. In a fragmented world where Wikipedia is compromised, traditional media is polarized, and even scientific institutions are politicized, Davies asserts that we must return to "epistemic hygiene." This means independently verifying knowledge where possible and treating all claims — even from AI — with skepticism. The burden of truth-validation increasingly falls on individuals and their trusted, cryptographically verifiable networks.…

1 Episode #456: What Happens When Your AI Thinks Like You (On Purpose) 54:41
54:41
Play Later
Play Later
Lists
Like
Liked54:41
On this episode, Stewart Alsop talks with Suman Kanuganti, founder of Personal.ai and a pioneer in AI for accessibility and human-machine collaboration. Together, they explore how Suman’s journey from launching Aira to building Personal.ai reflects a deeper mission of creating technology that enhances memory, communication, and personal empowerment. They touch on entrepreneurship, inclusive design, and the future of AI as a personal extension of human potential. For more information, visit the Personal.ai website or connect with Suman on LinkedIn . Check out this GPT we trained on the conversation! Timestamps 00:00 Introduction to Suman Kanuganti and the vision behind Personal.ai, setting the stage with AI for accessibility and personal empowerment. 05:00 Discussing the startup journey, the leap from corporate life to entrepreneurship, and the founding of Aira with a focus on inclusive technology. 10:00 Deep dive into communication empowerment, how Aira built independence for the blind community, and lessons learned from solving real-world problems. 15:00 Transitioning from Aira to Personal.ai, exploring memory extension and the future of personal communication through AI models. 20:00 Addressing privacy, ownership of personal data, and why trust is fundamental in the development of personalized AI systems. 25:00 Vision of human-machine collaboration, future scenarios where AI supports memory, creativity, and human potential without replacing human agency. 30:00 Closing reflections on entrepreneurship, building technology with deep purpose, and how inclusive design drives innovation for everyone. Key Insights Personalized AI is the Next Evolution in Human Communication: Suman Kanuganti emphasizes that AI is moving beyond generic tools and into deeply personal territory, where each individual can have an AI modeled after their own thoughts, memories, and style of communication. This evolution is aimed at making technology an extension of the self rather than a replacement. Accessibility Technologies Have Broader Applications: Through his work with Aira, Suman discovered that building tools for accessibility often results in innovations that serve a much wider audience. By designing with people with disabilities in mind, entrepreneurs can create more universally empowering technologies that enhance independence for everyone. Entrepreneurship Requires a Deep Sense of Purpose: Suman’s transition from corporate engineering to entrepreneurship was fueled by a personal desire to create meaningful change. He highlights that a strong mission—like empowering individuals through technology—helps sustain entrepreneurs through the inevitable challenges and uncertainties of building startups. Memory Is a Key Frontier for AI Development: One of the core ideas discussed is that memory preservation and recall is an essential human function that AI can augment. Personal.ai aims to assist individuals by organizing and retrieving personal memories and knowledge, offering a future where mental workload is reduced without losing personal agency. Building Trust Is Critical in Personal AI: Suman stresses that for AI to become truly personal and trusted, users must retain ownership and control over their data. Personal.ai is designed with privacy and individual autonomy at its core, reflecting a future where users dictate how their information is stored, accessed, and shared. The Best Innovations Come from Solving Specific, Real Problems: Rather than chasing trends, Suman advocates for entrepreneurs to focus on tangible problems they understand deeply. His success with Aira stemmed from addressing a clear need in the blind community, and that same principle now drives the mission behind Personal.ai—addressing the growing problem of information overload and memory fragmentation. Human-AI Symbiosis Will Define the Future: Suman paints a future where humans and AI work symbiotically, each complementing the other’s strengths. Instead of replacing human intelligence, the best AI systems will support cognitive functions like memory, creativity, and communication, ultimately expanding what individuals can achieve personally and professionally.…

1 Episode #455: The End of IPOs and the Rise of Tokenized Everything 50:15
50:15
Play Later
Play Later
Lists
Like
Liked50:15
In this episode of the Crazy Wisdom Podcast, I, Stewart Alsop III, speak with David Packham, CEO and co-founder of Chintai, about the real-world implications of tokenizing assets—from real estate and startup equity to institutional finance and beyond. David shares insights from his time inside Goldman Sachs during the 2008 crash, his journey into blockchain starting in 2016, and how Chintai is now helping reshape the financial system through compliant, blockchain-based infrastructure. We talk about the collapse of institutional trust, the weirdness of meme coins, the possible obsolescence of IPOs, and the deeper societal shifts underway. For more on David and Chintai, check out chintai.io and chintainexus.com . Check out this GPT we trained on the conversation! Timestamps 00:00 – David Packham introduces Chintai and explains the vision of tokenizing real world assets , highlighting the failure of early promises and the need for real transformation in finance. 05:00 – The conversation turns to accredited investors , regulatory controls , and how Chintai ensures compliance while preserving self-custody and smart contract-level restrictions. 10:00 – Discussion of innovative asset models like yield-bearing tokens tied to Manhattan real estate and tokenized private funds , showing how commercial use cases are overtaking DeFi gimmicks. 15:00 – Packham unpacks how liquidity is reshaping startup equity , potentially making IPOs obsolete by offering secondary markets and early investor exits through tokenization. 20:00 – The focus shifts to global crypto hubs. Singapore’s limitations , US entrepreneurial resurgence , and Hong Kong’s return to crypto leadership come up. 25:00 – Stewart and David discuss the broader decentralization of institutions , including government finance on blockchain, and the surprising effect of CBDCs in China . 30:00 – They explore the cultural dimensions of decentralization, including the network state , societal decline , and the importance of shared values for cohesion. 35:00 – Wrapping up, they touch on the philosophy of investment vs. speculation , the corruption of fiat systems , and the potential for real-world assets to stabilize crypto portfolios. Key Insights Tokenization is transforming access to financial markets: David Packham explains how tokenizing real-world assets—like real estate, private debt, and startup equity—can unlock previously illiquid sectors. Through blockchain, assets become tradable, accessible, and transparent, with innovations like fractional ownership and yield-bearing tokens making markets more efficient. Chintai, his company, enables this transformation by providing compliant infrastructure for institutions and investors to engage with these assets securely. The era of IPOs may be nearing its end: Packham suggests that traditional IPOs, with their delayed liquidity and gatekeeping, are becoming obsolete. With blockchain, companies can now tokenize equity and provide liquidity earlier in their lifecycle. This changes the game for startups and investors alike, enabling ongoing access to investment opportunities and exits without needing to go public in the conventional sense. The crypto industry is maturing beyond speculation: Reflecting on the shift from the ideologically driven early days of crypto to the speculative fervor of ICOs, NFTs, and meme coins, Packham calls for a return to fundamentals. He envisions a future where crypto supports real economic activity, especially through projects that build infrastructure for compliant, meaningful use cases. Degenerate gambling, he argues, may coexist with more serious ventures, but the latter will shape the future. Decentralization is challenging traditional power structures: The conversation touches on how blockchain can reduce favoritism and control in financial systems. Packham highlights how tools like permissioned ledgers and smart contracts can enforce fairness, resist corruption, and enhance access. He contrasts this with legacy systems, which often protect elite interests, drawing on his own experience at Goldman Sachs during the 2008 crisis. Global leadership in crypto is shifting: While Singapore positioned itself as a key crypto hub, Packham notes its lack of entrepreneurial culture compared to the U.S. and China. He observes that regulatory openness is important, but business culture and capital depth are decisive. The U.S. has reemerged as a key player, showing renewed interest and drive, while Hong Kong and China continue to move boldly in this space. The societal impact of financial technology is profound: The episode explores how blockchain might influence governance and societal organization. From the potential tokenization of government operations to more transparent fiscal policies, Packham sees emerging possibilities for better systems—though he warns against naive techno-utopianism. He reflects on the dual-edged nature of technologies like CBDCs, which can enhance transparency but also increase state control. Cultural values matter in shaping the future: The conversation ends on a philosophical note, examining the tension between decentralization, cultural identity, and immigration. Packham emphasizes that shared values and cultural cohesion are crucial for societal stability. He challenges idealistic notions like the “network state” by pointing out that human nature and cultural alignment still play a major role in the success or failure of social systems.…

1 Episode #453: Trustware vs. Adware: Toward a Humane Stack for Human Life 58:50
58:50
Play Later
Play Later
Lists
Like
Liked58:50
On this episode of the Crazy Wisdom podcast, I, Stewart Alsop, sat down once again with Aaron Lowry for our third conversation, and it might be the most expansive yet. We touched on the cultural undercurrents of transhumanism, the fragile trust structures behind AI and digital infrastructure, and the potential of 3D printing with metals and geopolymers as a material path forward. Aaron shared insights from his hands-on restoration work, our shared fascination with Amish tech discernment, and how course-correcting digital dependencies can restore sovereignty. We also explored what it means to design for long-term human flourishing in a world dominated by misaligned incentives. For those interested in following Aaron’s work, he's most active on Twitter at @Aaron_Lowry . Check out this GPT we trained on the conversation! Timestamps 00:00 – Stewart welcomes Aaron Lowry back for his third appearance. They open with reflections on cultural shifts post-COVID, the breakdown of trust in institutions, and a growing societal impulse toward individual sovereignty, free speech, and transparency. 05:00 – The conversation moves into the changing political landscape, specifically how narratives around COVID, Trump, and transhumanism have shifted. Aaron introduces the idea that historical events are often misunderstood due to our tendency to segment time, referencing Dan Carlin’s quote, “everything begins in the middle of something else.” 10:00 – They discuss how people experience politics differently now due to the Internet’s global discourse, and how Aaron avoids narrow political binaries in favor of structural and temporal nuance. They explore identity politics, the crumbling of party lines, and the erosion of traditional social anchors. 15:00 – Shifting gears to technology, Aaron shares updates on 3D printing, especially the growing maturity of metal printing and geopolymers. He highlights how these innovations are transforming fields like automotive racing and aerospace, allowing for precise, heat-resistant, custom parts. 20:00 – The focus turns to mechanical literacy and the contrast between abstract digital work and embodied craftsmanship. Stewart shares his current tension between abstract software projects (like automating podcast workflows with AI) and his curiosity about the Amish and Mennonite approach to technology. 25:00 – Aaron introduces the idea of a cultural “core of integrated techne”—technologies that have been refined over time and aligned with human flourishing. He places Amish discernment on a spectrum between Luddite rejection and transhumanist acceleration, emphasizing the value of deliberate integration. 30:00 – The discussion moves to AI again, particularly the concept of building local, private language models that can persistently learn about and serve their user without third-party oversight. Aaron outlines the need for trust, security, and stateful memory to make this vision work. 35:00 – Stewart expresses frustration with the dominance of companies like Google and Facebook, and how owning the Jarvis-like personal assistant experience is critical. Aaron recommends options like GrapheneOS on a Pixel 7 and reflects on the difficulty of securing hardware at the chip level. 40:00 – They explore software development and the problem of hidden dependencies. Aaron explains how digital systems rest on fragile, often invisible material infrastructure and how that fragility is echoed in the complexity of modern software stacks. 45:00 – The concept of “always be reducing dependencies” is expanded. Aaron suggests the real goal is to reduce untrustworthy dependencies and recognize which are worth cultivating. Trust becomes the key variable in any resilient system, digital or material. 50:00 – The final portion dives into incentives. They critique capitalism’s tendency to exploit value rather than build aligned systems. Aaron distinguishes rivalrous games from infinite games and suggests the future depends on building systems that are anti-rivalrous—where ideas compete, not people. 55:00 – They wrap up with reflections on course correction, spiritual orientation, and cultural reintegration. Stewart suggests titling the episode around infinite games, and Aaron shares where listeners can find him online. Key Insights Transhumanism vs. Techne Integration: Aaron frames the modern moment as a tension between transhumanist enthusiasm and a more grounded relationship to technology, rooted in "techne"—practical wisdom accumulated over time. Rather than rejecting all new developments, he argues for a continuous course correction that aligns emerging technologies with deep human values like truth, goodness, and beauty. The Amish and Mennonite model of communal tech discernment stands out as a countercultural but wise approach—judging tools by their long-term effects on community, rather than novelty or entertainment. 3D Printing as a Material Frontier: While most of the 3D printing world continues to refine filaments and plastic-based systems, Aaron highlights a more exciting trajectory in printed metals and geopolymers. These technologies are maturing rapidly and finding serious application in domains like Formula One, aerospace, and architectural experimentation. His conversations with others pursuing geopolymer 3D printing underscore a resurgence of interest in materially grounded innovation, not just digital abstraction. Digital Infrastructure is Physical: Aaron emphasizes a point often overlooked: that all digital systems rest on physical infrastructure—power grids, servers, cables, switches. These systems are often fragile and loaded with hidden dependencies. Recognizing the material base of digital life brings a greater sense of responsibility and stewardship, rather than treating the internet as some abstract, weightless realm. This shift in awareness invites a more embodied and ecological relationship with our tools. Local AI as a Trustworthy Companion: There’s a compelling vision of a Jarvis-like local AI assistant that is fully private, secure, and persistent. For this to function, it must be disconnected from untrustworthy third-party cloud systems and trained on a personal, context-rich dataset. Aaron sees this as a path toward deeper digital agency: if we want machines that truly serve us, they need to know us intimately—but only in systems we control. Privacy, persistent memory, and alignment to personal values become the bedrock of such a system. Dependencies Shape Power and Trust: A recurring theme is the idea that every system—digital, mechanical, social—relies on a web of dependencies. Many of these are invisible until they fail. Aaron’s mantra, “always be reducing dependencies,” isn’t about total self-sufficiency but about cultivating trustworthy dependencies. The goal isn’t zero dependence, which is impossible, but discerning which relationships are resilient, personal, and aligned with your values versus those that are extractive or opaque. Incentives Must Be Aligned with the Good: A core critique is that most digital services today—especially those driven by advertising—are fundamentally misaligned with human flourishing. They monetize attention and personal data, often steering users toward addiction or ...…

1 Episode #452: Text as Interface: Rethinking Human-Computer Symbiosis 55:06
55:06
Play Later
Play Later
Lists
Like
Liked55:06
In this episode of Crazy Wisdom, Stewart Alsop talks with Will Bickford about the future of human intelligence, the exocortex, and the role of software as an extension of our minds. Will shares his thinking on brain-computer interfaces, PHEXT (a plain text protocol for structured data), and how high-dimensional formats could help us reframe the way we collaborate and think. They explore the abstraction layers of code and consciousness, and why Will believes that better tools for thought are not just about productivity, but about expanding the boundaries of what it means to be human. You can connect with Will in Twitter at @wbic16 or check out the links mentioned by Will in Github . Check out this GPT we trained on the conversation! Timestamps 00:00 – Introduction to the concept of the exocortex and how current tools like plain text editors and version control systems serve as early forms of cognitive extension. 05:00 – Discussion on brain-computer interfaces (BCIs), emphasizing non-invasive software interfaces as powerful tools for augmenting human cognition. 10:00 – Introduction to PHEXT, a plain text format designed to embed high-dimensional structure into simple syntax, facilitating interoperability between software systems. 15:00 – Exploration of software abstraction as a means of compressing vast domains of meaning into manageable forms, enhancing understanding rather than adding complexity. 20:00 – Conversation about the enduring power of text as an interface, highlighting its composability, hackability, and alignment with human symbolic processing. 25:00 – Examination of collaborative intelligence and the idea that intelligence emerges from distributed systems involving people, software, and shared ideas. 30:00 – Discussion on the importance of designing better communication protocols, like PHEXT, to create systems that align with human thought processes and enhance cognitive capabilities. 35:00 – Reflection on the broader implications of these technologies for the future of human intelligence and the potential for expanding the boundaries of human cognition. Key Insights The exocortex is already here, just not evenly distributed. Will frames the exocortex not as a distant sci-fi future, but as something emerging right now in the form of external software systems that augment our thinking. He suggests that tools like plain text editors, command-line interfaces, and version control systems are early prototypes of this distributed cognitive architecture—ways we already extend our minds beyond the biological brain. Brain-computer interfaces don’t need to be invasive to be powerful. Rather than focusing on neural implants, Will emphasizes software interfaces as the true terrain of BCIs. The bridge between brain and computer can be as simple—and profound—as the protocols we use to interact with machines. What matters is not tapping into neurons directly, but creating systems that think with us, where interface becomes cognition. PHEXT is a way to compress meaning while remaining readable. At the heart of Will’s work is PHEXT, a plain text format that embeds high-dimensional structure into simple syntax. It’s designed to let software interoperate through shared, human-readable representations of structured data—stripping away unnecessary complexity while still allowing for rich expressiveness. It's not just a format, but a philosophy of communication between systems and people. Software abstraction is about compression, not complexity. Will pushes back against the idea that abstraction means obfuscation. Instead, he sees abstraction as a way to compress vast domains of meaning into manageable forms. Good abstractions reveal rather than conceal—they help you see more with less. In this view, the challenge is not just to build new software, but to compress new layers of insight into form. Text is still the most powerful interface we have. Despite decades of graphical interfaces, Will argues that plain text remains the highest-bandwidth cognitive tool. Text allows for versioning, diffing, grepping—it plugs directly into the brain's symbolic machinery. It's composable, hackable, and lends itself naturally to abstraction. Rather than moving away from text, the future might involve making text higher-dimensional and more semantically rich. The future of thinking is collaborative, not just computational. One recurring theme is that intelligence doesn’t emerge in isolation—it’s distributed. Will sees the exocortex as something inherently social: a space where people, software, and ideas co-think. This means building interfaces not just for solo users, but for networked groups of minds working through shared representations. Designing better protocols is designing better minds. Will’s vision is protocol-first. He sees the structure of communication—between apps, between people, between thoughts—as the foundation of intelligence itself. By designing protocols like PHEXT that align with how we actually think, we can build software that doesn’t just respond to us, but participates in our thought processes.…

1 Episode #451: Narrative as Infrastructure: Why Culture Now Runs on Memes 56:43
56:43
Play Later
Play Later
Lists
Like
Liked56:43
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Trent Gillham—also known as Drunk Plato—for a far-reaching conversation on the shifting tides of technology, memetics, and media. Trent shares insights from building Meme Deck (find it at memedeck.xyz or follow @memedeckapp on X), exploring how social capital, narrative creation, and open-source AI models are reshaping not just the tools we use, but the very structure of belief and influence in the information age. We touch on everything from the collapse of legacy media, to hyperstition and meme warfare, to the metaphysics of blockchain as the only trustable memory in an unmoored future. You can find Trent in twitter as @AidenSolaran . Check out this GPT we trained on the conversation! Timestamps 00:00 – Introduction to Trent Gillham and Meme Deck, early thoughts on AI’s rapid pace, and the shift from training models to building applications around them. 05:00 – Discussion on the collapse of the foundational model economy, investor disillusionment, GPU narratives, and how AI infrastructure became a kind of financial bubble. 10:00 – The function of markets as belief systems, blowouts when inflated narratives hit reality, and how meme-based value systems are becoming indistinguishable from traditional finance. 15:00 – The role of hyperstition in creation, comparing modern tech founders to early 20th-century inventors, and how visual proof fuels belief and innovation. 20:00 – Reflections on the intelligence community’s influence in tech history, Facebook’s early funding, and how soft influence guides the development of digital tools and platforms. 25:00 – Weaponization of social media, GameStop as a memetic uprising, the idea of memetic tools leaking from government influence into public hands. 30:00 – Meme Deck’s vision for community-led narrative creation, the shift from centralized media to decentralized, viral, culturally fragmented storytelling. 35:00 – The sophistication gap in modern media, remix culture, the idea of decks as mini subreddits or content clusters, and incentivizing content creation with tokens. 40:00 – Good vs bad meme coins, community-first approaches, how decentralized storytelling builds real value through shared ownership and long-term engagement. 45:00 – Memes as narratives vs manipulative psyops, blockchain as the only trustable historical record in a world of mutable data and shifting truths. 50:00 – Technical challenges and future plans for Meme Deck, data storage on-chain, reputation as a layer of trust, and AI’s need for immutable data sources. 55:00 – Final reflections on encoding culture, long-term value of on-chain media, and Trent’s vision for turning podcast conversations into instant, storyboarded, memetic content. Key Insights The real value in AI isn’t in building models—it’s in building tools that people can use: Trent emphasized that the current wave of AI innovation is less about creating foundational models, which have become commoditized, and more about creating interfaces and experiences that make those models useful. Training base models is increasingly seen as a sunk cost, and the real opportunity lies in designing products that bring creative and cultural capabilities directly to users. Markets operate as belief machines, and the narratives they run on are increasingly memetic: He described financial markets not just as economic systems, but as mechanisms for harvesting collective belief—what he called “hyperstition.” This dynamic explains the cycles of hype and crash, where inflated visions eventually collide with reality in what he terms "blowouts." In this framing, stocks and companies function similarly to meme coins—vehicles for collective imagination and risk. Memes are no longer just jokes—they are cultural infrastructure: As Trent sees it, memes are evolving into complex, participatory systems for narrative building. With tools like Meme Deck, entire story worlds can be generated, remixed, and spread by communities. This marks a shift from centralized, top-down media (like Hollywood) to decentralized, socially-driven storytelling where virality is coded into the content from the start. Community is the new foundation of value in digital economies: Rather than focusing on charismatic individuals or short-term hype, Trent emphasized that lasting projects need grassroots energy—what he calls “vibe strapping.” Successful meme coins and narrative ecosystems depend on real participation, sustained engagement, and a shared sense of creative ownership. Without that, projects fizzle out as quickly as they rise. The battle for influence has moved from borders to minds: Reflecting on the information age, Trent noted that power now resides in controlling narratives, and thus in shaping perception. This is why information warfare is subtle, soft, and persistent—and why traditional intelligence operations have evolved into influence campaigns that play out in digital spaces like social media and meme culture. Blockchains may become the only reliable memory in a world of digital manipulation: In an era where digital content is easily altered or erased, Trent argued that blockchain offers the only path to long-term trust. Data that ends up on-chain can be verified and preserved, giving future intelligences—or civilizations—a stable record of what really happened. He sees this as crucial not only for money, but for culture itself. Meme Deck aims to democratize narrative creation by turning community vibes into media outputs: Trent shared his vision for Meme Deck as a platform where communities can generate not just memes, but entire storylines and media formats—from anime pilots to cinematic remixes—by collaborating and contributing creative energy. It's a model where decentralized media becomes both an art form and a social movement, rooted in collective imagination rather than corporate production.…

1 Episode #450: 102% Backed and 100% Transparent: Inside the Wyoming Stable Token 53:33
53:33
Play Later
Play Later
Lists
Like
Liked53:33
On this episode of Crazy Wisdom, I’m joined by David Pope, Commissioner on the Wyoming Stable Token Commission, and Executive Director Anthony Apollo, for a wide-ranging conversation that explores the bold, nuanced effort behind Wyoming’s first-of-its-kind state-issued stable token. I’m your host Stewart Alsop, and what unfolds in this dialogue is both a technical unpacking and philosophical meditation on trust, financial sovereignty, and what it means for a government to anchor itself in transparent, programmable value. We move through Anthony’s path from Wall Street to Web3, the infrastructure and intention behind tokenizing real-world assets, and how the U.S. dollar’s future could be shaped by state-level innovation. If you're curious to follow along with their work, everything from blockchain selection criteria to commission recordings can be found at stabletoken.wyo.gov . Check out this GPT we trained on the conversation! Timestamps 00:00 – David Pope and Anthony Apollo introduce themselves, clarifying they speak personally, not for the Commission. You, Stewart, set an open tone, inviting curiosity and exploration. 05:00 – Anthony shares his path from traditional finance to Ethereum and government, driven by frustration with legacy banking inefficiencies. 10:00 – Tokenized bonds enter the conversation via the Spencer Dinwiddie project. Pope explains early challenges with defining “real-world assets.” 15:00 – Legal limits of token ownership vs. asset title are unpacked. You question whether anything “real” has been tokenized yet. 20:00 – Focus shifts to the Wyoming Stable Token: its constitutional roots and blockchain as a tool for fiat-backed stability without inflation. 25:00 – Comparison with CBDCs: Apollo explains why Wyoming’s token is transparent, non-programmatic, and privacy-focused. 30:00 – Legislative framework: the 102% backing rule, public audits, and how rulemaking differs from law. You explore flexibility and trust. 35:00 – Global positioning: how Wyoming stands apart from other states and nations in crypto policy. You highlight U.S. federalism’s role. 40:00 – Topics shift to velocity, peer-to-peer finance, and risk. You connect this to Urbit and decentralized systems. 45:00 – Apollo unpacks the stable token’s role in reinforcing dollar hegemony, even as BRICS move away from it. 50:00 – Wyoming’s transparency and governance as financial infrastructure. You reflect on meme coins and state legitimacy. 55:00 – Discussion of Bitcoin reserves, legislative outcomes, and what’s ahead. The conversation ends with vision and clarity. Key Insights Wyoming is pioneering a new model for state-level financial infrastructure. Through the creation of the Wyoming Stable Token Commission, the state is developing a fully-backed, transparent stable token that aims to function as a public utility. Unlike privately issued stablecoins, this one is mandated by law to be 102% backed by U.S. dollars and short-term treasuries, ensuring high trust and reducing systemic risk. The stable token is not just a tech innovation—it’s a philosophical statement about trust. As David Pope emphasized, the transparency and auditability of blockchain-based financial instruments allow for a shift toward self-auditing systems, where trust isn’t assumed but proven. In contrast to the opaque operations of legacy banking systems, the stable token is designed to be programmatically verifiable. Tokenized real-world assets are coming, but we’re not there yet. Anthony Apollo and David Pope clarify that most "real-world assets" currently tokenized are actually equity or debt instruments that represent ownership structures, not the assets themselves. The next leap will involve making the token itself the title, enabling true fractional ownership of physical or financial assets without intermediary entities. This initiative strengthens the U.S. dollar rather than undermining it. By creating a transparent, efficient vehicle for global dollar transactions, the Wyoming Stable Token could bolster the dollar’s role in international finance. Instead of competing with the dollar, it reinforces its utility in an increasingly digital economy—offering a compelling alternative to central bank digital currencies that raise concerns around surveillance and control. Stable tokens have the potential to become major holders of U.S. debt. Anthony Apollo points out that the aggregate of all fiat-backed stable tokens already represents a top-tier holder of U.S. treasuries. As adoption grows, state-run stable tokens could play a crucial role in sovereign debt markets, filling gaps left by foreign governments divesting from U.S. securities. Public accountability is central to Wyoming’s approach. Unlike private entities that can change terms at will, the Wyoming Commission is legally bound to go through a public rulemaking process for any adjustments. This radical transparency offers both stability and public trust, setting a precedent for how digital public infrastructure can be governed. The ultimate goal is to build a bridge between traditional finance and the Web3 future. Rather than burn the old system down, Pope and Apollo are designing the stable token as a pragmatic transition layer—something institutions can trust and privacy advocates can respect. It’s about enabling safe experimentation and gradual transformation, not triggering collapse.…

1 Episode #449: The Strange Loop: How Biology and Computation Shape Each Other 55:10
55:10
Play Later
Play Later
Lists
Like
Liked55:10
In this episode of Crazy Wisdom, Stewart Alsop speaks with German Jurado about the strange loop between computation and biology, the emergence of reasoning in AI models, and what it means to "stand on the shoulders" of evolutionary systems. They talk about CRISPR not just as a gene-editing tool, but as a memory architecture encoded in bacterial immunity; they question whether LLMs are reasoning or just mimicking it; and they explore how scientists navigate the unknown with a kind of embodied intuition. For more about German’s work, you can connect with him through email at germanjurado7@gmail.com. Check out this GPT we trained on the conversation! Timestamps 00:00 - Stewart introduces German Jurado and opens with a reflection on how biology intersects with multiple disciplines—physics, chemistry, computation. 05:00 - They explore the nature of life’s interaction with matter, touching on how biology is about the interface between organic systems and the material world. 10:00 - German explains how bioinformatics emerged to handle the complexity of modern biology, especially in genomics, and how it spans structural biology, systems biology, and more. 15:00 - Introduction of AI into the scientific process—how models are being used in drug discovery and to represent biological processes with increasing fidelity. 20:00 - Stewart and German talk about using LLMs like GPT to read and interpret dense scientific literature, changing the pace and style of research. 25:00 - The conversation turns to societal implications—how these tools might influence institutions, and the decentralization of expertise. 30:00 - Competitive dynamics between AI labs, the scaling of context windows, and speculation on where the frontier is heading. 35:00 - Stewart reflects on English as the dominant language of science and the implications for access and translation of knowledge. 40:00 - Historical thread: they discuss the Republic of Letters, how the structure of knowledge-sharing has evolved, and what AI might do to that structure. 45:00 - Wrap-up thoughts on reasoning, intuition, and the idea of scientists as co-evolving participants in both natural and artificial systems. 50:00 - Final reflections and thank-yous, German shares where to find more of his thinking, and Stewart closes the loop on the conversation. Key Insights CRISPR as a memory system – Rather than viewing CRISPR solely as a gene-editing tool, German Jurado frames it as a memory architecture—an evolved mechanism through which bacteria store fragments of viral DNA as a kind of immune memory. This perspective shifts CRISPR into a broader conceptual space, where memory is not just cognitive but deeply biological. AI models as pattern recognizers, not yet reasoners – While large language models can mimic reasoning impressively, Jurado suggests they primarily excel at statistical pattern matching. The distinction between reasoning and simulation becomes central, raising the question: are these systems truly thinking, or just very good at appearing to? The loop between computation and biology – One of the core themes is the strange feedback loop where biology inspires computational models (like neural networks), and those models in turn are used to probe and understand biological systems. It's a recursive relationship that’s accelerating scientific insight but also complicating our definitions of intelligence and understanding. Scientific discovery as embodied and intuitive – Jurado highlights that real science often begins in the gut, in a kind of embodied intuition before it becomes formalized. This challenges the myth of science as purely rational or step-by-step and instead suggests that hunches, sensory experience, and emotional resonance play a crucial role. Proteins as computational objects – Proteins aren’t just biochemical entities—they’re shaped by information. Their structure, function, and folding dynamics can be seen as computations, and tools like AlphaFold are beginning to unpack that informational complexity in ways that blur the line between physics and code. Human alignment is messier than AI alignment – While AI alignment gets a lot of attention, Jurado points out that human alignment—between scientists, institutions, and across cultures—is historically chaotic. This reframes the AI alignment debate in a broader evolutionary and historical context, questioning whether we're holding machines to stricter standards than ourselves. Standing on the shoulders of evolutionary processes – Evolution is not just a backdrop but an active epistemic force. Jurado sees scientists as participants in a much older system of experimentation and iteration—evolution itself. In this view, we’re not just designing models; we’re being shaped by them, in a co-evolution of tools and understanding.…

1 Episode #448: From Prompt Injection to Reverse Shells: Navigating AI's Dark Alleyways with Naman Mishra 47:55
47:55
Play Later
Play Later
Lists
Like
Liked47:55
In this episode of Crazy Wisdom, I, Stewart Alsop, sit down with Naman Mishra, CTO of Repello AI, to unpack the real-world security risks behind deploying large language models. We talk about layered vulnerabilities—from the model, infrastructure, and application layers—to attack vectors like prompt injection, indirect prompt injection through agents, and even how a simple email summarizer could be exploited to trigger a reverse shell. Naman shares stories like the accidental leak of a Windows activation key via an LLM and explains why red teaming isn’t just a checkbox, but a continuous mindset. If you want to learn more about his work, check out Repello's website at repello.ai . Check out this GPT we trained on the conversation! Timestamps 00:00 - Stewart Alsop introduces Naman Mishra, CTO of Repel AI. They frame the episode around AI security , contrasting prompt injection risks with traditional cybersecurity in ML apps. 05:00 - Naman explains the layered security model : model , infrastructure , and application layers. He distinguishes safety (bias, hallucination) from security (unauthorized access, data leaks). 10:00 - Focus on the application layer , especially in finance , healthcare , and legal . Naman shares how ChatGPT leaked a Windows activation key and stresses data minimization and security-by-design . 15:00 - They discuss red teaming , how Repel AI simulates attacks, and Anthropic’s HackerOne challenge . Naman shares how adversarial testing strengthens LLM guardrails . 20:00 - Conversation shifts to AI agents and autonomy . Naman explains indirect prompt injection via email or calendar, leading to real exploits like reverse shells —all triggered by summarizing an email. 25:00 - Stewart compares the Internet to a castle without doors. Naman explains the cat-and-mouse game of security—attackers need one flaw; defenders must lock every door. LLM insecurity lowers the barrier for attackers. 30:00 - They explore input/output filtering , role-based access control , and clean fine-tuning . Naman admits most guardrails can be broken and only block low-hanging fruit . 35:00 - They cover denial-of-wallet attacks—LLMs exploited to run up massive token costs. Naman critiques DeepSeek’s weak alignment and state bias , noting training data risks . 40:00 - Naman breaks down India’s AI scene: Bangalore as a hub, US-India GTM , and the debate between sovereignty vs. pragmatism . He leans toward India building foundational models . 45:00 - Closing thoughts on India’s AI future. Naman mentions Sarvam AI , Krutrim , and Paris Chopra’s Loss Funk . He urges devs to red team before shipping—"close the doors before enemies walk in." Key Insights AI security requires a layered approach. Naman emphasizes that GenAI applications have vulnerabilities across three primary layers: the model layer, infrastructure layer, and application layer. It's not enough to patch up just one—true security-by-design means thinking holistically about how these layers interact and where they can be exploited. Prompt injection is more dangerous than it sounds. Direct prompt injection is already risky, but indirect prompt injection—where an attacker hides malicious instructions in content that the model will process later, like an email or webpage—poses an even more insidious threat. Naman compares it to smuggling weapons past the castle gates by hiding them in the food. Red teaming should be continuous, not a one-off. One of the critical mistakes teams make is treating red teaming like a compliance checkbox. Naman argues that red teaming should be embedded into the development lifecycle, constantly testing edge cases and probing for failure modes, especially as models evolve or interact with new data sources. LLMs can unintentionally leak sensitive data. In one real-world case, a language model fine-tuned on internal documentation ended up leaking a Windows activation key when asked a completely unrelated question. This illustrates how even seemingly benign outputs can compromise system integrity when training data isn’t properly scoped or sanitized. Denial-of-wallet is an emerging threat vector. Unlike traditional denial-of-service attacks, LLMs are vulnerable to economic attacks where a bad actor can force the system to perform expensive computations, draining API credits or infrastructure budgets. This kind of vulnerability is particularly dangerous in scalable GenAI deployments with limited cost monitoring. Agents amplify security risks. While autonomous agents offer exciting capabilities, they also open the door to complex, compounded vulnerabilities. When agents start reading web content or calling tools on their own, indirect prompt injection can escalate into real-world consequences—like issuing financial transactions or triggering scripts—without human review. The Indian AI ecosystem needs to balance speed with sovereignty. Naman reflects on the Indian and global context, warning against simply importing models and infrastructure from abroad without understanding the security implications. There’s a need for sovereign control over critical layers of AI systems—not just for innovation’s sake, but for national resilience in an increasingly AI-mediated world.…

1 Episode #447: From Frustration to Creation: Building with Chaos Instead of Blueprints 59:21
59:21
Play Later
Play Later
Lists
Like
Liked59:21
In this episode of the Crazy Wisdom Podcast, I, Stewart Alsop, speak with Perry Knoppert, founder of The Octopus Movement, joining us from the Netherlands. We explore everything from octopus facts (like how they once had bones and decided to ditch them—wild, right?) to neurodivergence, non-linear thinking, the alien-like nature of both octopuses and AI, and how the future of education might finally reflect the chaos and creativity of human intelligence. Perry drops insight bombs on ADHD, dyslexia, chaos as a superpower, and even shares a wild idea about how frustration—not just ideas—can shape the world. You can connect with him and explore more at theoctopusmovement.org , and check out his playful venting app at tellTom.ink . Check out this GPT we trained on the conversation! Timestamps 00:00 Introduction to the Crazy Wisdom Podcast 00:31 Fascinating Facts About Octopi 02:03 The Octopus Movement: Origins and Symbolism 05:55 Exploring Neurodivergence and AI 20:15 The Future of Education with AI 29:48 Challenges in the Dutch Education System 30:59 Educational Pathways in the US 31:50 Exploring Neurodiversity 32:34 The Origin of Neurodiversity 34:34 Nomadic DNA and ADHD 36:02 Personal Nomadic Experiences 37:20 Cultural Insights from China 41:59 Trust in Different Cultures 44:20 The Foreigner Experience 52:21 Artificial and Natural Intelligence 55:11 The Octopus Movement and Tell Tom App Key Insights Neurodivergence isn't a superpower—it's a different lens on reality. Perry challenges the popular narrative that conditions like ADHD or dyslexia are inherently "superpowers." Instead, he sees them as part of a broader, complex human experience—often painful, often misunderstood, but rich with potential once liberated from linear systems that define what's "normal." AI is the beautiful product of linear thought—and it's freeing us from it. Perry reframes artificial intelligence not as a threat, but as the ultimate tool born from centuries of structured, logical thinking. With AI handling the systems and organization, humans are finally free to return to creativity, chaos, and nonlinear, intuitive modes of intelligence that machines can't touch. Octopuses are the ultimate symbol of curious misfits. The octopus—alien, adaptable, emotion-rich—becomes a metaphor for people who don't fit the mold. With three hearts, nine brains, and a decentralized nervous system, octopuses reflect the kind of intelligence and distributed awareness Perry celebrates in neurodivergent thinkers. Frustration is more generative than ideas. In one of the episode’s most unexpected insights, Perry argues that frustration is a more powerful starting point for change than intellectual ideation. Ideas are often inert without action, while frustration is raw, emotional, and deeply human—fuel for meaningful transformation. Education needs to shift from repetition to creation. The current model of education—memorization, repetition, testing—serves linearity, not creativity. With AI taking over traditional knowledge tasks, Perry envisions classrooms where kids learn how their minds work, engage with the world directly, and practice making meaning instead of memorizing facts. Being a foreigner is a portal to freedom. Living in unfamiliar cultures (like Perry did in China or Stewart in Argentina) reveals the absurdities of our own norms and invites new ways of being. Foreignness becomes a superpower in itself—a space of lowered expectations, fewer assumptions, and greater possibility. Labels like “neurodivergent” are both helpful and illusory. While diagnostic labels can offer relief and clarity, Perry warns against attaching too tightly to them. These constructs are inventions of linear thought, useful for navigating systems but ultimately limiting when it comes to embracing the full, messy, nonlinear reality of being human.…

1 Episode #446: Decentralized Truth and the Digital Republic 51:11
51:11
Play Later
Play Later
Lists
Like
Liked51:11
On this episode of the Crazy Wisdom Podcast, I, Stewart Alsop, sit down with Federico Ast, founder of Kleros, to explore how decentralized justice systems can resolve both crypto-native and real-world disputes. We talk about the pilot with the Supreme Court in Mendoza, Argentina, where Kleros is helping small claims courts resolve cases faster and more transparently, and how this ties into a broader vision for digital governance using tools like proof of humanity and soulbound tokens. We also get into the philosophical and institutional implications of building a digital republic, and how blockchain can offer new models of legitimacy and truth-making. Show notes and more about Federico’s work can be found via his Twitter: @federicoast (https://twitter.com/federicoast) and by joining the Kleros Telegram community. Check out this GPT we trained on the conversation! 00:00 Introduction and Guest Welcome 00:38 Claros Pilot Program in Mendoza 02:00 Claros and the Legal System 05:13 Personal Journey into Crypto 07:16 Challenges and Innovations in Kleros 18:02 Proof of Humanity and Soulbound Tokens 26:54 Incentives and Proof of Humanity 27:01 Interesting DAO Court Cases 27:21 Prediction Markets and Disputes 31:36 Customer Service and Dispute Resolution 38:21 Governance and Online Communities 40:02 Future of Civilization and Technology 47:16 Bounties and Legal Systems 49:06 Conclusion and Contact Information Key Insights Decentralized Justice Can Bridge the Gap Between Traditional Legal Systems and Web3: Federico Ast explains how Kleros functions as a decentralized dispute resolution system, offering a faster, more transparent, and more accessible alternative to conventional courts. In places like Mendoza, Argentina, Kleros has been piloted in collaboration with the Supreme Court to help resolve small claims that would otherwise take years, demonstrating how blockchain tools can support real-world judicial systems rather than replace them. Crypto Tools Are Most Powerful When Rooted in Real-World Problems: Ast emphasizes that his motivation for building in the blockchain space came not from hype but from firsthand experience with institutional inefficiencies in Argentina—such as corruption, inaccessible courts, and predatory financial systems. For him, crypto is a means to address these structural issues, not an end in itself. This grounded approach contrasts with many in the space who begin with the technology and try to retrofit a use case. Proof of Humanity and Soulbound Tokens Expand the Scope of Legitimate Governance: To address concerns over who gets to participate in decentralized juries, Kleros integrates identity verification through Proof of Humanity and uses non-transferable Soulbound Tokens to grant eligibility. These innovations allow communities—whether geographic, organizational, or digital—to define their own membership criteria, making decentralized courts feel more legitimate and relevant to participants. Decentralized Courts Can Handle Complex, Subjective Disputes: While early versions of Kleros were built for binary disputes (yes/no, Alice vs. Bob), real-world conflicts are often more nuanced. Over time, the platform evolved to support more flexible decision-making, including proportional fault, ranked outcomes, and variable payouts. This adaptability allows Kleros to handle a broader spectrum of disputes, including ambiguous or interpretive cases like those found in prediction markets. Incentive Systems Create New Forms of Justice Participation: Kleros applies game theory to create juror incentives that reward honest and aligned decisions. In systems like Proof of Humanity, it even gamifies fraud detection by offering financial bounties to those who uncover duplicate or fake identities. These economic incentives encourage voluntary participation in public-good functions such as identity verification and dispute resolution. Kleros Offers a Middle Ground Between Corporate Automation and Legal Bureaucracy: Many companies use rigid, automated systems to deny customer claims, leaving individuals with no real recourse except to complain on social media. Kleros offers an intermediate option: a transparent, peer-based adjudication process that can resolve disputes quickly. In pilot programs with fintech companies like Lemon, over 90% of users who lost their case still accepted the result and remained customers, showing how fairness in process can build trust even when outcomes disappoint. Digital Communities Are Becoming the New Foundations of Governance: Ast points out that many people now feel more connected to online communities than to their local or national institutions. Blockchain governance—enabled by tools like Kleros, Proof of Humanity, and decentralized IDs—allows these communities to build their own civil infrastructure. This marks a shift toward what he calls a “digital republic,” where shared values and participation, rather than geography, form the basis of collective decision-making and legitimacy.…

1 Episode #445: How Decentralized Tech Could Challenge Nation-States 51:17
51:17
Play Later
Play Later
Lists
Like
Liked51:17
In this episode of Crazy Wisdom, host Stewart Alsop talks with Rosario Parlanti, a longtime crypto investor and real estate attorney, about the shifting landscape of decentralization, AI, and finance. They explore the power struggles between centralized and decentralized systems, the role of AI agents in finance and infrastructure, and the legal gray areas emerging around autonomous technology. Rosario shares insights on trusted execution environments, token incentives, and how projects like Phala Network are building decentralized cloud computing. They also discuss the changing narrative around Bitcoin, the potential for AI-driven financial autonomy, and the future of censorship-resistant platforms. Follow Rosario on X @DeepinWhale and check out Phala Network to learn more. Check out this GPT we trained on the conversation! Timestamps 00:00 Introduction to the Crazy Wisdom Podcast 00:25 Understanding Decentralized Cloud Infrastructure 04:40 Centralization vs. Decentralization: A Philosophical Debate 06:56 Political Implications of Centralization 17:19 Technical Aspects of Phala Network 24:33 Crypto and AI: The Future Intersection 25:11 The Convergence of Crypto and AI 25:59 Challenges with Centralized Cloud Services 27:36 Decentralized Cloud Solutions for AI 30:32 Legal and Ethical Implications of AI Agents 32:59 The Future of Decentralized Technologies 41:56 Crypto's Role in Global Financial Freedom 49:27 Closing Thoughts and Future Prospects Key Insights Decentralization is not absolute, but a spectrum. Rosario Parlanti explains that decentralization doesn’t mean eliminating central hubs entirely, but rather reducing choke points where power is overly concentrated. Whether in finance, cloud computing, or governance, every system faces forces pushing toward centralization for efficiency and control, while counterforces work to redistribute power and increase resilience. Trusted execution environments (TEE) are crucial for decentralized cloud computing. Rosario highlights how Phala Network uses TEEs, a hardware-based security measure that isolates sensitive data from external access. This ensures that decentralized cloud services can operate securely, preventing unauthorized access while allowing independent providers to host data and run applications outside the control of major corporations like Amazon and Google. AI agents will need decentralized infrastructure to function autonomously. The conversation touches on the growing power of AI-driven autonomous agents, which can execute financial trades, conduct research, and even generate content. However, running such agents on centralized cloud providers like AWS could create regulatory and operational risks. Decentralized cloud networks like Phala offer a way for these agents to operate freely, without interference from governments or corporations. Regulatory arbitrage will shape the future of AI and crypto. Rosario describes how businesses and individuals are already leveraging jurisdiction shopping—structuring AI entities or financial operations in countries with more favorable regulations. He speculates that AI agents could be housed within offshore LLCs or irrevocable trusts, creating legal distance between their creators and their actions, raising new ethical and legal challenges. Bitcoin’s narrative has shifted from currency to investment asset. Originally envisioned as a peer-to-peer electronic cash system, Bitcoin has increasingly been treated as digital gold, largely due to the influence of institutional investors and regulatory frameworks like Bitcoin ETFs. Rosario argues that this shift in perception has led to Bitcoin being co-opted by the very financial institutions it was meant to disrupt. The rise of AI-driven financial autonomy could bypass traditional banking and regulation. The combination of AI, smart contracts, and decentralized finance (DeFi) could enable AI agents to conduct financial transactions without human oversight. This could range from algorithmic trading to managing business operations, potentially reducing reliance on traditional banking systems and challenging the ability of governments to enforce financial regulations. The accelerating clash between technology and governance will redefine global power structures. As AI and decentralized systems gain momentum, traditional nation-state mechanisms for controlling information, currency, and infrastructure will face unprecedented challenges. Rosario and Stewart discuss how this shift mirrors previous disruptions—such as social media’s impact on information control—and speculate on whether governments will adapt, resist, or attempt to co-opt these emerging technologies.…

1 Episode #444: The Hidden Frameworks of the Internet: Knowledge Graphs, Ontologies, and Who Controls Truth 1:00:23
1:00:23
Play Later
Play Later
Lists
Like
Liked1:00:23
On this episode of the Crazy Wisdom Podcast, host Stewart Alsop welcomes Jessica Talisman, a senior information architect deeply immersed in the worlds of taxonomy, ontology, and knowledge management. The conversation spans the evolution of libraries, the shifting nature of public and private access to knowledge, and the role of institutions like the Internet Archive in preserving digital history. They also explore the fragility of information in the digital age, the ongoing battle over access to knowledge, and how AI is shaping—and being shaped by—structured data and knowledge graphs. To connect with Jessica Talisman, you can reach her via LinkedIn . Check out this GPT we trained on the conversation! Timestamps 00:05 – Libraries, Democracy, Public vs. Private Knowledge Jessica explains how libraries have historically shifted between public and private control, shaping access to knowledge and democracy. 00:10 – Internet Archive, Cyberattacks, Digital Preservation Stewart describes visiting the Internet Archive post-cyberattack, sparking a discussion on threats to digital preservation and free information. 00:15 – AI, Structured Data, Ontologies, NIH, PubMed Jessica breaks down how AI trains on structured data from sources like NIH and PubMed but often lacks alignment with authoritative knowledge. 00:20 – Linked Data, Knowledge Graphs, Semantic Web, Tim Berners-Lee They explore how linked data enables machines to understand connections between knowledge, referencing the vision behind the semantic web. 00:25 – Entity Management, Cataloging, Provenance, Authority Jessica explains how libraries are transitioning from cataloging books to managing entities, ensuring provenance and verifiable knowledge. 00:30 – Digital Dark Ages, Knowledge Loss, Corporate Control Stewart compares today’s deletion of digital content to historical knowledge loss, warning about the fragility of digital memory. 00:35 – War on Truth, Book Bans, Algorithmic Bias, Censorship They discuss how knowledge suppression—from book bans to algorithmic censorship—threatens free access to information. 00:40 – AI, Search Engines, Metadata, Schema.org, RDF Jessica highlights how AI and search engines depend on structured metadata but often fail to prioritize authoritative sources. 00:45 – Power Over Knowledge, Open vs. Closed Systems, AI Ethics They debate the battle between corporations, governments, and open-source efforts to control how knowledge is structured and accessed. 00:50 – Librarians, AI Misinformation, Knowledge Organization Jessica emphasizes that librarians and structured knowledge systems are essential in combating misinformation in AI. 00:55 – Future of Digital Memory, AI, Ethics, Information Access They reflect on whether AI and linked data will expand knowledge access or accelerate digital decay and misinformation. Key Insights The Evolution of Libraries Reflects Power Struggles Over Knowledge: Libraries have historically oscillated between being public and private institutions, reflecting broader societal shifts in who controls access to knowledge. Jessica Talisman highlights how figures like Andrew Carnegie helped establish the modern public library system, reinforcing libraries as democratic spaces where information is accessible to all. However, she also notes that as knowledge becomes digitized, new battles emerge over who owns and controls digital information. The Internet Archive Faces Systematic Attacks on Knowledge: Stewart Alsop shares his firsthand experience visiting the Internet Archive just after it had suffered a major cyberattack. This incident is part of a larger trend in which libraries and knowledge repositories worldwide, including those in Canada, have been targeted. The conversation raises concerns that these attacks are not random but part of a broader, well-funded effort to undermine access to information. AI and Knowledge Graphs Are Deeply Intertwined: AI systems, particularly large language models (LLMs), rely on structured data sources such as knowledge graphs, ontologies, and linked data. Talisman explains how institutions like the NIH and PubMed provide openly available, structured knowledge that AI systems train on. Yet, she points out a critical gap—AI often lacks alignment with real-world, authoritative sources, which leads to inaccuracies in machine-generated knowledge. Libraries Are Moving From Cataloging to Entity Management: Traditional library systems were built around cataloging books and documents, but modern libraries are transitioning toward entity management, which organizes knowledge in a way that allows for more dynamic connections. Linked data and knowledge graphs enable this shift, making it easier to navigate vast repositories of information while maintaining provenance and authority. The War on Truth and Information Is Accelerating: The episode touches on the increasing threats to truth and reliable information, from book bans to algorithmic suppression of knowledge. Talisman underscores the crucial role librarians play in preserving access to primary sources and maintaining records of historical truth. As AI becomes more prominent in knowledge dissemination, the need for robust, verifiable sources becomes even more urgent. Linked Data is the Foundation of Digital Knowledge: The conversation explores how linked data protocols, such as those championed by Tim Berners-Lee, allow machines and AI to interpret and connect information across the web. Talisman explains that institutions like NIH publish their taxonomies in RDF format, making them accessible as structured, authoritative sources. However, many organizations fail to leverage this interconnected data, leading to inefficiencies in knowledge management. Preserving Digital Memory is a Civilization-Defining Challenge: In the digital age, the loss of information is more severe than ever. Alsop compares the current state of digital impermanence to the Dark Ages, where crucial knowledge risks disappearing due to corporate decisions, cyberattacks, and lack of preservation infrastructure. Talisman agrees, emphasizing that digital archives like the Internet Archive, WorldCat, and Wikimedia are foundational to maintaining a collective human memory.…
Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.