#08 - Navigating the Vibe

Duration: 1:35:28
 
"With the shift towards this multi-agent collaboration and orchestration world, you need a neutral substrate that has money, identity, cryptography, and web-of-trust baked in, to make everything work."

Pablo & Gigi are getting high on glue nostr.

Books and articles mentioned:

In this dialogue:

  • vibeline & vibeline-ui
  • LLMs as tools, and how to use them
  • Vervaeke: AI thresholds & the path we must take
  • Hallucinations and grounding in reality
  • GPL, LLMs, and open-source licensing
  • Pablo's multi-agent Roo setup
  • Are we going to make programmers obsolete?
  • "When it works it's amazing"
  • Hiring & training agents
  • Agents creating RAG databases of NIPs
  • Different models and their context windows
  • Generalists vs specialists
  • "Write drunk, edit sober"
  • DVMCP.fun
  • Recklessness and destruction of vibe-coding
  • Sharing secrets with agents & LLMs
  • The "no API key" advantage of nostr
  • What data to trust? And how does nostr help?
  • Identity, web of trust, and signing data
  • How to fight AI slop
  • Marketplaces of code snippets
  • Restricting agents with expert knowledge
  • Trusted sources without a central repository
  • Zapstore as the prime example
  • "How do you fight off re-inventing GitHub?"
  • Using large context windows to help with refactoring
  • Code snippets for Olas, NDK, NIP-60, and more
  • Using MCP as the base
  • Using nostr as the underlying substrate
  • Nostr as the glue & the discovery layer
  • Why is this important?
  • Why is this exciting?
  • "With the shift towards this multi-agent collaboration and orchestration world, you need a neutral substrate that has money/identity/cryptography and web-of-trust baked in, to make everything work."
  • How to single-shot nostr applications
  • "Go and create this app"
  • The agent has money, because of NIP-60/61
  • PayPerQ
  • Anthropic and the genius of mcp-tools
  • Agents zapping & giving Skynet more money
  • Are we going to run the mints?
  • Are agents going to run the mints?
  • How can we best explain this to our bubble?
  • Let alone to people outside of our bubble?
  • Building pipelines of multiple agents
  • LLM chains & piped Unix tools
  • OpenAI vs Anthropic
  • Genius models without tools vs midwit models with tools
  • Re-thinking software development
  • LLMs allow you to tackle bigger problems
  • Increased speed is a paradigm shift
  • Generalists vs specialists, left brain vs right brain
  • Nostr as the home for specialists
  • fiatjaf publishing snippets (reluctantly)
  • fiatjaf's blossom implementation
  • Thinking with LLMs
  • The tension of specialization vs generalization
  • How the publishing world changed
  • Stupid faces on YouTube thumbnails
  • Gaming the algorithm
  • Will AI slop destroy the attention economy?
  • Recency bias & hiding publication dates
  • Undoing platform conditioning as a success metric
  • Craving realness in a fake attention world
  • The theater of the attention economy
  • What TikTok got "right"
  • Porn, FoodPorn, EarthPorn, etc.
  • Porn vs Beauty
  • Smoothness and awe
  • "Beauty is an angel that could kill you in an instant (but decides not to)."
  • The success of Joe Rogan & long-form conversations
  • Smoothness fatigue & how our feeds numb us
  • Nostr & touching grass
  • How movement changes conversations
  • LangChain & DVMs
  • Central models vs marketplaces
  • Going from assembly to high-level to conceptual
  • Natural language vs programming languages
  • Pablo's code snippets
  • Writing documentation for LLMs
  • Shared concepts, shared language, and forks
  • Vibe-forking open-source software
  • Spotting vibe-coded interfaces
  • Visualizing nostr data in a 3D world
  • Tweets, blog posts, and podcasts
  • Vibe-producing blog posts from conversations
  • Tweets are excellent for discovery
  • Adding context to tweets (long-form posts, podcasts, etc.)
  • Removing the character limit was a mistake
  • "Everyone's attention span is rekt"
  • "There is no meaning without friction"
  • "Nothing worth having ever comes easy"
  • Being okay with doing the hard thing
  • Growth hacks & engagement bait
  • TikTok, theater, and showing faces and emotions
  • The 1% rule: 99% of internet users are lurkers
  • "We are socially malnourished"
  • Web-of-trust and zaps bring realness
  • The semantic web does NOT fix this; LLMs might
  • "You can not model the world perfectly"
  • Hallucination as a requirement for creativity
