Content provided by Colin Wright. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Colin Wright or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://staging.podcastplayer.com/legal.
Model Context Protocol


This week we talk about the Marshall Plan, standardization, and USB.

We also discuss artificial intelligence, Anthropic, and protocols.

Recommended Book: Fuzz by Mary Roach

Transcript

In the wake of WWII, the US government implemented the European Recovery Program, more commonly known as the Marshall Plan, to help Western Europe recover from a conflict that had devastated the afflicted countries’ populations, infrastructure, and economies.

It kicked off in April of 1948, and though it was replaced by a successor program, the Mutual Security Act, just three years later in 1951—which was similar to the Marshall Plan, but which had a more militant, anti-communist bent, the idea being to keep the Soviets from expanding their influence across the continent and around the world—the general goal of both programs was similar. The US was in pretty good shape, post-war; in fact, by waiting as long as it did to enter, and by becoming the arsenal of the Allied side in the conflict, its economy was flourishing, and its manufacturing base was all revved up and needed something to do with all the extra output capacity it had available, all the resources committed to producing hardware and food and so on. So by sharing these resources with allies, by basically just giving a bunch of money and assets and infrastructural necessities to these European governments, the US could get everybody on side, bulwarked against the Soviet Union’s counterinfluence, at a moment in which these governments were otherwise prone to that influence: they were suffering and weaker than usual, and thus, if the Soviets came in with the right offer, or with enough guns, they could conceivably grab a lot of support and even territory. So it was considered to be in everyone’s best interest, for those who wanted to keep the Soviet Union from expanding, at least, to get Europe back on its feet, posthaste.

So this program, and its successor, were highly influential during this period, and the Marshall Plan is generally considered to be one of the better things the US government has done for the world: while there were clear anti-Soviet incentives at play, it was also a relatively hands-off, large-scale giveaway that compared favorably with the Soviets’ more demanding and less generous version of the same.

One interesting side effect of the Marshall Plan is that because US manufacturers were sending so much stuff to these foreign ports, the machines and screws and lumber used to rebuild entire cities across Europe, the US-standard versions of those goods, many of which were foreign to Europe at the time, became the de facto standards in some of these European cities, as well.

Such standards aren’t always the best of all possible options, sometimes they stick around long past their period of ideal utility, and they don’t always stick, but the standards and protocols within an industry or technology do tend to shape that industry or technology’s trajectory for decades into the future, as has been the case with many Marshall Plan-era US standards that rapidly spread around the world as a result of these giveaways.

And standards and protocols are what I’d like to talk about today. In particular, a new protocol that seems primed to shape the path today’s AI tools are taking.

Today’s artificial intelligence, or AI, an ill-defined category of software that generally refers to applications capable of doing vaguely human-like things, like producing text and images, but also somewhat superhuman things, like working with large datasets and deriving meaning from them, is developing rapidly, becoming more potent and capable seemingly every day.

This period of AI development has been in the works for decades, and the foundational technologies behind the current batch of generative AI tools—the type that makes stuff based on libraries of training data, deriving patterns from that data and then coming up with new stuff based on the prompting of human users—were originally developed as far back as the 1970s. But the transformer, a fresh approach to deep learning architectures, was first proposed in 2017 by researchers at Google, and that led to the development of the generative pre-trained transformer, or GPT, in 2018.

The average non-tech-world person probably started to hear about this generation of AI tools a few years later, maybe when the first transformer-based voice and image tools started popping up around the internet, mostly as novelties, or even more likely in late 2022, when OpenAI released the first version of ChatGPT, a generative AI system attached to a chatbot interface, which made these sorts of tools far more accessible to the average person.

Since then, there’s been a wave of investment and interest in AI tools, and we’ve reached a point where the seemingly obvious next step is removing humans from the loop in more AI-related processes.

What that means in practice is that while today these tools require human prompting for most of what they do—you have to ask an AI for a specific image, then ask it to refine that image in order to customize it for your intended use-case, for instance—it’s possible to have AI do more things on their own, working from broader instructions to refine their creations themselves over multiple steps and longer periods of time.

So rather than chatting with an AI to come up with a marketing plan for your business, prompting it dozens or hundreds of times to refine the sales copy, the logo, the images for the website, the code for the website, and so on, you might tell an AI tool that you’re building a business that does X and ask it to spin up all the assets that you need. From there, the AI might research what a new business in that industry requires, make all the assets you need for it, go back and tweak all those assets based on feedback from other AI tools, and then deploy those assets for you on web hosting services, social media accounts, and the like.

It’s possible that at some point these tools could become so capable in this regard that humans won’t need to be involved at all, even for the initial ideation. You could ask an AI what sorts of businesses make sense at the moment, and tell it to build you a dozen minimum viable products for those businesses, and then ask it to run those businesses for you—completely hands off, except for the expressing your wishes part, almost like you’re working with a digital genie.

At the moment, components of that potential future are possible, but one of the main things standing in the way is that AI systems largely aren’t agentic enough, which in this context means they need a lot of hand-holding for things a human being would be capable of doing but that they, with rare exceptions, aren’t yet. They also often don’t have the permission or ability to interact with the other tools required to do that kind of building; that includes things like the ability to create a business account on Shopify, but also the ability to access and handle money, which would be required to set up business and bank accounts, to receive money from customers, and so on.

This is changing at a rapid pace, and more companies are making their offerings accessible to specific AI tools; Shopify has deployed its own cluster of internal AI systems, for instance, meant to manage various aspects of the businesses its customers run on its platform.

What’s missing right now, though, is a unifying scaffolding that allows these services and assets and systems to all play nice with each other.

And that’s the issue the Model Context Protocol is meant to address.

The Model Context Protocol, or MCP, is an open standard developed by the AI company Anthropic, and it’s designed to be universal. The company intends for it to be the mycelium that connects large language model-based AI to all sorts of data, tools, and other systems, a bit like how the Hypertext Transfer Protocol, or HTTP, allows data on the web to be used, shared, and processed universally, in a standardized way. Or, to dip back into the world of physical objects, it’s a bit like how standardized shipping containers make global trade a lot more efficient, because everyone’s working with the same sized boxes, cargo vessels, and so on.
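For a concrete sense of what "standardized" means here: MCP messages are JSON-RPC 2.0, so a client asking a server to run a tool sends a small, predictable JSON payload. Here's a minimal sketch in Python; `tools/call` is the method MCP defines for invoking a tool, but the specific tool name and arguments (`get_weather`, `city`) are hypothetical examples, not part of any real server.

```python
import json

# A minimal sketch of an MCP-style tool-invocation request.
# MCP messages are JSON-RPC 2.0; "tools/call" is the method the
# protocol defines for invoking a tool. The tool name and its
# arguments ("get_weather", "city") are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# On the wire this is just a JSON string, which any server that
# speaks the protocol can parse, regardless of which AI model or
# vendor sits on the other end.
wire = json.dumps(request)
decoded = json.loads(wire)
print(decoded["method"])          # tools/call
print(decoded["params"]["name"])  # get_weather
```

The value of the standard is exactly this predictability: every compliant client and server agrees on the shape of these messages, so connecting a new tool doesn't require a bespoke integration.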

The Universal Serial Bus standard, usually shorthanded as USB, is also a good comparison here. USB was introduced to replace a bunch of earlier connection standards from the early days of personal computing, which varied by computer maker and made it difficult for those makers, plus those who developed accessories, to make their products accessible and inexpensive for end users: you might buy a mouse that didn’t work with your specific computer hardware, or have a cable that fit in the port on your computer but didn’t send the right amount of data or provide the power you needed.

USB standards ensured that all devices had the same ports, and that a certain basic level of data and power transmission would be available. This standard has since fractured a bit: a period of many different types of USB led to a lot of confusion, and the deployment of the USB-C standard simplified things somewhat, while still being a bit confounding at times, as the same shaped plug may carry different amounts of data and power. Despite all that, it has still made things a lot easier for both consumers and producers of electronic goods, as there are fewer plugs and charger types to purchase, and thus less waste, confusion, and so on. We’ve moved on from the wild west era of computer hardware connectivity into something less varied and thus more predictable and interoperable.

The MCP, if it’s successful, could go on to be something like the USB standard in that it would serve as a universal connector between various AI systems and all the things you might want those AI systems to access and use.
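To illustrate that "universal connector" idea in miniature, here's a hedged sketch, in plain Python rather than any real MCP SDK, of the pattern such a protocol enables: services expose their tools under agreed-upon names, and a host can dispatch any model's tool request through one common interface, regardless of who built the service. The tool names and functions below are invented for the example.

```python
# Illustrative only: a toy registry showing the dispatch pattern a
# protocol like MCP enables. Real MCP implementations do this over
# JSON-RPC; the tools below ("create_invoice", "list_products")
# are made up for the sketch.

registry = {}

def tool(name):
    """Register a function under a protocol-visible tool name."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@tool("create_invoice")
def create_invoice(customer, amount):
    return {"customer": customer, "amount": amount, "status": "draft"}

@tool("list_products")
def list_products():
    return ["t-shirt", "mug"]

def dispatch(request):
    """What a host does with a model's tool call: look up and run."""
    fn = registry[request["name"]]
    return fn(**request.get("arguments", {}))

result = dispatch({"name": "create_invoice",
                   "arguments": {"customer": "Ada", "amount": 40}})
print(result["status"])  # draft
```

The point of the pattern is that the host never needs to know how any individual tool works internally; as long as a service registers its tools in the agreed shape, any model's request can be routed to it.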

That might mean you want one of Anthropic’s AI systems to build you a business without you having to do much of anything at all, and it may be capable of doing so, asking you questions along the way if it requires more clarity or additional permissions—to open a bank account in your name, for instance—but otherwise acting more agentically, as intended, even to the point that it could run social media accounts, work with manufacturers of the goods you sell, and handle customer service inquiries on your behalf.

What makes this standard a standout compared to other options, though—and there are many other proposed options, right now, as this space is still kind of a wild west—is that though it was developed by Anthropic, which originally made it to work with its Claude family of AI tools, it has since also been adopted by OpenAI, Google DeepMind, and several of the other largest players in the AI world.

That means, although there are other options here, all with their own pros and cons, as was the case with USB compared to other connection options back in the day, MCP is usable with many of the biggest and most spendy and powerful entities in the AI world, right now, and that gives it a sort of credibility and gravity that the other standards don’t currently enjoy.

This standard is also rapidly being adopted by companies like Block, Apollo, PayPal, Cloudflare, Asana, Plaid, and Sentry, among many, many others—including other connectors, like Zapier, which basically allows stuff to connect to other stuff, further broadening the capacity of AI tools that adopt this standard.

While this isn’t a done deal, then, there’s a good chance that MCP will be the first big connective, near-universal standard in this space, which in turn means many of the next-step moves and tools in this space will need to work with it in order to gain adoption and flourish. And that means, like the standards spread around the world by the Marshall Plan, it will go on to shape the look and feel and capabilities, including the limitations, of future AI tools and scaffoldings.

Show Notes

https://arstechnica.com/information-technology/2025/04/mcp-the-new-usb-c-for-ai-thats-bringing-fierce-rivals-together/

https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/

https://oldvcr.blogspot.com/2025/05/what-went-wrong-with-wireless-usb.html

https://arxiv.org/html/2504.16736v2

https://en.wikipedia.org/wiki/Model_Context_Protocol#cite_note-anthropic_mcp-1

https://github.com/modelcontextprotocol

https://www.anthropic.com/news/integrations

https://www.theverge.com/2024/11/25/24305774/anthropic-model-context-protocol-data-sources

https://beebom.com/model-context-protocol-mcp-explained/

https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/

https://techcrunch.com/2025/04/09/google-says-itll-embrace-anthropics-standard-for-connecting-ai-models-to-data/

https://en.wikipedia.org/wiki/Generative_artificial_intelligence

https://en.wikipedia.org/wiki/USB

https://www.archives.gov/milestone-documents/marshall-plan

https://en.wikipedia.org/wiki/Marshall_Plan

https://www.congress.gov/crs-product/R45079

https://www.ebsco.com/research-starters/history/marshall-plan

https://www.history.com/articles/marshall-plan


This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe