Content provided by Sébastien Stormacq and Amazon Web Services.
In this episode of the AWS Developers Podcast, we dive into the different ways to deploy large language models (LLMs) on AWS. From self-managed deployments on EC2 to fully managed services like SageMaker and Bedrock, we break down the pros and cons of each approach. Whether you're optimizing for compliance, cost, or time-to-market, we explore the trade-offs between flexibility and simplicity. You'll hear practical insights into instance selection, infrastructure management, model sizing, and prototyping strategies. We also examine how SageMaker JumpStart and serverless services like Bedrock can streamline your machine learning workflows. If you're building or scaling AI applications in the cloud, this episode will help you navigate your options and design a deployment strategy that fits your needs.
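To give a flavor of the simplicity end of that trade-off: with Bedrock's fully managed, serverless model, invoking an LLM comes down to a single SDK call rather than provisioning EC2 instances or SageMaker endpoints. Below is a minimal sketch using boto3's Converse API; the model ID, prompt, and inference parameters are illustrative placeholders, not recommendations from the episode.

```python
# Minimal sketch: invoking an LLM through Amazon Bedrock's Converse API.
# Assumes boto3 is installed, AWS credentials are configured, and access
# to the chosen model has been granted in the Bedrock console.
import json


def build_converse_request(prompt: str,
                           model_id: str = "anthropic.claude-3-haiku-20240307-v1:0"):
    """Assemble keyword arguments for bedrock-runtime's converse() call.

    The model ID above is only an example of a Bedrock-hosted model.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
    }


def invoke(prompt: str) -> str:
    """Send the request to Bedrock and return the model's text reply."""
    import boto3  # imported here so the payload builder stays testable offline
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(prompt))
    return response["output"]["message"]["content"][0]["text"]


if __name__ == "__main__":
    # Print the request payload; calling invoke() requires live AWS access.
    print(json.dumps(build_converse_request("What is Amazon Bedrock?"), indent=2))
```

No servers, containers, or endpoints to manage: capacity, scaling, and model hosting are handled by the service, which is the "simplicity" side of the flexibility-versus-simplicity trade-off discussed in the episode.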

With Germaine Ong, Startup Solutions Architect, and Jarett Yeo, Startup Solutions Architect.
