We’ve spent the past few years discussing large language models, the huge AI models that power a lot of the generative AI tools that have dominated the headlines.
But small language models also exist – and they're rapidly growing in popularity.
What benefits do these tiny AI models offer, how much use will they see in the coming months and years, and how do they differ from other lightweight, 'low latency' models?
In this episode, Jane and Rory take a look at some of the smallest AI models on the market, asking what they're for and if they could be the future of the technology.
Read more:
- Small language models are growing in popularity — but they have a “hidden fallacy” that enterprises must come to terms with
- Small language models set for take-off next year
- Google’s new ‘Gemma’ AI models show that bigger isn’t always better
- Microsoft wants to take the hassle out of AI development with its ‘Models as a Service’ offering
- Three open source large language models you can use today
- Chinese AI firm DeepSeek has Silicon Valley flustered