LLMs generate text painfully slowly, one low-information token at a time. Researchers just figured out how to compress 4 tokens into smart vectors & cut costs by 44%, with full code & proofs! Meanwhile OpenAI drops product ads, not papers.
We explore CALM & why open science matters. 🔥📊
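To make the core idea concrete, here is a minimal, hypothetical sketch of compressing a chunk of K=4 tokens into a single continuous vector with a toy autoencoder. This is not the CALM paper's actual architecture; the layer sizes and names are illustrative assumptions only.

```python
# Illustrative sketch only: map a chunk of K=4 token ids to one continuous
# vector and reconstruct the tokens from it. Hypothetical sizes, not CALM's.
import torch
import torch.nn as nn

K, VOCAB, D_EMB, D_LATENT = 4, 32000, 256, 512

class ChunkAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_EMB)
        # Encoder: flatten K token embeddings -> one latent vector per chunk.
        self.encoder = nn.Sequential(
            nn.Linear(K * D_EMB, D_LATENT), nn.GELU(),
            nn.Linear(D_LATENT, D_LATENT),
        )
        # Decoder: latent vector -> logits for each of the K tokens.
        self.decoder = nn.Linear(D_LATENT, K * VOCAB)

    def forward(self, token_ids):             # token_ids: (batch, K)
        x = self.embed(token_ids).flatten(1)  # (batch, K * D_EMB)
        z = self.encoder(x)                   # (batch, D_LATENT), one vector per chunk
        logits = self.decoder(z).view(-1, K, VOCAB)
        return z, logits

model = ChunkAutoencoder()
ids = torch.randint(0, VOCAB, (2, K))         # two chunks of 4 tokens each
z, logits = model(ids)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), ids.reshape(-1))
print(z.shape, loss.item())                   # torch.Size([2, 512]) ...
```

The payoff of this kind of setup is that the language model only has to predict one vector per chunk instead of four separate tokens, which is where the claimed cost savings come from.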
Sponsors
This episode is brought to you by Statistical Horizons
At Statistical Horizons, you can stay ahead with expert-led livestream seminars that make data analytics and AI methods practical and accessible.
Join thousands of researchers and professionals who’ve advanced their careers with Statistical Horizons.
Get $200 off any seminar with code DATA25 at https://statisticalhorizons.com