
AI will need less HBM (high bandwidth memory) because flash memory unification is changing training and inference. This episode of the Tech Field Day podcast features Sebastien Jean from Phison, Max Mortillaro, Brian Martin, and Alastair Cooke. Training, fine-tuning, and inference with Large Language Models traditionally use GPUs with enough high bandwidth memory to hold entire models and data sets. Phison's aiDaptiv+ framework offers the ability to trade lower infrastructure cost against training speed, or to allow larger data sets (context) for inference. This approach lets users balance cost, compute, and memory needs, making larger models accessible without top-of-the-line GPUs and giving smaller companies more access to generative AI.
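The tradeoff discussed in the episode comes down to simple memory arithmetic: a model's weights must live somewhere, and if flash can hold the bulk of them, far less HBM is needed on the GPU. The sketch below is an illustrative back-of-envelope calculator, not anything from Phison's aiDaptiv+ documentation; the function names, the 70B-parameter example, and the resident-fraction figure are all assumptions chosen to show the idea.

```python
def model_memory_gb(params_billions, bytes_per_param=2):
    """GB required to hold all model weights (FP16 = 2 bytes per parameter)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

def hbm_needed_gb(params_billions, resident_fraction, bytes_per_param=2):
    """GB of HBM needed if only a fraction of weights stays GPU-resident,
    with the remainder offloaded to flash and swapped in as layers execute."""
    return model_memory_gb(params_billions, bytes_per_param) * resident_fraction

# A hypothetical 70B-parameter model in FP16 needs 140 GB just for weights,
# beyond any single mainstream GPU. Keeping only 25% resident needs 35 GB,
# which fits a 40-48 GB card -- the price paid is slower training/inference
# as layers stream from flash.
print(model_memory_gb(70))          # full-weights footprint in GB
print(hbm_needed_gb(70, 0.25))      # HBM footprint with 75% offloaded
```

The same arithmetic explains the context-size angle: inference KV caches grow with context length, so offloading them to flash frees HBM for either larger models or longer contexts at a chosen speed penalty.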

Learn more about Phison's solutions here.

Phison Representative: Sebastien Jean, CTO of Phison Electronics

Host

Alastair Cooke, Tech Field Day Event Lead

Panelists

Brian Martin, VP of AI and Datacenter Performance at Signal65

Max Mortillaro, Chief Research Officer at Osmium Group

Follow the Tech Field Day Podcast on X/Twitter or on Bluesky and use the hashtag #TFDPodcast to join the discussion. Listen to more episodes on the podcast page of the website.

Follow Tech Field Day for more information on upcoming and current event coverage on X/Twitter, on Bluesky, and on LinkedIn, or visit our website.
