Sanjoy Chowdhury reveals a hidden weakness in AI: systems can see objects and hear sounds in isolation, yet they cannot reason across senses the way humans do. His research at the University of Maryland, College Park, including the Meerkat model and the AVTrustBench benchmark, exposes why AI recognizes a worried face and a clap of thunder separately but fails to connect them, and what that gap means for self-driving cars and medical AI.
Sponsors
This episode is proudly sponsored by Amethix Technologies. At the intersection of ethics and engineering, Amethix creates AI systems that don’t just function—they adapt, learn, and serve. With a focus on dual-use innovation, Amethix is shaping a future where intelligent machines extend human capability, not replace it. Discover more at https://amethix.com
This episode is brought to you by Intrepid AI. From drones to satellites, Intrepid AI gives engineers and defense innovators the tools to prototype, simulate, and deploy autonomous systems with confidence. Whether it's in the sky, on the ground, or in orbit—if it's intelligent and mobile, Intrepid helps you build it. Learn more at intrepid.ai
Resources:
- The first audio-visual LLM with fine-grained understanding: Meerkat: Audio-Visual Large Language Model for Grounding in Space and Time (Accepted at ECCV 2024)
- Benchmark for evaluating robustness to adversarial attacks and compositional reasoning: AVTrustBench: Assessing and Enhancing Reliability and Robustness in Audio-Visual LLMs (Accepted at ICCV 2025)
- The first audio-visual reasoning evaluation benchmark and a test-time reasoning distillation pipeline: AURELIA: Test-time Reasoning Distillation in Audio-Visual LLMs (Accepted at ICCV 2025)
- For a detailed list of Sanjoy's work, please visit his webpage: https://schowdhury671.github.io/