AI lab podcast, "decrypting" expert analysis to understand Artificial Intelligence from a policy-making point of view.
Stable Diffusion Podcasts
Designing Futures: Exploring AI, Data, Architecture and beyond.
Nathalie Rozencwajg & Melanie Rozencwajg
Join us as we embark on a captivating journey through the ever-evolving intersection of AI, data, and architecture. In this podcast, we dive deep into the vast potential of AI for architecture and design, examining the remarkable possibilities it offers, while also acknowledging the challenges it presents. Our mission is to expand the conversation, engaging with leaders, thinkers, and doers in the ecosystem. We invite them to share their profound insights, groundbreaking ideas, and innovativ ...
Machine learning audio course, teaching the fundamentals of machine learning and artificial intelligence. It covers intuition, models (shallow and deep), math, languages, frameworks, etc. Where your other ML resources provide the trees, I provide the forest. Consider MLG your syllabus, with highly-curated resources for each episode's details at ocdevel.com. Audio is a great supplement during exercise, commute, chores, etc.
Become a Paid Subscriber: https://podcasters.spotify.com/pod/show/rebeltech/subscribe
Welcome to my Rebel Rant Series podcast! Join me as I dive into topics that matter, and share my unfiltered thoughts and opinions. This podcast is a different side of me, separate from the YouTube videos I upload. It's raw, it's real, and it's here to inspire and motivate. In this podcast, I'll be sharing never-before-seen footage and insights into my life, as well as discussing topics ranging from busi ...
Knowledge Distillation is the podcast that brings together a mixture of experts from across the Artificial Intelligence community. We talk to the world’s leading researchers about their experiences developing cutting-edge models as well as the technologists taking AI tools out of the lab and turning them into commercial products and services. Knowledge Distillation also takes a critical look at the impact of artificial intelligence on society – opting for expert analysis instead of hysterica ...
AI lab TL;DR | Joan Barata - Transparency Obligations for All AI Systems
17:05
🔍 In this TL;DR episode, Joan explains how Article 50 of the EU AI Act sets out high-level transparency obligations for AI developers and deployers—requiring users to be informed when they interact with AI or access AI-generated content—while noting that excessive labeling can itself be misleading. She highlights why the forthcoming Code of Practic…
AI lab TL;DR | Aline Larroyed - The Fallacy Of The File
7:45
🔍 In this episode, Caroline and Aline unravel why the popular idea of “AI memorisation” leads policymakers down the wrong path—and how this metaphor obscures what actually happens inside large language models. Moving from the technical realities of parameter optimisation to the policy dangers of doctrinal drift, they explore how misleading language…
Matthias Hollwich — founder of HWKN — joins Nathalie Rozencwajg and Melanie Rozencwajg on Designing Futures to explore what it means to run a fully AI-integrated architecture studio. At HWKN, AI isn’t an add-on — it informs every stage of design, from concept to construction. We unpack how this shift enables architects to reclaim their role as visio…
Understanding the Why: Behavioral Science Meets Data & AI
1:00:35
Jez Groom — founder & CEO of Cowry Consulting and a pioneer in applied behavioral science — joins us on Designing Futures. With 14+ years of helping global organizations from Amazon to HSBC understand human behavior, Jez explains why most decisions are non-rational, why intentions rarely translate into action, and how simple tweaks can shift behavi…
Redesigning Time – Decision, Speed, and the New Logic of Practice
49:08
Matt Krissel — architect, educator, and principal at Perkins&Will — joins us to rethink our relationship with time in an AI-augmented practice. With decades of experience leading transformative projects and as co-founder of the Built Environment Futures Council, Matt brings a unique perspective on how speed, decision-making, and time are being reba…
MLA 027 AI Video End-to-End Workflow
1:11:37
How to maintain character consistency, style consistency, etc. in an AI video. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content. Indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control, while professional studios …
MLA 026 AI Video Generation: Veo 3 vs Sora, Kling, Runway, Stable Video Diffusion
40:39
Google Veo leads the generative video market with superior 4K photorealism and integrated audio, an advantage derived from its YouTube training data. OpenAI Sora is the top tool for narrative storytelling, while Kuaishou Kling excels at animating static images with realistic, high-speed motion. Links: Notes and resources at ocdevel.com/mlg/mla-26 Tr…
MLA 025 AI Image Generation: Midjourney vs Stable Diffusion, GPT-4o, Imagen & Firefly
58:51
The AI image market has split: Midjourney creates the highest quality artistic images but fails at text and precision. For business use, OpenAI's GPT-4o offers the best conversational control, while Adobe Firefly provides the strongest commercial safety from its exclusively licensed training data. Links: Notes and resources at ocdevel.com/mlg/mla-25…
Luc Izri — architect, theorist, and co-founder of Inflexion Dynamics — joins us to explore the evolving relationship between computation and design thinking. With a deep background in algorithmic and topological design education, Luc brings a fresh perspective on how AI is transforming not only architectural practice, but the way we teach and learn…
Autoencoders are neural networks that compress data into a smaller "code," enabling dimensionality reduction, data cleaning, and lossy compression by reconstructing original inputs from this code. Advanced autoencoder types, such as denoising, sparse, and variational autoencoders, extend these concepts for applications in generative modeling, in…
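The compress-then-reconstruct idea described above can be sketched in a few lines. This is a minimal linear autoencoder with made-up dimensions (8-D inputs, 2-D code) and hand-derived gradients, not the episode's actual examples:

```python
import numpy as np

# Minimal linear autoencoder: encode 8-D inputs into a 2-D "code",
# then decode back, training both maps to minimise reconstruction error.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # toy dataset
W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder weights

for step in range(500):
    code = X @ W_enc                 # compression (the bottleneck)
    X_hat = code @ W_dec             # reconstruction from the code
    err = X_hat - X                  # reconstruction error
    # Gradients of the mean squared error w.r.t. both weight matrices
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= 0.1 * grad_dec
    W_enc -= 0.1 * grad_enc

loss = float(np.mean((X - (X @ W_enc) @ W_dec) ** 2))
print(f"reconstruction MSE after training: {loss:.3f}")
```

Because the code is only 2-D, the network cannot memorise the 8-D inputs; it is forced to keep the directions that explain the most variance, which is the dimensionality-reduction effect the blurb mentions.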
AI lab TL;DR | Anna Mills and Nate Angell - The Mirage of Machine Intelligence
20:50
🔍 In this TL;DR episode, Anna and Nate unpack why calling AI outputs “hallucinations” misses the mark—and introduce “AI Mirage” as a sharper, more accurate metaphor. From scoring alternative terms to sparking social media debates, they show how language shapes our assumptions, trust, and agency in the age of generative AI. The takeaway: choosing th…
AI lab TL;DR | Emmie Hine - Can Europe Lead the Open-Source AI Race?
11:03
🔍 In this TL;DR episode, Emmie Hine (Yale Digital Ethics Center) makes the case for Europe’s leadership in open-source AI—thanks to strong infrastructure, multilingual data, and regulatory clarity. With six key policy recommendations, the message is clear: trust and transparency can make EU models globally competitive. 📌 TL;DR Highlights ⏲️[00:00] …
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use ex…
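The retrieval step described above can be sketched concretely. This is a toy version only: the "embedding" is a bag-of-words vector rather than a learned model, and the three documents are invented, but the ranking by cosine similarity is the same mechanism a RAG pipeline uses against a vector database:

```python
import numpy as np

# Toy RAG retrieval: embed documents and the query, rank by cosine similarity.
docs = [
    "the eiffel tower is in paris",
    "llamas are domesticated animals from south america",
    "paris is the capital of france",
]

vocab = sorted({w for d in docs for w in d.split()})

def embed(text: str) -> np.ndarray:
    # Bag-of-words count vector, normalised to unit length so that
    # a plain dot product equals cosine similarity.
    counts = np.array([text.split().count(w) for w in vocab], dtype=float)
    norm = np.linalg.norm(counts)
    return counts / norm if norm else counts

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    sims = doc_vecs @ embed(query)          # cosine similarity per document
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("what is the capital of france"))
# → ['paris is the capital of france']
```

The retrieved passages would then be prepended to the prompt as grounding context, which is the "real-time factual lookup" the blurb refers to.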
Explains advancements in large language models (LLMs): scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. The evolution of the transformer architecture with Mixture of Experts (Mo…
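The Mixture of Experts idea mentioned in this episode routes each token through only a few specialist sub-networks instead of one monolithic feed-forward layer. A toy sketch with random weights and made-up sizes (the real routing and expert networks are learned):

```python
import numpy as np

# Toy MoE layer: a router scores every expert for a token, only the
# top-k experts actually run, and their outputs are combined weighted
# by the renormalised router probabilities.
rng = np.random.default_rng(0)
d_model, n_experts, k = 16, 4, 2
router = rng.normal(scale=0.1, size=(d_model, n_experts))
experts = [rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ router                        # one score per expert
    top = np.argsort(logits)[::-1][:k]         # choose the top-k experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                       # softmax over the chosen experts
    # Only the selected experts are evaluated; the rest are skipped,
    # which is where the compute saving comes from.
    return sum(p * (x @ experts[i]) for p, i in zip(probs, top))

token = rng.normal(size=d_model)
out = moe_layer(token)
print(out.shape)
```

With k of n experts active per token, parameter count grows with n while per-token compute grows only with k, which is why MoE lets models scale capacity cheaply.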
In this episode, we’re joined by Roey Granot, co-founder of QBIQ, a trailblazing company using artificial intelligence to transform how we design and plan spaces. QBIQ’s platform allows brokers, landlords, architects, and tenants to generate instant, customized layout plans and immersive 3D tours—redefining workflows across real estate and architec…
AI lab TL;DR | Milton Mueller - Why Regulating AI Misses the Point
18:06
🔍 In this TL;DR episode, Milton Mueller (Georgia Institute of Technology School of Public Policy) argues that what we call “AI” is really just part of a broader digital ecosystem. Instead of vague, top-down AI regulation, he calls for context-specific rules that address actual uses—like facial recognition or medical diagnostics—rather than the …
MLA 024 Code AI MCP Servers, ML Engineering
43:38
Tool use in code AI agents allows for both in-editor code completion and agent-driven file and command actions, while the Model Context Protocol (MCP) standardizes how these agents communicate with external and internal tools. MCP integration broadens the automation capabilities for developers and machine learning engineers by enabling access to a …
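MCP, mentioned above, is built on JSON-RPC 2.0 messages. A sketch of the kind of request an agent sends to invoke a tool on an MCP server; the message shape follows the spec's tools/call request, but the tool name, its arguments, and the reply text are invented examples:

```python
import json

# Hypothetical MCP-style tool invocation over JSON-RPC 2.0.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",                 # made-up tool name
        "arguments": {"city": "Brussels"},     # made-up arguments
    },
}
wire = json.dumps(request)  # what actually travels over stdio/HTTP

# A server would parse the message, run the named tool, and answer
# with a result carrying the same id so the client can match them.
reply = {
    "jsonrpc": "2.0",
    "id": json.loads(wire)["id"],
    "result": {"content": [{"type": "text", "text": "12°C, cloudy"}]},
}
print(reply["result"]["content"][0]["text"])
```

Because every tool, whether an internal script or an external API, speaks this one message shape, any MCP-capable agent can drive any MCP server without bespoke glue code.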
Gemini 2.5 Pro currently leads in both accuracy and cost-effectiveness among code-focused large language models, with Claude 3.7 and a DeepSeek R1/Claude 3.5 combination also performing well in specific modes. Using local open source models via tools like Ollama offers enhanced privacy but trades off model performance, and advanced workflows like c…
AI lab TL;DR | Kevin Frazier - How Smarter Copyright Law Can Unlock Fairer AI
16:51
🔍 In this TL;DR episode, Kevin Frazier (University of Texas at Austin School of Law) outlines a proposal to realign U.S. copyright law with its original goal of spreading knowledge. The discussion introduces three key reforms—an AI training presumption, research safe harbors, and data commons—to help innovators access data more easily. By reducing …
AI lab TL;DR | Paul Keller - A Vocabulary for Opting Out of AI Training and TDM
15:06
🔍 In this TL;DR episode, Paul Keller (The Open Future Foundation) outlines a proposal for a common opt-out vocabulary to improve how EU copyright rules apply to AI training. The discussion introduces three clear use cases—TDM, AI training, and generative AI training—to help rights holders express their preferences more precisely. By standardizing t…
AI lab TL;DR | João Pedro Quintais - Untangling AI Copyright and Data Mining in EU Compliance
25:10
🔍 In this TL;DR episode, João Quintais (Institute for Information Law) explains the interaction between the AI Act and EU copyright law, focusing on text and data mining (TDM). He unpacks key issues like lawful access, opt-out mechanisms, and transparency obligations for AI developers. João explores challenges such as extraterritoriality and trade …
AI lab TL;DR | Anna Tumadóttir - Rethinking Creator Consent in the Age of AI
19:41
🔍 In this TL;DR episode, Anna Tumadóttir (Creative Commons) discusses how the evolution of creator consent and AI has reshaped perspectives on openness, highlighting the challenges of balancing creator choice with the risks of misuse. She examines the limitations of blunt opt-out mechanisms like those in the EU AI Act, the implications for marginalized…
MLA 022 Code AI: Cursor, Cline, Roo, Aider, Copilot, Windsurf
55:29
Vibe coding is using large language models within IDEs or plugins to generate, edit, and review code, and has recently become a prominent and evolving technique in software and machine learning engineering. The episode outlines a comparison of current code AI tools - such as Cursor, Copilot, Windsurf, Cline, Roo Code, and Aider - explaining their a…
Links: Notes and resources at ocdevel.com/mlg/33. 3Blue1Brown videos: https://3blue1brown.com/. Try a walking desk to stay healthy & sharp while you learn & code. Try Descript audio/video editing with AI power-tools. Background & Motivation: RNN Limitations: Sequential processing prevents full parallelization—even with attention tweaks—making them ineffici…
AI lab TL;DR | Carys J. Craig - The Copyright Trap and AI Policy
29:23
🔍 In this TL;DR episode, Carys J. Craig (Osgoode Professional Development) explains the "copyright trap" in AI regulation, where relying on copyright favors corporate interests over creativity. She challenges misconceptions about copying and property rights, showing how this approach harms innovation and access. Carys offers alternative ways to prot…
AI lab TL;DR | Ariadna Matas - Should Institutions Enable or Prevent Cultural Data Mining?
16:18
🔍 In this TL;DR episode, Ariadna Matas (Europeana Foundation) discusses how the 2019 Copyright Directive has influenced text and data mining practices in cultural heritage institutions, highlighting the tension between public interest missions and restrictive approaches, and explores the broader implications of opt-outs on access, research, and the…
AI lab TL;DR | Martin Senftleben - How Copyright Challenges AI Innovation and Creativity
10:09
🔍 In this TL;DR episode, Martin Senftleben (Institute for Information Law (IViR) & University of Amsterdam) discusses how EU regulations, including the AI Act and copyright frameworks, impose heavy burdens on AI training and development. The discussion highlights concerns about bias, quality, and fairness due to opt-outs and complex rights manageme…
AI lab TL;DR | Mark Lemley - How Generative AI Disrupts Traditional Copyright Law
8:40
🔍 In this TL;DR episode, Mark Lemley (Stanford Law School) discusses how generative AI challenges traditional copyright doctrines, such as the idea-expression dichotomy and substantial similarity test, and explores the evolving role of human creativity in the age of AI. 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[00:54] Q1-How does genAI challenge tradit…
AI lab TL;DR | Jacob Mchangama - Are AI Chatbot Restrictions Threatening Free Speech?
16:10
🔍 In this TL;DR episode, Jacob Mchangama (The Future of Free Speech & Vanderbilt University) discusses the high rate of AI chatbot refusals to generate content for controversial prompts, examining how this may conflict with the principles of free speech and access to diverse information. 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[00:51] Q1-How does the …
Can AI Unlearn? The Future of AI and Bias Correction
40:30
In the rapidly evolving world of AI, one of the most pressing questions is: Can AI models truly unlearn? As AI becomes more integrated into our daily lives, ensuring healthier, bias-free models is crucial. In this episode, we dive deep into a groundbreaking approach—machine unlearning—with Ben Louria, founder of Hirundo. His platform tackles one of…
AI lab TL;DR | Jurgen Gravestein - The Intelligence Paradox
14:49
🔍 In this TL;DR episode, Jurgen Gravestein (Conversation Design Institute) discusses his Substack blog post delving into the ‘Intelligence Paradox’ with the AI lab 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[01:08] Q1-The ‘Intelligence Paradox’: How does the language used to describe AI lead to misconceptions and the so-called ‘Intelligence Paradox’? ⏲️[…
AI lab TL;DR | Stefaan G. Verhulst - Are we entering a Data Winter?
12:55
🔍 In this TL;DR episode, Dr. Stefaan G. Verhulst (The GovLab & The Data Tank) discusses his Frontiers Policy Labs contribution on the urgent need to preserve data access for the public interest with the AI lab 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[01:13] Q1-‘Data Winter’: Can you provide a brief overview of your concept of 'Data Winter' and why you…
AI lab – AI in Action | Episode 03: AI Tokenization
7:28
Let’s talk about AI tokenization in this third episode of our AI in Action series. Tokenization is actually pretty interesting, especially if you ever wondered how these fancy AI machines understand the stuff we type and say and produce things when we give them prompts. Next time you're marvelling at an AI-generated text, remember it's all about th…
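The tokenization the episode describes breaks text into subword pieces a model can look up. A toy sketch of the matching step, using a tiny made-up vocabulary and greedy longest-match; real tokenizers (BPE, WordPiece) learn their vocabularies from data, but the lookup idea is similar:

```python
# Hypothetical mini-vocabulary; real tokenizer vocabularies hold
# tens of thousands of learned subword pieces.
VOCAB = {"token", "iza", "tion", "to", "ken", "i", "z", "a", "t", "o", "n"}

def tokenize(text: str) -> list[str]:
    tokens, i = [], 0
    while i < len(text):
        # Try the longest remaining substring first, shrinking until
        # we find a piece that exists in the vocabulary.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            # Unknown character: emit it as its own single-char token
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("tokenization"))  # → ['token', 'iza', 'tion']
```

The model never sees raw characters or whole words, only the IDs of these pieces, which is why the same prompt can cost a different number of tokens than it has words.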
Building Smarter: How Data Drives Innovation
40:13
In this episode, we’re joined by Husain Al Asfoor, founder of Ebinaa, a platform dedicated to streamlining relationships throughout the real-estate journey. Husain has been at the forefront of digitizing the design and construction industry, with a deep focus on enhancing the flow of information across the real estate sector. His insights on how da…
AI lab TL;DR | Bertin Martens - The Economics of GenAI & Copyright
13:41
🔍 In this TL;DR episode, Dr. Bertin Martens (Bruegel) discusses his working paper for the Brussels-based economic think tank on the economic arguments in favour of reducing copyright protection for generative AI inputs and outputs with the AI lab * 9:44: Mr Martens intended to say "humans" instead of machines 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[0…
In this experimental episode of the Rebel Rant Series, I sit down with the renowned Silicon Valley realtor and entrepreneur, Charlie Giang, for a candid and unfiltered conversation. We dive deep into the world of AI, explore the future of real estate and content creation, and share some hilarious stories along the way. This isn't your typical podcas…
Unplugged with Enya Music. SEASON 2 PREMIERE of Rebel Rant Series
58:33
Get ready for a raw, uncensored and real conversation! We're kicking off Season 2 of the Rebel Rant Series with a bang! Our first guest is none other than Richard, the Global Marketing Director of @enyamusicglobal. Dive deep into the world of music, tech, and entrepreneurship as we explore ENYA Music's journey, their innovative products like the Nov…
AI lab TL;DR | Alexander Peukert - Copyright in the Artificial Intelligence Act–A Primer
15:36
🔍 In this TL;DR episode, Prof. Dr. Alexander Peukert (Goethe University Frankfurt am Main) discusses his primer on copyright in the EU AI Act with the AI lab 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[01:26] Q1-Merging copyright & AI regulation: What challenges arise from merging copyright law and AI regulation? How might this impact legislation, compli…
AI lab TL;DR | Thomas Margoni - Copyright Law & the Lifecycle of Machine Learning Models
13:29
🔍 In this TL;DR episode, Professor Thomas Margoni (CiTiP - Centre for IT & IP Law, KU Leuven) discusses copyright law and the lifecycle of machine learning models with the AI lab. The starting point is an article co-authored with Professor Martin Kretschmer (CREATe, University of Glasgow) and Dr Pinar Oruç (University of Manchester), and published …
AI lab - AI in Action | Episode 02: AI Terminology
10:49
Let’s talk about AI terminology in the second episode in our AI in Action series. The AI term gets thrown around more than a beach ball at a summer picnic, and it’s not always clear what people are talking about. “AI” is to tech what “food” is to a grocery store – sure, it covers a lot, but a hot dog and a filet mignon are pretty darn different whe…
AI lab TL;DR | Elisa Giomi - The Unacknowledged AI Revolution in the Media & Creative Industries
18:23
🔍 In this TL;DR episode, Dr Elisa Giomi, Associate Professor at the Roma Tre University and Commissioner of the Italian Communications Regulatory Authority (AGCOM), discusses her recent contribution on Intermedia, the journal of the International Institute of Communications (IIC), titled “The (almost) unacknowledged revolution of AI in the media an…
We are delighted to present another thought-provoking episode exploring the critical topics of accountability and responsibility in Artificial Intelligence within the creative industries. In this episode, we are honored to welcome Vered Horesh from Bria AI, a visionary in the field of AI. Vered shares the journey and mission of Bria AI, emphasizing…
AI lab TL;DR | Derek Slater - What the Copyright Case Against Ed Sheeran Can Teach Us About AI
13:06
🔍 In this TL;DR episode, Derek Slater (Proteus Strategies) discusses his recent blog post on the Tech Policy Press website, titled “What the Copyright Case Against Ed Sheeran Can Teach Us About AI”, with the AI lab 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[01:11] Q1 - Legal boundaries & creativity: How to define the boundary between protectable express…
AI lab - AI in Action | Episode 01: AI History
9:15
We are kickstarting our AI in Action series by diving headfirst into the key milestones that led to the gradual deployment of Artificial Intelligence, or AI for short. You might think it's some shiny new invention, looking at all the recent media coverage about robots taking over your jobs and writing bad poetry. But hold on to your Roomba, because…
The Future of AI Accountability & Responsibility
51:01
We are thrilled to bring you an insightful episode that dives deep into the pressing issues of accountability and responsibility in the realm of Artificial Intelligence within the creative industries. Introduction by Dyann Heward-Mills: The EU AI Act. To set the framework, we have the honor of welcoming Dyann Heward-Mills, an esteemed expert in AI …
AI lab TL;DR | Žiga Turk - Brussels is About to Protect Citizens from Intelligence
10:14
🔍 In this TL;DR episode, Professor Žiga Turk (University of Ljubljana, Slovenia) discusses his recent contribution for the Wilfried Martens Centre for European Studies on how “Brussels is About to Protect Citizens from Intelligence” with the AI lab 📌 TL;DR Highlights ⏲️[00:00] Intro ⏲️[01:55] Q1 - Why do you think AI regulation prioritises limiting…
Neuroscience and AI with Basis co-founder Emily Mackevicius
35:05
Emily Mackevicius is a co-founder and director of Basis, a nonprofit applied research organization focused on understanding and building intelligence while advancing society’s ability to solve intractable problems. Emily is a member of the Simons Society of Fellows, and a postdoc in the Aronov lab and the Center for Theoretical Neuroscience at Colu…
Stable Diffusion 3 with Stability AI's Kate Hodesdon
32:49
Stability AI’s Stable Diffusion model is one of the best known and most widely used text-to-image systems. The decision to open-source both the model weights and code has ensured its mass adoption, with the company claiming more than 330 million downloads. Details of the latest version - Stable Diffusion 3 - were revealed in a paper, published by t…
AI lab hot item | MEP Axel Voss: In Search of Pragmatic Solutions for AI Devs & the Creative Sector
10:43
🔥 In this 'Hot Item', MEP Axel Voss (Germany, EPP) & the AI lab discuss his intentions to bring the creative industry and AI developers around the table in mid-April for a first exchange to gain a better understanding of the issues perceived on both sides 📌 Hot Item Highlights ⏲️[00:00] Intro ⏲️[00:53] MEP Axel Voss (Germany, EPP) ⏲️[09:51] Wrap-up…
Revisiting Copyright through the Lens of AI
57:48
In this episode, we're joined by Florian Schneider, a prominent figure navigating the intersections of art, technology, and documentary practices. As a filmmaker, writer, curator, and esteemed Professor at NTNU, Florian has spearheaded groundbreaking discussions on the role of artificial intelligence (AI) in reshaping creativity and ownership acros…