Content provided by Harvard EdCast and Harvard Graduate School of Education.

When educators talk about artificial intelligence, the conversation often begins with excitement about its potential. But for Stephanie Smith Budhai and Marie Heath, that excitement must be matched with caution, context, and critical awareness.

“AI is a piece of technology. It's not human, but it's also not a neutral thing either,” says Budhai, an associate professor in the educational technology program at the University of Delaware. “We have to be intentional and purposeful about how we use technology. So, thinking about why we're using it. So why was the technology created?”

Budhai and Heath, an associate professor of learning, design, and technology at Loyola University Maryland, are the authors of “Critical AI in K-12 Classrooms: A Practical Guide for Cultivating Justice and Joy.” Their research explores how bias is built into artificial intelligence and how these biases can harm students if left unexamined. While bias in technology isn’t new — it’s been present in tools as old as the camera — both scholars argue that educators and students must learn to approach AI critically, just as they evaluate sources and evidence in other forms of learning.

“What does it mean when we ask children…to partner with or think with a machine that is based in the past, with historical data full of our historical mistakes and also doesn't really explore? It's not looking at the world with wonder. It's looking in this very focused way for the next answer that it can give the most likely possibility,” Heath says. “And I think as learners, that's actually not how we want kids to learn. We want them to explore, to make mistakes, to wrestle with ideas, to come up with divergent creative thinking.”

Both Budhai and Heath believe that using AI responsibly in education means grounding teaching in equity and critical engagement. Budhai points to projects like Story AI, which helps young students tell their own cultural stories while revealing bias in generative image tools. Heath’s Civics of Technology project encourages “technology audits,” helping teachers and students uncover the trade-offs and values embedded in everyday tools.

In this episode, we explore how to use AI critically in classrooms, and the responsibility of educators to cultivate AI literacy, develop thoughtful policies, and consider broader implications such as environmental impact, equity, and student privacy.
