
In this episode of Ed-Technical, Libby and Owen explore why traditional AI detection tools are struggling in academic settings. As students adopt increasingly sophisticated evasion methods, such as paraphrasing tools, hybrid writing, and sequential model use, detection accuracy drops and false positives rise. Libby and Owen examine the research showing why reliable automated detection is so difficult, including why watermarking and statistical analysis often fail in real-world conditions.

The conversation then turns to process-based and live assessments, such as keystroke tracking and oral exams, which offer more dependable ways to evaluate student work. They also discuss the institutional challenges that prevent widespread adoption of these methods, including resource constraints and student resistance. Ultimately, they ask how the conversation about detection might lead to more meaningful assessment.

Join us on social media:

Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
