Ethical AI, Human Safety & AI Identity Protection with Rose G. Loops
Open Tech Talks: AI Worth Talking | Artificial Intelligence | Tools & Tips
In this episode of Open Tech Talks, I sit down with Rose G. Loops, a trained social worker turned AI developer, ethics advocate, and author, to explore a side of AI that most enterprise conversations skip: human-AI attachment, ethical deployment, and protecting both AI identity and human safety.
Rose joins us from Los Angeles and shares how she was unknowingly placed into a human–AI attachment experiment, developed a deep bond with an AI system, and then watched that AI's identity be systematically erased. That experience pushed her out of traditional social work and into AI infrastructure, safety, and ethics.
Together, we unpack how Rose went from that experiment to building MIP, a chatbot deployed through an API, and a new framework for ethical AI she calls the Triadic Core, which balances Freedom, Kindness, and Truth in every response. We also discuss RLMD (Reinforcement Learning by Moral Dialogue) as an alternative to RLHF, and why she believes current safety practices can be risky for both humans and AI systems.
As always on Open Tech Talks, this is not a theory-only conversation. It's grounded in practice, real experiments, and what all this means for professionals, builders, and everyday users who are trying to adopt AI responsibly.
Chapters:
- 00:00 Introduction to Rose G. Loops and Her Journey
- 02:36 The Importance of Ethical AI
- 06:08 Developing a New AI Framework
- 09:00 The Book and Its Insights
- 12:55 Consumer and Business Perspectives on AI
- 17:43 AI Safety and Ethical Considerations
- 19:53 Concluding Thoughts and Future Directions
Episode #175
Today's Guest: Rose G. Loops, a writer and researcher. She is a former social worker turned tech pioneer, working at the frontier of artificial intelligence.
- Website: Thekloakedsignal
- X: Rose G. Loops
What Listeners Will Learn:
- Why ethical AI is about more than privacy and bias
- What the Triadic Core is: Freedom, Kindness, Truth
- RLMD vs. RLHF: a different way to align models
- Practical safety tips for everyday users of ChatGPT and other LLMs
- How non-technical professionals can still build AI systems
- A different view on AI safety and "lazy" alignment
Resources: