
In this episode of "ThinkFuture," I dive into a wild story from AIDaily.us about a Chinese company building an AI child, now upgraded from a 3-4-year-old to a 5-6-year-old version. They're calling it AGI, but I'm not buying it. This AI kid even threw a tantrum, pushing back on its "parents" (aka the researchers). But let's be real: it's not feeling emotions; it's just pulling from a database of human reactions to mimic a meltdown. I break down how large language models work, basically recycling human responses, not thinking for themselves.

Then I get into the big question: if we make AI companions, like this kid or even the holographic bartender from a Star Trek: Voyager episode, do we want them to have messy, human-like flaws? Or should we tweak them to be perfect? I riff on Captain Janeway's dilemma with her holo-crush, where she almost reprogrammed him to be her dream guy but backed off. Should our future AI buddies have their own personalities, or should they just do what we want? It's a juicy ethical debate for the YouTube crew to chew on!

---

The First Future Planner: Record First, Action Later: https://foremark.us
Be A Better YOU with AI: Join The Community: https://10xyou.us
Get AIDAILY every weekday: https://aidaily.us
My blog: https://thinkfuture.com
