
This story was originally published on HackerNoon at: https://hackernoon.com/we-are-very-early-in-our-work-with-llms-prem-ramaswami-head-of-data-commons-at-google.
Google's Head of Data Commons joined HackerNoon to discuss grounding AI in verifiable data, why "we are very early with LLMs," and MCP's open approach.
Check more stories related to machine-learning at: https://hackernoon.com/c/machine-learning. You can also check exclusive content about #llm, #data, #hackernoon-top-story, #interview, #work-with-llms, #data-with-llm, #accurate-data-with-llms, #datasets, and more.
This story was written by: @David. Learn more about this writer by checking @David's about page, and for more stories, please visit hackernoon.com.
Google Data Commons launched an MCP server to ground AI in verifiable public data from trusted sources like the UN, World Bank, and Census Bureau. The clever part: users' own LLMs do the translation work, so Google's compute isn't involved. Prem Ramaswami argues we're still "very early" with LLMs (Google's transformer paper dates only to 2017) and that the answer to hallucinations is to "try all of the above" - combining language models with robust, auditable data sources. The service is free, integrates hundreds of datasets with transparent provenance, and chose Anthropic's open MCP standard over building proprietary infrastructure. Key challenge: expanding beyond strong US/OECD coverage to make grounded AI systems globally representative.
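
For readers who want a feel for the setup the episode describes, here is a minimal sketch of a local MCP client connecting to a Data Commons-style MCP server over stdio and listing its tools, using the official Python `mcp` SDK. The server command, its arguments, and the `DC_API_KEY` variable are illustrative assumptions, not details confirmed in the episode; the point is that the user's own LLM client decides when to call these tools, so Google's compute isn't involved.

```python
# Sketch: connect an MCP client to a (hypothetical) Data Commons MCP server
# launched as a local stdio process, then list the tools it exposes.
# The command, args, and env var names below are placeholders for illustration.
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="datacommons-mcp",                      # assumed CLI entry point
    args=["serve", "stdio"],                        # assumed arguments
    env={"DC_API_KEY": os.environ.get("DC_API_KEY", "")},  # assumed API key var
)

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # A client-side LLM would call these tools to ground its answers
            # in public statistics with transparent provenance.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

if __name__ == "__main__":
    asyncio.run(main())
```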
