Rethinking AI in Data Ingestion: It’s Not About LLMs — It’s About Reducing Uncertainty

AI Didn’t Replace the Data Ingestion PM — It Changed What “Good” Means

When people talk about AI reshaping technology roles, the conversation usually centers on automation, copilots, or generative assistants rewriting how work gets done. But in foundational engineering domains like data ingestion, the transformation has been far less dramatic—and far more meaningful. AI […]

Can We Trust LLMs to Judge AI Agents?

Why “LLM-as-a-Judge” is essential, risky—and how to use it right

When teams demo AI agents, the storyline is familiar: a clean prompt, a neat answer, and confident nods across the room. But real-world agents aren’t tested in sanitized conditions. They face messy, ambiguous requests, incomplete context, policy constraints, and systems that don’t always behave as […]

5 Reasons Why Slapping an LLM on Your Data Catalog Still Doesn’t Do What You Think It Does

Ah, the promise: “Now anyone can ask a question in plain English and our AI will instantly show them the top-performing product line this quarter!” It sounds like the holy grail of data democratization. Just wire up a large language model (LLM) to your data catalog, call it a “Co-Pilot,” and — voilà — your […]