Data Governance For Everyone: Making Data (And AI) Work Across The Enterprise

In three decades of watching technology waves come and go, one pattern has stayed constant.

We rename the stack every few years—data warehouses, lakes, lakehouses, self-service BI, GenAI, AI agents. But at the decisive moment, the question in the room is still very human:

Do we trust this data enough to bet the business on it?

It rarely shows up as, “We have a data governance problem.” It shows up as delayed decisions, contested metrics, AI pilots that look impressive in demos but stall in production, and leaders quietly building their own versions of the truth.

Recent research reinforces what many of us see on the ground: a majority of AI projects never reach meaningful scale, and a large share are abandoned when organizations discover the underlying data is not “AI-ready”. Data quality, availability and governance consistently appear among the top barriers to AI.

Underneath all of this is a simple gap: most enterprises do not have a shared, practical way to make data ready for real decisions, real experiences and real AI.

That, for me, is what “data governance for everyone” needs to solve.

Anchor First On Value, Not Governance

The fastest way to lose a cross-functional room is to open with policies, committees and tools.

The fastest way to win that room is to open with value:

  • What decision are we trying to make better or faster?
  • What experience are we trying to improve—for a customer, a patient, a citizen, an employee?
  • What risk are we trying to reduce or avoid?

Once that is clear, a more disciplined question follows:

What data, in what condition, is needed to make this real—repeatedly and at scale?

In the age of AI, this isn’t just about dashboards. Data flows into analytics, analytics into models and agents, and those into decisions and experiences. Data governance and AI governance are no longer separate worlds; they are different views of the same discipline (McKinsey).

When you frame governance as the practice of making and keeping data ready for high-stakes decisions, experiences and AI, it stops looking like an IT control function. It becomes a strategic enabler.

Governance is not a side project. It is how you protect the organization from slow, poor or risky decisions.

Why “Boiling The Ocean” Fails

A pattern I see repeatedly is the “boil the ocean” response to this problem:

“Let’s fix our entire data estate first. Then we’ll be ready for AI and analytics.”

On paper, that sounds rigorous. In practice, it inflates cost, delays value and drains patience. While the governance team designs the “perfect” framework, business and AI teams move ahead with local workarounds. Technical debt and data debt quietly compound.

At the same time, industry data is sobering: a large majority of AI initiatives either fail or fail to create measurable value, often because data foundations and governance were not addressed early enough.

The alternative is more modest but much more powerful in practice:

  • Start with a small set of high-value use cases—decisions or AI applications that genuinely matter.
  • Use them to expose where your data is not ready.
  • Improve those weak points in a way that benefits both the use case and the broader data practice.
  • Turn what you learn into standards, platforms and shared services.

In other words, governance should grow from use cases, not the other way around. That is how you avoid boiling the ocean and still move the enterprise forward.

Preparing Data For Use Cases: The 6 Cs Of Data Readiness

Once you commit to a use-case-driven approach, another question appears:

Is our data actually ready for this AI or analytics use case?

In many organizations, the honest answer is: not fully.

That’s why I like to give teams a simple lens—the 6 Cs of Data Readiness. These six capabilities determine whether data is genuinely fit for purpose for analytics and AI.


1. Collect – capture the right data at the right moment

Most organizations don’t suffer from a lack of data; they suffer from too much of the wrong data and not enough of the right data. Collection should begin with the use case: Which decision are we supporting? Which signals does this model or agent truly need? That lens leads you to instrument the right touchpoints across journeys and channels—and be intentional about what not to collect, so you don’t accumulate cost, risk and noise for no reason.

2. Classify – know what the data actually is

Once data arrives, the first question is: What is this? Is it personal, sensitive, confidential, public, critical or disposable? Classification separates “handle with extreme care” from “safe to use more broadly”. In a modern environment, this cannot rely only on manual spreadsheets. Patterns, rules and AI-assisted classification help you tag data continuously. Without that, teams are guessing—and guessing is not a defensible governance strategy when AI is in the loop.
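To make the idea concrete, here is a deliberately simplified sketch of pattern-based classification: sampled column values are run through regular expressions and tagged when most of them match. The patterns and the 80% threshold are illustrative assumptions, not any vendor's API; real classifiers layer in dictionaries, rules and ML models.

```python
import re

# Hypothetical PII patterns for illustration only.
PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}"),
    "phone": re.compile(r"\+?\d[\d\s\-]{7,}\d"),
}

def classify_column(samples):
    """Tag a column 'personal' if most sampled values match a PII pattern."""
    tags = set()
    for label, pattern in PATTERNS.items():
        hits = sum(1 for value in samples if pattern.fullmatch(str(value).strip()))
        if hits >= len(samples) * 0.8:  # threshold is an assumption
            tags.add(label)
    return {"sensitivity": "personal" if tags else "unclassified", "matches": tags}

# A column of email-like values gets tagged "personal";
# fruit names stay "unclassified".
email_col = classify_column(["ana@example.com", "raj@example.org", "lee@example.net"])
fruit_col = classify_column(["apple", "banana"])
```

The point is less the regexes than the posture: classification runs continuously over incoming data instead of living in a spreadsheet someone updates twice a year.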

3. Clean – resolve everyday chaos before it reaches the boardroom

Different spellings, missing values and inconsistent definitions of the same metric quietly erode trust. Cleaning is not a one-time project. In a healthy setup, data quality checks are embedded in pipelines; anomalies are flagged early; ownership is clear. A simple test: when a senior leader asks, “Can I trust this number?”, does the room hesitate or answer with confidence? That hesitation carries a real cost in delays, rework and cautious decision-making.
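What “checks embedded in pipelines” can look like, in a minimal sketch (the field names are hypothetical): instead of silently dropping bad rows, the pipeline separates records that pass from anomalies that get flagged for an owner to review.

```python
def run_quality_checks(rows, required_fields):
    """Split records into those that pass and anomalies flagged for review.

    A minimal sketch: real pipelines add type, range, uniqueness
    and freshness checks on top of simple completeness.
    """
    passed, anomalies = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            anomalies.append({"row": row, "issue": f"missing: {missing}"})
        else:
            passed.append(row)
    return passed, anomalies

rows = [
    {"customer_id": "C1", "region": "EMEA"},
    {"customer_id": "C2", "region": ""},  # flagged, not silently dropped
]
good, flagged = run_quality_checks(rows, ["customer_id", "region"])
```

The design choice that matters is the second return value: anomalies are surfaced with a reason, so someone accountable can act on them early rather than discovering them in the boardroom.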

4. Connect – bring data together without creating new chaos

High-value decisions and AI use cases almost never live in one system. They need data stitched across products, channels, processes and sometimes partners.

Connection is about joining those pieces safely—across on-prem and cloud, batch and real-time, internal and external sources. Done well, it unlocks new insight and new AI behaviour. Done poorly, it leads to shadow copies, conflicting logic and another generation of data debt.
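As a hedged sketch of “joining safely” (source names and keys are invented for illustration): the join keeps track of unmatched records instead of losing them, which is one small way to avoid the shadow copies and data debt described above.

```python
def connect_sources(crm_records, billing_records, key="customer_id"):
    """Join two source extracts on a shared key, tracking unmatched
    records instead of silently dropping them."""
    billing_by_key = {r[key]: r for r in billing_records}
    joined, unmatched = [], []
    for record in crm_records:
        match = billing_by_key.get(record[key])
        if match:
            joined.append({**record, **match})
        else:
            unmatched.append(record)
    return joined, unmatched

joined, unmatched = connect_sources(
    [{"customer_id": "C1", "plan": "pro"}, {"customer_id": "C3", "plan": "basic"}],
    [{"customer_id": "C1", "balance": 120.0}],
)
```

In production this logic lives in a pipeline or integration platform rather than a script, but the principle carries over: every join should make its losses visible.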

5. Comply – build rules into the way work happens

Compliance is not only a legal requirement; it is also a signal of respect for customers, patients, employees and partners. To comply effectively, access, usage and retention rules must be built into systems, roles and workflows. People should not need to interpret 30-page policy documents every time they touch data. If your model of compliance relies purely on manual vigilance and good intentions, it will struggle under the scale of self-service and AI.

6. Catalog – make trusted data discoverable and usable

Even when good data exists, many teams simply don’t know it’s there. They rebuild pipelines, recreate logic or pull from the wrong source. A living, well-used catalog changes that. It shows what datasets and data products exist, what they mean, who owns them, how fresh they are, and how to request access. For many users, the catalog becomes the everyday “face” of data and AI governance.
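A catalog entry does not need to be complicated to be useful. Here is an illustrative sketch of the minimum a record might carry (the dataset names and fields are assumptions, not a specific product's schema), plus a trivial search over it:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """A minimal, illustrative catalog record; real catalogs add
    lineage, usage statistics and access-request workflows."""
    name: str
    description: str
    owner: str
    refreshed: str                       # ISO date of last refresh
    tags: set = field(default_factory=set)

catalog = [
    CatalogEntry("sales.orders_daily", "Confirmed orders, one row per order",
                 "sales-data-team", "2024-06-01", {"gold", "finance"}),
]

def search(entries, term):
    term = term.lower()
    return [e for e in entries
            if term in e.name.lower() or term in e.description.lower()]
```

Notice that every entry answers the four questions users actually ask: what is it, what does it mean, who owns it, and how fresh is it.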

The 6 Cs give business, data and technology teams a shared language. Instead of debating governance in the abstract, they can sit together and ask:

For this specific decision or AI use case, which of our 6 Cs are strong, and which are weak?

That is where governance starts turning into a practical, prioritized plan.

Culture And Operating Model: Shared Accountability, Not Central Policing

Even with the right frameworks, data governance fails if it is seen as “someone else’s job”.

Data now touches every part of the enterprise. Every function creates it, consumes it and relies on it. Yet the foundational work of preparing and maintaining that data estate is often pushed into a central office or treated as an afterthought.

How should data and AI governance be structured?

A more realistic model is to treat data and AI as a shared accountability.

  • Business teams own the meaning, quality and responsible use of the data they generate.
  • Data and technology teams own the platforms, automation and enablement.
  • Risk, security and legal shape the guardrails and stay involved from the start.

I think of this as SAFE governance: Shared Accountability For Everyone.

In a SAFE Governance model, central teams define policy, platform and cross-domain visibility; domains own definitions, data quality, local controls and AI use-case delivery. Governance stops being a gate that says “no” and becomes a way to build trusted data products, models and agents together.

Can AI Help With Governance Itself?

There is a quiet irony in how many organizations approach this topic. They want governed data so that they can do more with AI—but they don’t use AI to help govern data.

Today, AI and automation can meaningfully reduce governance friction:

  • Classifying and tagging sensitive or regulated data at scale
  • Detecting data quality issues in real time
  • Mapping lineage across systems, pipelines and models
  • Enforcing policy-as-code so access and usage rules are applied consistently
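The last item, policy-as-code, deserves a concrete (and deliberately simplified) sketch. The idea is that access rules live as data and code that every system evaluates the same way, rather than as prose in a policy document. The policy table below is hypothetical; real deployments often use dedicated engines such as Open Policy Agent, but the core pattern is the same.

```python
# Hypothetical access policies expressed as data, not documents.
POLICIES = [
    {"classification": "personal",
     "allowed_roles": {"privacy-officer", "analyst-cleared"}},
    {"classification": "public",
     "allowed_roles": {"*"}},
]

def is_access_allowed(role, classification):
    """Evaluate access consistently; default-deny when no policy matches."""
    for policy in POLICIES:
        if policy["classification"] == classification:
            roles = policy["allowed_roles"]
            return "*" in roles or role in roles
    return False
```

Because the rule is code, it is testable, versioned and applied identically everywhere, which is exactly what manual vigilance cannot guarantee at AI scale.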

Surveys show that leaders feel intense pressure to deliver AI value, yet a large majority also say they need a significant overhaul of their data strategies—and many report low confidence in AI outputs because the underlying data cannot be trusted.

The intent is not to replace human judgment. It is to free specialists from repetitive checks so they can focus on design, decisions and value creation. In that sense, AI is not just a consumer of governed data; it can become an enabler of better governance.

From Compliance To Confidence: A Different Ending

In the age of AI, governance cannot live only in the compliance corner. It has become central to confidence:

  • Confidence that the numbers behind a decision are sound.
  • Confidence that models and agents behave in ways the organization can explain and defend.
  • Confidence that people’s data is being used with care and respect.

If there is one practical way to start, it is this:

Pick a single, high-stakes AI or analytics use case. Bring business, data and technology leaders into the same room and ask:

  • Which of our 6 Cs are weak for this?
  • What will it take—people, process, platform—to strengthen them?
  • Who owns that change, and by when?

Do that well a few times, and the story changes.

Governance stops being a checklist. It becomes the quiet infrastructure of trust that lets your AI ambitions move faster, not slower.

And over time, it becomes what it was always meant to be: not “data governance vs. everyone” but data governance for everyone—including the AI systems you are trusting with your next decade of growth.
