AI and Domain Intelligence Require Ontology
AI runs on context, and context is built on a foundation of ontology and semantics.
Companies are investing heavily in AI without understanding what makes it work. There is no artificial intelligence applied to business without the scaffolding for domain intelligence.
Without that scaffolding, what companies call “AI” is pattern matching on data they don’t understand. The algorithms execute and produce outputs, but those outputs aren’t reliable—they’re disconnected from how the business actually functions.
Domain intelligence requires three things, built in order:
Ontology – the building blocks of common language. What exists in your domain and how those things relate.
Semantics – the language layer. How we talk about those things and what terms actually mean.
Context – situational applied meaning. How to correctly interpret any data point within the web of business relationships.
You cannot have context without semantics. You cannot have semantics without ontology. And you cannot have domain intelligence without context.
Most companies skip this scaffolding entirely. They point AI at their data and hope it figures things out. The result: models that can’t explain themselves, metrics that don’t reconcile, and insights that don’t translate into action.
The Problem Starts With Language
When someone says “sales are down 5%,” the statement is useless without context. Five percent of what? Over what period? Which metric definition? Gross sales or net? Same-store or total? Including delivery?
When a model flags a store as “high risk,” what factors drove that assessment? Traffic decline? Margin compression? Labor issues? Competitive pressure? Without the ability to trace back through defined business concepts, the prediction is a black box.
Most businesses store this definitional work in analysts’ heads or buried in documentation that’s perpetually outdated. Every dashboard, model, and report recreates its own version of what terms mean. Systems can’t integrate. Metrics don’t reconcile. This isn’t a data quality problem—it’s a foundational problem. You’re building intelligence on top of ambiguous language and undefined concepts.
Ontology: The Building Blocks
Ontology establishes what exists in your business domain and how those things relate. These are the building blocks—the common language your entire organization can reference.
In a restaurant business, an ontology defines:
What kinds of things exist: Customer, Store, MenuItem, Order, Market, Campaign, Channel
How they’re described: Order total amount, MenuItem category, Customer segment, Store opening date
How they relate: Customer places Order; Order contains MenuItem; Store is located in Market; Campaign targets Customer segment through Channel
This is fundamentally different from how most companies organize their data. Most data systems focus on storage logistics—where information lives, how it’s filed away. An ontology focuses on business logic—what things actually are and how they work together.
An ontology captures the business constraints. For example, it doesn’t just note that orders and stores are connected—it specifies that an Order must belong to exactly one Store, can contain one or many MenuItems, and may or may not be linked to a known Customer.
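As a rough illustration, the entities, attributes, and cardinality constraints above could be sketched in code. This is a minimal sketch, not a prescription for how to implement an ontology; the field names and the example values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative ontology fragment: entities, their attributes, and
# relationship constraints encoded directly in the types.

@dataclass
class MenuItem:
    item_id: str
    name: str
    category: str          # e.g. "Pizza", "Beverage"

@dataclass
class Store:
    store_id: str
    market: str            # a Store is located in exactly one Market
    opening_date: str      # ISO date, e.g. "2019-04-01"

@dataclass
class Customer:
    customer_id: str
    segment: str

@dataclass
class Order:
    order_id: str
    store: Store                         # must belong to exactly one Store
    items: list[MenuItem]                # must contain one or many MenuItems
    total_amount: float
    customer: Optional[Customer] = None  # may or may not be a known Customer

    def __post_init__(self):
        # Enforce the "one or many MenuItems" constraint at construction time.
        if not self.items:
            raise ValueError("An Order must contain at least one MenuItem")
```

Note how the constraints from the text fall out of the structure: the `store` field is required, `items` is validated as non-empty, and `customer` is optional.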
Semantics: The Language Layer
Semantics defines how the messy reality of your business maps to those building blocks. It’s the translation layer between how people and systems actually talk and the formal ontology.
It resolves naming variations. Different systems call the same thing by different names: “cust_id”, “customer_key”, “user_id” all map to Customer identifier. Different teams use different terms: “ticket”, “check”, “tab” all mean Order.
It provides precise business definitions. “Comp sales” has one definition: revenue from stores open at least 13 months, excluding new openings and closures, for a specified comparison period. “Active customer” means a customer who has placed at least one order in the past 90 days.
It formalizes business logic. How do you calculate “true incremental lift” from a promotion? What makes a store “capacity constrained”? What defines “at-risk customer”? These are specific rules expressed in terms of the ontology.
When the POS system calls it a “check” and the analytics team calls it an “order” and the finance team calls it a “transaction,” semantics ensures they’re all referencing the same ontology building block.
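The name-resolution part of a semantic layer can be sketched very simply: a mapping from source-specific terms to canonical ontology concepts. The aliases below are the examples from the text; a real semantic layer would manage far more than a lookup table, so treat this as a toy illustration.

```python
# Illustrative semantic-layer fragment: resolve source-specific names
# to canonical ontology concepts. Mappings taken from the examples above.

ALIASES = {
    # Customer identifier variations across systems
    "cust_id": "customer_id",
    "customer_key": "customer_id",
    "user_id": "customer_id",
    # Order naming variations across teams and systems
    "ticket": "order",
    "check": "order",
    "tab": "order",
    "transaction": "order",
}

def canonical(term: str) -> str:
    """Resolve a source-specific term to its canonical ontology concept.

    Terms with no known alias are assumed to already be canonical.
    """
    return ALIASES.get(term.lower(), term.lower())
```

With this in place, the POS system’s “check,” the analytics team’s “order,” and finance’s “transaction” all resolve to the same building block: `canonical("check")` and `canonical("transaction")` both return `"order"`.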
Context: Situational Applied Meaning
Context is what allows you to correctly interpret any specific data point. Ontology and semantics create the scaffolding; context is what happens when you apply that scaffolding to real events.
An order stops being just a transaction ID, timestamp, and dollar amount. Through the lens of ontology and semantics, it becomes:
An Order placed by a specific Customer at a specific Store, using a particular Channel (delivery, dine-in, takeout), during a given Daypart, for a set of MenuItems in defined Categories, under active Campaigns, subject to specific operational constraints.
Every data point gets interpreted through the ontology and semantics, giving it full situational meaning within the web of business relationships.
This makes it possible to answer “Why did this store’s sales drop 5%?” You can trace which business factors changed—menu mix, customer segment composition, competitive intrusion, labor constraints, promotional activity—because every piece of data is contextualized within a formal model of how your business operates.
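One way to picture contextualization is as an enrichment step: a bare POS record goes in, and an Order interpreted through the ontology and semantic layer comes out. The lookup tables, daypart cutoffs, and field names below are all hypothetical placeholders.

```python
from datetime import datetime

# Hypothetical lookups standing in for the ontology/semantic layer.
STORE_MARKET = {"S123": "Chicago"}
CUSTOMER_SEGMENT = {"C42": "loyal-frequent"}

def daypart(ts: datetime) -> str:
    """Classify a timestamp into a business Daypart (illustrative cutoffs)."""
    h = ts.hour
    if h < 11:
        return "breakfast"
    if h < 14:
        return "lunch"
    if h < 17:
        return "afternoon"
    return "dinner"

def contextualize(raw: dict) -> dict:
    """Turn a bare POS record into an Order with situational meaning."""
    ts = datetime.fromisoformat(raw["timestamp"])
    return {
        "order_id": raw["id"],
        "store": raw["store_id"],
        "market": STORE_MARKET.get(raw["store_id"]),
        "customer_segment": CUSTOMER_SEGMENT.get(raw.get("customer_id")),
        "channel": raw["channel"],
        "daypart": daypart(ts),
        "total": raw["amount"],
    }
```

The raw record carried only an ID, a timestamp, and an amount; the contextualized version answers which Market, which Customer segment, which Channel, and which Daypart.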
The MCP Misunderstanding
Model Context Protocol (MCP) is getting a lot of attention as a new way to connect AI to tools and data. It’s powerful, but there’s fundamental confusion about what it actually solves.
MCP doesn’t replace your existing APIs. It doesn’t sit in the same layer as data pipelines. APIs and pipelines move data between systems. MCP standardizes how models discover and call tools and data sources, many of which are APIs under the hood.
The mistake is treating MCP as a generic data integration layer—as if wiring more sources through it will magically make AI smarter. That misses the point and usually makes things worse.
If you use MCP to expose data that lacks clear ontology and semantics, you’re not fixing integration. You’re injecting chaos. The model now has more ambiguous data to guess about, delivered through a clean protocol that makes everything look correct on the surface.
If “customer,” “sales,” and “store” mean different things in each source, MCP simply gives the model faster access to inconsistency. The weaker your base definitions, the less reliable every output becomes.
MCP is only useful when the context it exposes is structurally well-defined. The protocol cannot create meaning, resolve ambiguity, or build ontology and semantics for you.
From Scaffolding to Domain Intelligence
Once you have this scaffolding in place, you can build actual domain intelligence—the ability to answer complex, domain-specific questions and act on them:
Which stores are underperforming relative to local demand and cost structure?
Which customers are at emerging risk of churn?
Which promotions generate true incremental profit after controlling for cannibalization?
These questions are only computable if you have the scaffolding.
Integration happens. POS systems, loyalty apps, marketing platforms, inventory systems, labor scheduling, and finance all map to the same ontology building blocks. This only works because you know what each piece of information means and how it fits into the defined structure.
Consistent metrics emerge. Once “same-store sales,” “visit frequency,” “basket size,” or “customer lifetime value” are defined semantically, every dashboard and model can reuse them. Instead of each analyst manually calculating metrics their own way, they’re built once with clear business logic and reused everywhere.
Inference becomes possible. Systems can infer higher-level business states: a customer is “at risk” if they meet certain behavioral conditions; a store is “capacity constrained” if throughput and wait times exceed thresholds given staffing and kitchen configuration.
Natural language gets grounded. “Which locations are bleeding margin on delivery?” becomes a query about Stores with high delivery Channel volume and low gross margin versus peers—specific concepts with precise definitions.
Explanation works. “Store 123 is high risk” gets backed by specific factors: sales trends for specific MenuItem categories, incident history by type, competitive pressure from nearby Stores, labor constraint patterns. These are ontology concepts anyone in the business can understand.
Why This Is Non-Negotiable
Without ontology and semantics, you cannot create context. Without context, you cannot have domain intelligence. Without domain intelligence, AI is just expensive computation on ambiguous data.
Consider what happens when you try to measure incremental lift from a promotion without this scaffolding. Ten different analysts will produce ten different numbers because “incremental lift” means something different to each of them. Is it top-line sales increase? Net of baseline trends? Controlled for cannibalization? Adjusted for discount leakage?
With the scaffolding in place, “incremental sales from campaign” has one semantic definition: uplift versus statistically matched controls, net of baseline trends. That definition references specific ontology concepts: Campaign, Store, Customer, Order, Baseline. Everyone uses the same calculation. The metric is reusable and trustworthy.
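That shared definition can be made concrete as a single computation. The sketch below uses a simplified difference-in-differences shape and assumes the matched control stores have already been selected; it illustrates the idea of one reusable definition, not a full measurement methodology.

```python
# Sketch of one shared definition of incremental sales: uplift of test
# stores versus matched controls, net of each group's pre-period baseline.
# Control matching is omitted; assume controls were selected upstream.

def incremental_sales(test_pre: list[float], test_during: list[float],
                      control_pre: list[float],
                      control_during: list[float]) -> float:
    """Difference-in-differences style uplift (simplified illustration)."""
    test_change = sum(test_during) - sum(test_pre)        # test-group change
    control_change = sum(control_during) - sum(control_pre)  # baseline change
    return test_change - control_change
```

Ten analysts calling this one function get one number, because “incremental” is no longer a matter of individual interpretation.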
The Practical Reality
Most companies don’t have this scaffolding. They have data warehouses full of information where the same concept gets labeled differently depending on which system created it. They have business intelligence tools that let analysts create their own definitions. They have tribal knowledge about what metrics actually mean.
Every analysis becomes a bespoke project. What should be repeatable—calculating customer lifetime value, measuring promotional lift, identifying at-risk locations—gets rebuilt from scratch each time because there’s no shared foundation to build on.
This works at small scale. It breaks at enterprise scale.
The companies building real domain intelligence—the ones moving beyond dashboards and basic predictive models to systems that can reason about their business—are investing in the scaffolding first.
They’re creating formal ontologies that define what exists in their domain. They’re building semantic layers that map messy reality to those ontologies. They’re ensuring every data point gets full context.
AI applied to business requires it. Without it, you’re running algorithms on data you don’t fully understand.