What the 2026 Data and AI Predictions Reveal Beneath the Surface

Abdul Fahad Noori

January 19, 2026

ABSTRACT: The 2026 data and AI predictions point to more than new technologies — they expose pressure on long-standing foundations. Drawing on a practitioner-led discussion, this article explores how semantics, governance, platforms, interoperability, and engineering discipline are being tested as AI accelerates change, and what that means for data and analytics leaders preparing for 2026. 

Introduction: Listening more closely to what’s changing

Predictions often circulate faster than understanding. By the time a new forecast gains attention, many of the forces behind it have already been shaping decisions, constraints, and trade-offs inside real organizations — occasionally emerging quickly, but more often taking shape quietly over years rather than months.

That dynamic came into focus during a recent 2026 outlook discussion with industry experts across data, analytics, and engineering, including Carlos Bossy, Wayne Eckerson, David Booth, Dave Wells, Sean Hewitt, and Elliott Cordo. Rather than introducing entirely new ideas, the conversation repeatedly returned to how several long-running themes are now intersecting in more visible ways. Questions around interfaces, platforms, semantics, governance, and engineering discipline have been evolving for years, often in parallel.

Artificial intelligence brings these threads into closer contact. It accelerates their interaction and raises the stakes — particularly as agentic systems depend on shared meaning, governance, interoperability, and engineering rigor to operate safely at scale. At the same time, the discussion highlighted how different practitioners interpret and prioritize these shifts from distinct architectural, organizational, and operational vantage points.

The 2026 predictions themselves have already been shared elsewhere. Read the announcement here. This article takes a different approach. It reflects on what surfaced when those predictions were examined through discussion — when participants challenged assumptions, refined ideas, and placed familiar debates in a new context.

What follows is a perspective shaped by dialogue, intended to help data and analytics leaders understand what is changing, why those changes feel more consequential heading into 2026, and how to think about preparedness — not as a reaction to individual trends, but as an effort to strengthen the foundations that now carry more weight as AI compresses timelines.

1. Dashboards shift toward conversational BI 

Prediction lens 

The first 2026 prediction pointed to a gradual shift in how people interact with analytics. As language models and generative interfaces mature, dashboards may no longer serve as the primary entry point for insight. Instead, users increasingly expect to ask questions directly and receive explanations, comparisons, and visuals generated in response. 

The prediction did not suggest dashboards would disappear, but that their role would evolve as conversational interaction becomes more natural. 

What surfaced in the discussion 

A consistent thread across the discussion was how conversational access is changing the role dashboards play, without displacing their value entirely. Asking questions in plain language — “What happened last quarter?” or “Why did this metric change?” — aligns more closely with how business users think than navigating filters or pre-built views. 

From this perspective, generative BI was framed less as chart automation and more as contextual interpretation. The value lies in comparison, explanation, and narrative — not just in rendering visuals, but in helping users understand relationships and drivers. 

At the same time, the discussion widened to include the continuing role of dashboards. Rather than being displaced, dashboards were described as changing function. For executives in particular, stable and familiar views still provide quick orientation. The difference is that dashboards increasingly serve as reference points, while deeper exploration happens through other modes of interaction.

As the conversation progressed, attention shifted away from interfaces altogether. Whether insights are delivered through dashboards, conversational tools, or APIs, they rely on shared metric definitions and business logic. Without consistent semantics, new interaction models simply surface existing inconsistencies more quickly. 
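
To make this concrete, the sketch below shows one way a single shared metric definition could back both a dashboard tile and a conversational answer, so the two interfaces cannot drift apart. The names and structure are illustrative assumptions, not any particular product's semantic layer.

```python
# Minimal sketch: one shared metric definition reused by two interfaces.
# All names (MetricDefinition, net_revenue, etc.) are hypothetical, not a product API.
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricDefinition:
    name: str
    sql_expression: str   # the agreed business logic, defined once
    grain: str            # the level at which the metric is valid
    description: str      # plain-language meaning, usable by people and by LLM prompts


NET_REVENUE = MetricDefinition(
    name="net_revenue",
    sql_expression="SUM(gross_revenue) - SUM(refunds)",
    grain="order_date (daily)",
    description="Gross revenue minus refunds, before tax.",
)


def dashboard_query(metric: MetricDefinition, period: str) -> str:
    # A dashboard tile compiles its SQL from the shared definition.
    return f"SELECT {metric.sql_expression} FROM orders WHERE period = '{period}'"


def conversational_answer(metric: MetricDefinition, period: str) -> str:
    # A conversational interface reuses the same definition to explain itself.
    return (f"{metric.name} for {period} is computed as {metric.sql_expression} "
            f"({metric.description})")


if __name__ == "__main__":
    print(dashboard_query(NET_REVENUE, "2025-Q4"))
    print(conversational_answer(NET_REVENUE, "2025-Q4"))
```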

What emerged was not a debate about replacing dashboards, but a recognition that interaction models are expanding — and that the underlying clarity of meaning now plays a larger role in determining whether those interactions are useful. 

Panel perspectives 

  • David Booth: Highlighted a shift in how users initiate analysis, signaling that analytics teams should expect more question-driven interaction and less reliance on predefined navigation paths.

  • Wayne Eckerson: Reinforced the ongoing value of executive dashboards as orienting tools, underscoring the importance of maintaining trusted views alongside new interfaces.

  • Dave Wells: Drew attention to the need for analytics environments that support multiple interaction modes simultaneously, rather than forcing a single experience.

  • Sean Hewitt: Framed conversational analytics as an explanation challenge, pointing to semantics and metadata as prerequisites for trustworthy AI-assisted insight.

  • Elliott Cordo: Noted that consistency and familiarity remain important organizational anchors, particularly as new interaction models are introduced.

  • Carlos Bossy: Recast dashboards as a personalization issue, suggesting that long-standing one-size-fits-all approaches were always a limiting factor.

What this means for data and analytics leaders 

As conversational access becomes more common, analytics teams are designing a broader set of ways for users to engage with insight. Dashboards, conversational tools, and embedded analytics increasingly coexist, each serving different moments of decision-making. For leaders, this expands the scope of responsibility: selecting appropriate interfaces, while also ensuring those experiences remain aligned so users can move between them without encountering different answers or interpretations. 

2. Data platforms consolidate into “data clouds” 

Prediction lens 

The second 2026 prediction focused on continued consolidation across the data and analytics ecosystem. Platforms that once specialized in storage, analytics, or engineering are increasingly expanding to support pipelines, governance, AI workloads, and data sharing within a single environment. 

The discussion framed this shift as a reflection of how teams already operate when data, compute, and tooling converge — with consolidation emerging less from formal mandates and more from the pull of simplicity, proximity, and ease of use. 

What surfaced in the discussion 

As the panel explored this idea, the conversation grounded itself in a familiar historical pattern. Over time, most data tools tend to follow the same trajectory: they begin with a narrow focus, then steadily accumulate adjacent capabilities until they function as platforms. What felt different in this discussion was not the pattern itself, but the pace at which it is accelerating under AI-driven workloads. 

From an operational perspective, consolidation increasingly reflects where work actually happens. When data, compute, orchestration, permissions, and AI services live together, teams naturally gravitate toward that environment.  

For many organizations — particularly mid-sized ones — consolidation is experienced as a way to reduce friction, with fewer integrations to manage, fewer handoffs between tools, and fewer failure points across the data lifecycle. 

This practical framing led to a broader architectural question during the discussion. As open table formats like Iceberg gain traction, does platform consolidation still matter in the same way? 

The conversation did not settle on a definitive answer. Open formats were acknowledged as meaningful progress — lowering barriers and improving interoperability — but not as a full counterweight to consolidation. Skills, workflows, governance processes, and organizational habits remain deeply intertwined with platforms, creating forms of gravity that extend beyond storage formats alone. 

As the discussion continued, attention shifted to a quieter but persistent constraint: metadata. While platforms consolidate infrastructure and services, many organizations still operate with fragmented catalogs, lineage tools, and semantic definitions spread across environments. Consolidation at the platform level does not automatically resolve this fragmentation, and in some cases can obscure it further if coordination is not intentional. 

What emerged was a shared recognition that consolidation is already underway, driven by convenience and proximity to data. Whether it simplifies complexity or merely relocates it increasingly depends on how well meaning, metadata, and governance keep pace. 

Panel perspectives  

  • Carlos Bossy: Described consolidation as a recurring industry pattern, especially as platforms expand to support AI and operational workloads alongside analytics.

  • Elliott Cordo: Emphasized the practical appeal of platform-native services, particularly when permissions, compute, and data access are aligned in one environment.

  • Wayne Eckerson: Raised the question of whether open formats meaningfully reduce lock-in, shifting attention beyond storage to broader switching costs.

  • Dave Wells: Pointed to metadata fragmentation as the unresolved issue, noting that many organizations still manage multiple catalogs and lineage systems in parallel.

  • Sean Hewitt: Connected consolidation pressure to AI readiness, arguing that as metadata becomes foundational for AI, fragmented environments become harder to sustain.

What this means for data and analytics leaders  

As data and analytics capabilities concentrate within fewer platforms, more architectural and governance decisions are shaped by platform defaults rather than explicit design choices. Over time, this changes how visibility into definitions, controls, and reuse is experienced across teams. Leadership attention increasingly shifts toward understanding where decisions are being made implicitly, and how those choices shape consistency and autonomy as platforms absorb more responsibility. 

3. Universal semantic layers gain traction — conceptually, not physically 

Prediction lens 

The third 2026 prediction focused on renewed interest in the idea of a universal semantic layer — a shared understanding of metrics, entities, and business logic that can be reused across analytics, operational systems, and AI-driven experiences. 

Rather than predicting the emergence of a single new platform or product, the prediction pointed to a growing recognition that semantic consistency is becoming foundational as organizations scale analytics and AI simultaneously. 

What surfaced in the discussion 

During the discussion, the conversation distinguished between agreement on intent and caution around implementation. Conceptually, the appeal of a universal semantic layer was widely shared. As analytics expands beyond dashboards into APIs, applications, and AI agents, the need for consistent meaning becomes harder to ignore. 

At the same time, the discussion repeatedly returned to the practical reality that semantics already exist in many places. BI tools, transformation layers, database views, knowledge graphs, and now AI agents all carry their own semantic representations. Attempting to collapse these into a single physical layer was not framed as a realistic or desirable outcome. 

Instead, the conversation emphasized that semantics tend to serve different purposes depending on where they live. A BI semantic model optimizes for reporting and analysis. A knowledge graph supports relationships and inference. Data products and APIs encode contractual meaning. AI agents increasingly rely on lightweight semantic contracts to interact safely with data. Each plays a role, but none replaces the others. 
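
As a small illustration of what such a lightweight contract might look like, the sketch below defines meaning, ownership, and allowed operations for one entity, and shows an agent checking it before acting. The fields and rules are hypothetical assumptions, not a standard or any specific tool's format.

```python
# Minimal sketch of a "lightweight semantic contract" an agent could consult
# before touching data. Entity, fields, and rules are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class SemanticContract:
    entity: str
    owner: str
    allowed_operations: list[str]
    column_meanings: dict[str, str]
    constraints: list[str] = field(default_factory=list)


CUSTOMER_CONTRACT = SemanticContract(
    entity="customer",
    owner="crm-team@example.com",
    allowed_operations=["read"],
    column_meanings={
        "customer_id": "Stable surrogate key; never shown to end users.",
        "lifetime_value": "Sum of net revenue attributed to the customer, in USD.",
    },
    constraints=["PII columns require masked access", "No row-level deletes"],
)


def agent_can(contract: SemanticContract, operation: str) -> bool:
    # The agent checks the contract instead of inferring meaning and
    # permissions from raw tables.
    return operation in contract.allowed_operations


print(agent_can(CUSTOMER_CONTRACT, "read"))    # True
print(agent_can(CUSTOMER_CONTRACT, "delete"))  # False
```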

Team size and organizational maturity also surfaced as important factors. Smaller teams may benefit from tighter consolidation of semantic definitions. Larger organizations often require multiple semantic components that align conceptually but operate independently. In that context, the challenge is not eliminating semantic layers but coordinating them. 

Language itself became part of the discussion. Terms like “enterprise” or “universal” can trigger expectations of large, multi-year initiatives. Several perspectives suggested that semantics gain adoption more effectively when they emerge from concrete use cases and are assembled incrementally, rather than declared upfront as an enterprise mandate. 

What emerged was a reframing of the prediction itself. The future does not point toward one physical semantic layer, but toward an enterprise semantic model — expressed through multiple, coordinated components that evolve as needs expand. 

Panel perspectives  

  • Wayne Eckerson: Framed the idea of a universal semantic layer as a conceptual necessity, while questioning how it could realistically be implemented across diverse systems.

  • Dave Wells: Drew a clear distinction between conceptual universality and physical centralization, emphasizing that multiple semantic components are often required for different architectural roles.

  • Sean Hewitt: Highlighted the growing cost of unmanaged semantics, noting that additional layers increase complexity unless they are intentionally coordinated.

  • Carlos Bossy: Warned against introducing “enterprise” semantics as a top-down initiative, suggesting that use-case-driven adoption builds credibility and momentum more effectively.

  • Elliott Cordo: Pointed to data contracts and agent interfaces as emerging forms of “micro-semantics” that need alignment with broader enterprise meaning.

What this means for data and analytics leaders 

As semantic usage spreads across analytics, applications, and AI, it increasingly constrains architectural choices rather than enabling them. Decisions about where meaning is defined, duplicated, or translated shape how easily systems can evolve later. Leaders don’t need to converge semantics immediately, but they do need visibility into where semantic decisions are being made — because those decisions quietly determine how flexible the organization will be as new use cases emerge. 

4. AI governance becomes mandatory — and data governance becomes the foundation 

Prediction lens 

The fourth 2026 prediction focused on the growing inevitability of AI governance. As organizations move from experimentation to production use of AI, questions around trust, accountability, security, and operational control move from the margins to the center. 

Rather than framing AI governance as a brand-new discipline, the prediction suggested that it would increasingly act as a forcing function — exposing gaps in existing data governance practices that organizations could previously afford to overlook. 

What surfaced in the discussion 

The discussion revealed how AI governance moves governance concerns out of policy conversations and into daily operational decisions, as models and agents surface gaps through real usage rather than formal review. Many AI failures discussed were not framed as technical breakdowns, but as governance failures — unclear ownership, weak metadata, inconsistent definitions, limited testing, and insufficient operational controls. 

Metadata emerged as a central theme. Issues such as hallucinations, unreliable AI-generated reports, and inconsistent outputs were repeatedly traced back to missing or ambiguous context rather than flawed models. When AI systems lack clear definitions, lineage, or usage constraints, they amplify uncertainty instead of containing it. 

At the same time, the discussion made clear that AI introduces governance pressure even in organizations with established data practices. Decisions about how AI outputs should be tested, monitored, and trusted — especially when behavior is probabilistic rather than deterministic — cannot remain informal. What may have been acceptable as best-effort governance in traditional analytics becomes a visible risk once AI systems are embedded in operational workflows. 

Participants also noted that many organizations already possess governance assets that can support AI — semantic models embedded in BI tools, catalogs, access controls, and quality checks — but these assets are often fragmented and inconsistently applied. One example raised during the discussion illustrated this gap clearly. Semantic models built for analytics — particularly in mature BI environments — often already encode a large portion of the business logic AI systems need. In some cases, these models can be programmatically adapted for AI use, covering a significant share of semantic requirements without starting from scratch. What became apparent is that many organizations already have more semantic structure in place than they tend to recognize. 
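
One hedged illustration of that reuse: the sketch below flattens measure definitions from a simplified, hypothetical BI semantic model into plain-text context an AI assistant could be given, rather than rebuilding those definitions from scratch. The structure is a stand-in, not the export format of any real BI tool.

```python
# Minimal sketch: reusing definitions already present in a BI semantic model
# as context for an AI assistant. The input dictionary is a simplified,
# hypothetical stand-in for what a real BI tool would expose.
bi_semantic_model = {
    "measures": {
        "gross_margin": {
            "expression": "SUM(revenue) - SUM(cogs)",
            "description": "Revenue minus cost of goods sold.",
        },
        "active_customers": {
            "expression": "COUNT(DISTINCT customer_id)",
            "description": "Customers with at least one order in the period.",
        },
    }
}


def to_ai_context(model: dict) -> str:
    """Flatten approved measure definitions into text an LLM prompt or agent can reuse."""
    lines = [
        f"- {name}: {measure['description']} (computed as {measure['expression']})"
        for name, measure in model["measures"].items()
    ]
    return "Approved metric definitions:\n" + "\n".join(lines)


print(to_ai_context(bi_semantic_model))
```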

The discussion ultimately converged on a clear framing: AI governance does not replace data governance. Instead, it accelerates the moment when weaknesses in data governance, metadata management, testing practices, and security controls can no longer be deferred. 

Panel perspectives 

  • Sean Hewitt: Framed AI governance as an accelerator, emphasizing that most AI failures stem from weak data governance foundations rather than model behavior.

  • David Booth: Observed that many AI hallucinations are better understood as metadata problems, where missing context leads to misleading outputs.

  • Carlos Bossy: Highlighted that organizations often already have usable semantic and metadata assets inside BI and analytics platforms, even if they are not yet connected for AI use.

  • Elliott Cordo: Emphasized how AI systems increase exposure around security, permissions, and sensitive data, raising the stakes of weak operational controls.

  • Dave Wells: Reinforced that AI governance builds on data governance principles, even as it forces them to be operationalized more rigorously.

What this means for data and analytics leaders 

AI governance acts as an accelerator rather than a replacement for existing practices. As AI systems move into production, gaps in data governance, metadata management, testing, and security surface much faster and with greater impact than before. Organizations that have deferred these foundations find them exposed quickly, while those that have invested in them are able to scale AI with fewer surprises. The practical implication is not to invent parallel governance structures, but to mature existing ones at the pace that AI now demands. 

5. Interoperability becomes the future of data management 

Prediction lens 

The fifth 2026 prediction shifted attention from integration as a default strategy to interoperability as a longer-term necessity. As organizations operate across increasingly SaaS-heavy and operationally distributed environments, the prediction suggested that copying data everywhere is becoming harder to justify — economically and operationally. 

Rather than replacing integration outright, the prediction positioned interoperability as a complementary approach that reduces fragility as systems change. 

What surfaced in the discussion 

The conversation reframed a long-standing habit. For years, integration has meant moving data into a central place, reshaping it to fit a common schema, and maintaining pipelines as systems evolve. This approach still works in many scenarios — but it was described as increasingly brittle in environments where applications change frequently and ownership is decentralized. 

Interoperability, by contrast, surfaced as a different mindset. Instead of enforcing uniform schemas everywhere, it relies on semantic translation — allowing systems to retain local autonomy while participating in a shared understanding of meaning. This reduces the need to copy data preemptively and limits breakage when upstream systems change. 
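
A minimal sketch of that mindset, using hypothetical system and field names: each system keeps its own schema, and a translation step projects records onto shared business meaning at read time instead of copying everything into a central schema.

```python
# Minimal sketch of semantic translation: two systems keep their local schemas,
# and a mapping projects each onto shared business meaning on read.
# System names, fields, and values are hypothetical.
crm_record = {"acct_id": "A-1001", "mrr_usd": 420.0}               # CRM's local schema
billing_record = {"account_key": "A-1001", "monthly_fee": 420.0}   # billing system's schema

# Shared meaning is defined once; each source maps into it.
FIELD_MAPPINGS = {
    "crm":     {"account_id": "acct_id",     "monthly_recurring_revenue": "mrr_usd"},
    "billing": {"account_id": "account_key", "monthly_recurring_revenue": "monthly_fee"},
}


def translate(record: dict, source: str) -> dict:
    """Project a source record onto the shared semantic fields."""
    mapping = FIELD_MAPPINGS[source]
    return {shared: record[local] for shared, local in mapping.items()}


print(translate(crm_record, "crm"))
print(translate(billing_record, "billing"))
# Both print {'account_id': 'A-1001', 'monthly_recurring_revenue': 420.0}
```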

Several perspectives emphasized that this shift is not philosophical, but practical. As organizations adopt more SaaS platforms, operational systems, and external data sources, the cost of duplication grows — not just in storage, but in data quality management, reconciliation, and ongoing maintenance. Every copied dataset introduces another surface area for drift. 

Semantics emerged again as the enabling layer. Without shared definitions, interoperability collapses into confusion. With them, organizations can query across systems, expose data through products or APIs, and support AI use cases without replicating entire datasets. Integration remains useful — but no longer sufficient on its own. 

The discussion also surfaced a behavioral pattern. Many teams default to copying data because it feels safer and more familiar. Interoperability requires greater confidence in definitions, contracts, and governance — but pays off by reducing long-term operational drag. 

What emerged was not a rejection of integration, but a recalibration. As environments become more distributed, interoperability becomes the strategy that scales — while integration becomes a targeted tool rather than a universal solution. 

Panel perspectives  

  • Dave Wells: Framed interoperability as a response to the limitations of schema-heavy integration, particularly in environments where systems evolve independently.

  • Elliott Cordo: Connected interoperability to the rise of data contracts, noting that shared semantics allow systems to interact without constant copying.

  • Carlos Bossy: Observed that as semantic models mature, users increasingly query meaning rather than raw tables, reducing the need for physical consolidation.

  • Sean Hewitt: Highlighted how less duplication reduces downstream data quality overhead, making governance easier to sustain over time.

What this means for data and analytics leaders 

The cost of copying data increasingly shows up as operational drag — reconciliation, quality management, and pipeline maintenance. Interoperability shifts some of that effort away from duplication and toward semantic translation, changing how costs accumulate over time. Leadership attention increasingly sits with understanding where data movement creates clarity and speed, and where it quietly adds long-term overhead. 

6. AI accelerates the convergence of data engineering and software engineering 

Prediction lens 

The sixth 2026 prediction focused on a shift that has been building quietly for years but is now becoming unavoidable. As AI-driven workloads move closer to the core of business operations, the line between data engineering and software engineering continues to blur. 

Rather than introducing new responsibilities, AI raises expectations — pushing data teams toward the same standards of reliability, testing, and operational discipline that software teams have long been held to. 

What surfaced in the discussion 

Historically, data engineering evolved with a strong emphasis on pipelines and delivery speed, often without the same rigor around testing, deployment, and failure handling that software engineering developed over decades. 

AI workloads change that equation. When data systems directly influence recommendations, personalization, customer interactions, and automated decisions, the tolerance for silent failures or delayed corrections drops sharply. In this context, data pipelines behave less like back-office utilities and more like production software. 

The discussion also highlighted how AI increases both pressure and opportunity for data teams. On one hand, AI workloads demand stronger engineering discipline — versioning, CI/CD, observability, and controlled change management. On the other, AI-assisted tooling is beginning to lower the barrier for adopting these practices, helping data engineers expand their skill sets and operate with greater consistency. 
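
As one small example of that discipline, the sketch below shows the kind of automated batch check that could gate a pipeline deployment in CI. The table, rules, and sample data are illustrative assumptions; teams would normally express these checks in whatever test or data-quality framework they already use.

```python
# Minimal sketch of treating a pipeline like production software: a batch-level
# validation that runs in CI before a transformation ships. Table and rules
# are illustrative, not a specific framework's API.
import datetime


def validate_orders(rows: list[dict]) -> list[str]:
    """Return a list of failures; an empty list means the batch may be published."""
    failures = []
    for i, row in enumerate(rows):
        if row.get("order_id") is None:
            failures.append(f"row {i}: missing order_id")
        if row.get("amount", 0) < 0:
            failures.append(f"row {i}: negative amount")
        order_date = row.get("order_date")
        if order_date is not None and order_date > datetime.date.today():
            failures.append(f"row {i}: order_date in the future")
    return failures


sample_batch = [
    {"order_id": 1, "amount": 99.0, "order_date": datetime.date(2026, 1, 15)},
    {"order_id": None, "amount": -5.0, "order_date": datetime.date(2026, 1, 16)},
]

problems = validate_orders(sample_batch)
if problems:
    # In CI, a non-empty result would fail the build instead of silently shipping bad data.
    print("Blocking deployment:", problems)
```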

Several perspectives emphasized that this convergence is not about turning data engineers into full-stack developers. Instead, it reflects a shift in expectations: data products are increasingly treated as long-lived assets with uptime requirements, user impact, and operational accountability. 

What emerged was not a call for new roles, but a recognition that the definition of “data engineering” itself is changing — shaped less by tooling and more by how data systems are expected to behave in production. 

Panel perspectives 

  • Elliott Cordo: Framed AI as the catalyst that raises reliability and operational expectations for data systems, aligning them more closely with software workloads.

  • Dave Wells: Emphasized that data engineering increasingly involves building data products, not just pipelines, bringing software engineering principles to the forefront.

  • Carlos Bossy: Observed that this convergence has been overdue, noting that software engineering matured earlier while data engineering is now catching up under AI-driven pressure.

What this means for data and analytics leaders 

Reliability, testing, and observability are becoming standard expectations for data systems that support AI-driven use cases. As these systems play a more direct role in business operations, data engineering work increasingly carries production-level accountability for behavior under change, failure, and scale. Over time, this influences how teams are staffed, evaluated, and supported, as data systems take on characteristics long associated with production software. 

Preparing for 2026: when AI compresses timelines 

Across the discussion, one technology shift stands apart in how strongly it shapes the rest: AI. More than introducing new capabilities, AI changes the tempo of data and analytics work. Issues that once surfaced gradually now appear quickly and visibly, as AI systems depend on clear meaning, reliable metadata, disciplined testing, and strong security controls from the start. 

As organizations expand conversational analytics, deploy AI into operational workflows, and consolidate platforms, long-standing assumptions around data definitions, governance, interoperability, and engineering practices are placed under immediate strain. What previously functioned as background infrastructure increasingly determines whether AI-enabled initiatives scale smoothly or stall under uncertainty. 

A notable theme from the conversation is that many organizations already possess elements of the foundations they need — semantic models embedded in BI tools, governance rules, access controls, and operational processes developed over time. What AI changes is not the need to invent these foundations anew, but the urgency with which they must hold together across systems, teams, and use cases. 

This is where Datalere works with data and analytics leaders. Datalere helps organizations understand how existing architectural choices, semantic assets, governance practices, and engineering workflows interact — and where strengthening those connections will matter most as AI accelerates expectations in 2026 and beyond.  

In this context, preparedness rests on the strength of meaning, trust, and operational discipline — foundations that determine how well organizations surface and absorb issues as AI shortens the distance between data decisions and business impact.

