Two Realities Behind Data Engineering Delays

Wayne Eckerson
September 12, 2025

ABSTRACT: This blog explores two contrasting realities in data engineering: teams bogged down by inefficiencies and teams stretched to their limits by high demand. By understanding these dynamics, leaders can separate symptoms from root causes and make smarter decisions about where to invest.
Introduction
A car in first gear will move and build momentum, but if you stay in that gear and never shift up, you won't get very far. The engine strains, gets noisy, and burns fuel inefficiently.
We’ve seen the same pattern in data engineering teams.
At a glance, many teams appear busy. But look closer—and you’ll notice vastly different realities. Some are overextended high performers, slowed by platform sprawl or prioritization friction. Others are stuck in rework cycles, blocked by brittle pipelines or legacy systems.
In our recent webinar on evaluating data engineering productivity (link recording), panelists shared real-world examples of both. We unpacked these themes further in an in-depth article on data team productivity (link article)—exploring what productivity really looks like, how to interpret delivery delays, and why use-case-led modernization works best.
This post builds on that conversation and is a practical guide for business leaders:
How do you distinguish between a team that’s productive but under-supported—and one that needs structural change?
Two Realities in Data Engineering
In our experience, no team fits neatly into a single category. But we often see patterns—tendencies that reveal themselves under pressure.
To illustrate the contrast, let’s consider two common profiles: the Blue Team and the Green Team.
Blue Team: Overworked but Capable
For leaders trying to assess team performance, it helps to recognize when a team is operating at a high level—but stretched thin. Here's what that often looks like:
High volume of incoming requests, but maintains code quality and reliability.
Despite being stretched thin, the team consistently delivers stable, production-ready assets.
Reuses core models and code and leverages modern tools and automation.
The team builds reusable assets, such as models and semantic layers, which save time later on. They also use standard tooling and automation scripts to reduce rework and keep the focus on meaningful builds.
Builds high-quality, instrumented pipelines aligned with business needs.
The pipelines are built using DataOps techniques, incorporating extensive testing and continuous integration/continuous delivery (CI/CD) practices to ensure scalability, quality, and reliability (a minimal sketch of such a check appears below).
Most delivery challenges stem from prioritization or capacity, not capability.
They're effective but at risk of burnout if strategic decisions overwhelm their capacity to deliver.
Their delivery challenges often stem from sheer workload. In these cases, the opportunity lies in better prioritization and clearer alignment, not a process overhaul.
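To make the "instrumented pipelines" point above more concrete, here is a minimal, illustrative sketch of the kind of automated data-quality check such a team might wire into CI so that a bad change fails before it reaches production. The table, column, and database names are hypothetical, and in practice most teams would use a testing framework (dbt tests, Great Expectations, or similar) rather than a hand-rolled script.

```python
# Illustrative sketch only: the kind of automated data-quality check a
# DataOps-style pipeline might run in CI before a change is promoted.
# Table and column names (orders, order_id, order_total) are hypothetical.
import sqlite3


def check_orders(conn: sqlite3.Connection) -> list[str]:
    """Return a list of failed checks; an empty list means the table passed."""
    failures = []

    # Uniqueness: duplicate order_ids usually signal a bad join upstream.
    dupes = conn.execute(
        "SELECT COUNT(*) FROM (SELECT order_id FROM orders "
        "GROUP BY order_id HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    if dupes:
        failures.append(f"{dupes} duplicated order_id value(s)")

    # Completeness: order_total should never be NULL once the pipeline has run.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE order_total IS NULL"
    ).fetchone()[0]
    if nulls:
        failures.append(f"{nulls} row(s) with NULL order_total")

    return failures


if __name__ == "__main__":
    # Tiny in-memory example so the sketch runs on its own; a real pipeline
    # would point this at the warehouse and wire it into the CI job.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, order_total REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [(1, 120.0), (2, 35.5), (2, 35.5), (3, None)],  # one duplicate, one NULL
    )
    problems = check_orders(conn)
    if problems:
        raise SystemExit("Data-quality checks failed: " + "; ".join(problems))
    print("All checks passed.")
```

The specifics matter less than the pattern: checks run automatically on every change, and a failure blocks the deploy instead of surfacing days later as a broken dashboard.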
Green Team: Inefficient Processes and Underdeveloped Skills
On the other hand, some teams may appear equally busy, but underlying deficiencies in skills and knowledge create delays and inconsistency. These patterns often point to deeper challenges:
Delivery is consistently delayed or unpredictable.
The team stays busy, but progress slows down due to fragmented workflows and repeated rework.
Struggles with design and delivery practices.
Rather than designing the appropriate models and semantics up front, the team dives straight into building, creating a lot of rework down the road.
Relies on manual processes and inconsistent workflows.
With no clear standards in place, work varies widely across developers, and so does quality.
Delivery issues are systemic, not incidental.
The bottlenecks can't be fixed with more effort; they require process redesign and skill development.
During the webinar, the panelists shared several cautionary use cases. These aren’t neat case studies of “good” vs. “bad” teams. Instead, they reflect the kinds of challenges leaders encounter when delivery systems begin to strain—whether due to scale, legacy friction, or lack of shared standards.
As you read through the next few use cases, ask yourself:
Is this a productive team operating without enough support?
Or are there deeper delivery issues that call for structural change?
Use Case: Platform Sprawl Slows Everyone Down
Carlos Bossy described a case where the platform had grown to more than 10,000 tables. Over time, delivery outpaced design. Each new request led to one-off objects, patches, or manual interventions, until the accumulated sprawl itself became a drag on productivity.
What It Looks Like
Massive table sprawl with inconsistent definitions
Difficulties finding or trusting the right data
Engineering time consumed by duplication and cleanup
What Helps
Protect time for cleanup and internal tooling
Define shared objects and promote reuse
Invest in cataloging and platform-level standards
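To make the cataloging and reuse points above concrete, here is a small, illustrative sketch of a first-pass inventory: it counts tables and flags groups with identical column signatures, which often turn out to be backup copies, one-off extracts, or per-developer temporaries. The catalog contents are hypothetical and hard-coded so the sketch runs on its own; in practice they would come from the warehouse's own metadata (an INFORMATION_SCHEMA.COLUMNS query or the platform's catalog API).

```python
# Illustrative sketch only: a first pass at answering "how many tables do we
# have, and how many look redundant?". In practice the column metadata would
# come from the warehouse catalog; here it is hard-coded so the sketch runs
# on its own.
from collections import defaultdict

# Hypothetical catalog extract: table name -> column names.
catalog = {
    "sales.orders": ["order_id", "customer_id", "order_total", "order_date"],
    "sales.orders_backup_2023": ["order_id", "customer_id", "order_total", "order_date"],
    "analytics.tmp_orders_jane": ["order_id", "customer_id", "order_total", "order_date"],
    "sales.customers": ["customer_id", "name", "region"],
}

# Group tables that share an identical column signature; these are strong
# candidates for consolidation.
by_signature = defaultdict(list)
for table, columns in catalog.items():
    by_signature[tuple(sorted(columns))].append(table)

print(f"{len(catalog)} tables inventoried")
for signature, tables in by_signature.items():
    if len(tables) > 1:
        print(f"Possible duplicates ({len(tables)}): {', '.join(sorted(tables))}")
```

Identical structure is only a heuristic, but it is usually enough to start the "how many of these do we actually need?" conversation.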
Questions for Leaders
Do we know how many active data assets exist—and how many are redundant?
Are teams creating reusable components or solving the same problem repeatedly?
Have we made space for refactoring in our delivery plans?
Use Case: Fragile Pipelines and Rework Loops
Joshua Bartels shared that in some environments, 60–80% of team time went into fixing broken pipelines and resolving failures. Delivery velocity dropped, not because of lack of effort, but due to inefficiencies built into the system.
What It Looks Like
Frequent production issues and missed SLAs
No consistency in testing or deployment practices
Engineers patch the same problems repeatedly
What Helps
Enforce test coverage and deployment automation
Simplify and standardize tooling across teams
Create feedback loops between engineering and stakeholders
Questions for Leaders
How much time is spent on rework, support, or incident management?
Do we have testing and deployment standards—and are they enforced?
Is there a clear path to reduce failure rates over time?
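On the last two questions, a practical first step is to turn failures into a trend you can watch. The sketch below is illustrative only: it assumes a simple run-log export with one row per pipeline run (most orchestrators can produce something equivalent) and computes a weekly failure rate. Once that number exists, "are we reducing failure rates over time?" becomes something a leader can check rather than sense.

```python
# Illustrative sketch only: tracking pipeline failure rate week over week from
# a run log. The run-log format here is hypothetical; most orchestrators can
# export equivalent run metadata.
from collections import defaultdict
from datetime import date

# Hypothetical export: (run date, pipeline name, succeeded?)
runs = [
    (date(2025, 8, 4), "orders_daily", True),
    (date(2025, 8, 5), "orders_daily", False),
    (date(2025, 8, 6), "customers_daily", True),
    (date(2025, 8, 12), "orders_daily", True),
    (date(2025, 8, 13), "customers_daily", True),
    (date(2025, 8, 14), "orders_daily", False),
]

totals = defaultdict(lambda: [0, 0])  # ISO week -> [total runs, failures]
for run_date, _pipeline, succeeded in runs:
    week = run_date.isocalendar().week
    totals[week][0] += 1
    if not succeeded:
        totals[week][1] += 1

for week in sorted(totals):
    count, failed = totals[week]
    print(f"Week {week}: {failed}/{count} runs failed ({failed / count:.0%})")
```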
Use Case: Legacy Blocks the AI Use Case
Reid Colson described a scenario where the business wanted to apply AI to automate candidate screening. But the applicant tracking system didn’t expose an API, and the surrounding infrastructure wasn’t ready.
What It Looks Like
High-priority requests blocked by missing integrations
Manual workarounds increase engineering workload
Frustration from misaligned expectations between business and tech
What Helps
Set delivery expectations based on technical readiness
Involve engineering early to identify blockers
Build modernization plans around real use cases
Questions for Leaders
Are business goals aligned with the capabilities of our current systems?
Do we involve engineering early enough to surface risks?
Have we prioritized modernization where it will have the most impact?
Making the Distinction Actionable
These use cases reflect common delivery challenges—but with very different root causes.
In some teams, velocity hides underlying friction. In others, delays reveal deeper structural gaps. Recognizing which situation you’re in is the first step toward meaningful improvement. That’s where an outside perspective can help.
At Datalere, we work with data leaders to:
Diagnose delivery bottlenecks
Prioritize high-impact modernization
Build scalable platforms that align with business goals
Want a fresh perspective on your delivery bottlenecks?

Wayne Eckerson
Wayne Eckerson is an internationally recognized thought leader in the business intelligence and analytics field. He is a sought-after consultant and noted speaker who thinks critically, writes clearly and presents...
Talk to Us