Productivity in Data Engineering: What Every Data Leader Should Know

Abdul Fahad Noori
September 02, 2025

ABSTRACT: This article explores what productivity means for data engineering teams today—and how business leaders can recognize the symptoms of inefficiency, ask the right questions, and create conditions for meaningful improvement.
Introduction
There’s no clear playbook for measuring the productivity of a data engineering team—especially if you’re a business leader outside the data function. When data development slows down, it’s hard to tell whether the team is underperforming… or simply caught in a web of tool sprawl, legacy pipelines, vague requests, and shifting priorities.
Unlike traditional business functions, there are no easy benchmarks. You can’t just count dashboards or data sets. Output alone doesn’t reflect quality—and speed doesn’t always signal value.
So how do you know if your data engineering team is truly effective?
That’s the question we tackled in a recent Datalere webinar moderated by Wayne Eckerson (Strategic Consultant), featuring Carlos Bossy (President, Datalere), Joshua Bartels (CTO, UDig), and Reid Colson (EVP, UDig). Together, they explored what engineering productivity looks like—from symptoms of underperformance to meaningful KPIs, hiring decisions, and the evolving role of AI.
As companies seek to upgrade their AI capabilities, this topic is more urgent than ever. Let’s unpack the insights.
1. Spotting the Symptoms of Low Productivity
You don’t need to be an engineer to sense that something’s off. If your team is always busy but struggling to deliver outcomes, it’s worth looking closer. The signs of low productivity in data engineering are rarely dramatic—but they accumulate quietly until systems start to buckle.
One of the earliest red flags is maintenance overload. Engineers spend most of their time managing breakages, adjusting pipelines, or handling small-scale requests. There’s very little net new development. Everything becomes reactive. This leads directly into firefighting as a default mode. Instead of focusing on architecture or solving root issues, teams are stuck applying manual fixes. Problems recur because there’s no time to build automation or invest in better design. Engineers move fast to patch things up—but that motion isn’t momentum.
“Motion isn’t momentum. Just because you’re doing something doesn’t mean it’s the right thing.” - Joshua Bartels (CTO, UDig)
Another subtle indicator is endless refactoring. If the team is constantly reworking the same pipelines or tables, it likely means the original designs weren’t resilient. Good engineering allows for iteration, but constant rewrites usually signal underlying fragility or misalignment.
The problem worsens with tool sprawl—especially in organizations that have grown through acquisitions or siloed teams. Engineers bring their own preferences, and soon you’re managing five different ways to ingest or transform data. The stack becomes bloated and fragmented, complicating hiring, slowing development, and making onboarding painful.
Finally, there’s the issue of team disconnect. When engineering, analytics, and business functions aren’t in sync, requests get lost in translation. Engineers might deliver technically sound work that doesn’t match what the business actually needs. That gap often reveals itself late—in adoption metrics, rework cycles, or stakeholder frustration.
Seeing the signals but not sure where to start?
Sometimes it just takes an outside lens to separate the noise from the signal. Our team works with data leaders to identify high-leverage opportunities hidden in plain sight.
2. What Does ‘Good’ Look Like?
If those are the red flags, what does a high-performing data engineering team actually look like?
It starts with reusability. Productive teams don’t reinvent the wheel every time. They build foundational assets—pipelines, code modules, data models—that can be applied across use cases. Each new initiative draws from this base, reinforcing architecture rather than working around it. When reuse is missing, it often points to deeper issues—misalignment, lack of design maturity, or fractured communication.
Sturdy pipelines are another clear marker. These are systems that do more than just function—they hold up under change. A team that transitions from firefighting mode to a 60:40 development-to-maintenance balance signals progress. It shows that their time is no longer consumed by breakages, but invested in building for the future.
“You want to build things that don’t need to be rebuilt every quarter.” - Carlos Bossy (President, Datalere)
Use-case-driven development. High-functioning teams also modernize with intention. Instead of ripping out legacy systems in one sweep, they tackle change use case by use case. New pipelines are designed to replace the old organically, retiring technical debt in parallel with delivery. That pace keeps the business moving while raising the bar with every iteration.
Tool standards. Tooling choices reflect this same clarity. Effective teams operate with a lean, purposeful stack. They know which tools serve which needs and don’t bring in additional tools simply because they are familiar with them. Finally, they don’t default to custom builds when a managed service can handle the job.
What ties all this together is strategic engineering judgment. The most productive teams don’t just move fast; they move effectively. They ask the right questions and know when to optimize, when to refactor, and when to leave things alone. The result is a system that keeps working without getting in its own way.
3. Questions Business Leaders Should Ask
Even with the right indicators and a capable team, pinpointing where productivity is breaking down can still be challenging. Business leaders don’t need to be experts in data engineering—but the questions they ask can shape the clarity and performance of the entire function.
Start with the basics: “Can you walk me through your current workload?” It’s a simple prompt, but it often surfaces the invisible strain—routine fixes, small-scale asks, and other low-leverage work that rarely shows up on dashboards. Understanding how engineers spend their time opens the door to better prioritization.
Then explore: “What’s getting in your way?” Whether it’s technical debt, lack of documentation, or slow access to data sources, every team faces friction. Asking directly can reveal blockers that are easy to overlook but sometimes surprisingly easy to fix.
“One of the most valuable questions leaders can ask is: What’s stopping you from getting more done?” - Joshua Bartels (CTO, UDig)
If project timelines feel sluggish, business leaders might ask, “If this takes four months today, how can we do it in three?” This can shed light on what’s slowing things down. It might lead to conversations about tooling, handoffs, or resource constraints—but the focus stays on uncovering what’s holding the team back, rather than pushing for arbitrary acceleration.
Business leaders should also ask: “Is the business using what we build?” It’s a helpful way to check whether engineering effort translates into meaningful outcomes. Sometimes, deliverables are technically sound but miss the mark on business relevance. This question brings attention back to value, not just activity.
When asked with curiosity, not judgment, these questions become powerful tools. They help leaders support their teams more effectively, and they create the kind of trust where issues can be addressed before they turn into major bottlenecks.
4. KPIs That Matter
Business leaders should also require data engineering teams to define performance and effectiveness metrics. These can shed light on process quality, resilience, and alignment with business needs.
Our panelists described a few useful KPIs—others may be equally valuable depending on your goals, structure, and stage of maturity. The key is to identify metrics that reflect meaningful progress.
“Analytics on analytics is often overlooked, but it reveals whether your investments are actually being used.” - Joshua Bartels (CTO, UDig)
A. Cycle Time
Cycle time measures how long it takes to fulfill a typical data request—from intake to delivery. It reveals how efficiently the team moves work through the system. Long timelines often point to complexity, unclear requirements, or dependency issues. Consistently shorter cycle times suggest process clarity and strong internal coordination.
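To make this concrete, here is a minimal sketch of how cycle time could be computed from a request log. The record structure, field names, and ticket IDs are hypothetical—in practice the timestamps would come from your ticketing or workflow system.

```python
from datetime import datetime
from statistics import median

# Hypothetical request log: intake and delivery dates per data request.
requests = [
    {"id": "REQ-101", "intake": "2025-06-02", "delivered": "2025-06-12"},
    {"id": "REQ-102", "intake": "2025-06-05", "delivered": "2025-07-01"},
    {"id": "REQ-103", "intake": "2025-06-10", "delivered": "2025-06-17"},
]

def cycle_days(request):
    """Days elapsed from intake to delivery for one request."""
    intake = datetime.fromisoformat(request["intake"])
    delivered = datetime.fromisoformat(request["delivered"])
    return (delivered - intake).days

durations = [cycle_days(r) for r in requests]
print(f"Median cycle time: {median(durations)} days")
print(f"Average cycle time: {sum(durations) / len(durations):.1f} days")
```

Median is often the more useful headline number here, since one unusually large request can skew the average.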
B. Incident Frequency and Resolution Time
These metrics reflect system stability. Frequent breakages or long downtimes indicate brittle architecture or manual dependencies. Fewer incidents—and faster resolution when they do occur—signal maturity in design, testing, and operational readiness.
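A hedged sketch of how these two numbers might be derived from an incident log follows; the log structure is illustrative, not a standard schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical incident log with open/resolve timestamps.
incidents = [
    {"opened": "2025-05-03T08:15", "resolved": "2025-05-03T11:45"},
    {"opened": "2025-05-21T14:00", "resolved": "2025-05-22T09:30"},
    {"opened": "2025-06-10T07:20", "resolved": "2025-06-10T08:05"},
]

def hours_to_resolve(incident):
    """Hours between an incident being opened and being resolved."""
    opened = datetime.fromisoformat(incident["opened"])
    resolved = datetime.fromisoformat(incident["resolved"])
    return (resolved - opened).total_seconds() / 3600

# Incident frequency per month (how often things break).
per_month = Counter(i["opened"][:7] for i in incidents)

# Mean time to resolution (how quickly they get fixed).
mttr = sum(hours_to_resolve(i) for i in incidents) / len(incidents)

print("Incidents per month:", dict(per_month))
print(f"Mean time to resolution: {mttr:.1f} hours")
```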
C. Internal Customer Satisfaction
Whether through formal surveys or informal feedback, this metric reflects how well engineering outputs match stakeholder expectations. Teams that receive fewer rework requests or deliver work that’s adopted quickly are generally more aligned with business priorities.
D. Self-Service Enablement
This metric looks at how often business users can access the data they need without engineering support. A high degree of self-service indicates that the team is building scalable, user-friendly systems—reducing friction and multiplying impact.
E. Analytics on Analytics
This KPI measures how often analytic assets—like dashboards, reports, or datasets—are actually used across the organization. High usage suggests that outputs are meaningful, discoverable, and embedded into decision-making. Low usage may indicate gaps in alignment, communication, or platform usability.
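A minimal sketch of the idea: count views per asset from a usage log and flag the assets nobody touched. The asset names and log format are made up for illustration; real numbers would come from your BI platform’s audit or query logs.

```python
from collections import Counter

# Hypothetical usage log: one row per view of an analytic asset.
usage_events = [
    {"asset": "revenue_dashboard", "viewed_by": "finance"},
    {"asset": "revenue_dashboard", "viewed_by": "sales"},
    {"asset": "churn_report", "viewed_by": "marketing"},
]

# Everything the team maintains, whether or not anyone uses it.
all_assets = {"revenue_dashboard", "churn_report", "legacy_inventory_report"}

views = Counter(event["asset"] for event in usage_events)
unused = all_assets - set(views)

print("Views per asset:", dict(views))
print("Unused assets:", unused or "none")

# Share of the catalog that saw any use in the period.
adoption_rate = len(views) / len(all_assets)
print(f"Adoption rate: {adoption_rate:.0%}")
```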
5. Knowing When to Hire — or Let Go
During the webinar, Wayne asked a direct question: “Have you ever had to let go of a data engineer?” All three panelists said yes. Their answers highlighted a few recurring themes—clues that leaders can use to assess whether someone is adding to the team’s momentum or quietly holding it back.
The panelists mentioned a skills mismatch. Sometimes an engineer is technically capable but not suited for the scope of the role. They may excel at execution but struggle with architectural thinking. Or they may be experienced in a particular platform but out of step with the team’s current stack or needs.
Then there are soft skills that are often harder to quantify than technical outputs, but they’re critical to building effective teams. The panel highlighted emotional intelligence, communication, and collaboration as essential qualities—especially as data engineers work more closely with business stakeholders.
Engineers who can engage productively in meetings, understand the broader business context, and explain trade-offs in plain language are far more likely to build solutions that get adopted.
Planning ability emerged as a key differentiator. Being busy isn’t the same as being strategic. Engineers who can articulate what they’re doing, why they’re doing it, and what comes next tend to scale better over time. They’re also easier to align with broader business priorities.
“If there’s no plan for how data engineers work, they’re just writing code.” - Carlos Bossy (President, Datalere)
Finally, adaptability matters more than ever. As platforms evolve and AI reshapes data workflows, engineers need to stay curious and flexible. A willingness to learn, unlearn, and engage with change is just as important as technical ability.
Recognizing these signals early doesn’t always mean someone needs to be let go. In some cases, coaching, reskilling, or a shift in responsibilities is enough. But where the gap is too broad—or where issues persist—leaders need to act to protect the broader health and trajectory of the team.
Hiring won’t solve a broken system.
We guide leaders through tough decisions—when to upskill, when to scale, and when to step back and rethink team structure. Whether you're planning a reorg or building from scratch, we help you get it right.
6. Modernization Through Use Cases
Many modernization efforts stall because they start with the system. A more effective approach is to start with the problem. Across the discussion, the panelists emphasized the importance of use-case-led modernization—where tangible needs shape the platform, not the other way around. This section explores how teams can modernize incrementally by prioritizing valuable use cases and letting the rest follow.
Start with the Smallest Valuable Piece
Rather than trying to modernize an entire data platform upfront, Carlos Bossy suggested building just enough to support a single, high-priority use case. Once that slice is delivered successfully, the team can reuse and extend the architecture to support future use cases. This minimum viable platform (MVP) approach ensures that every modernization step is tied to measurable business value.
Use AI to Prioritize and Accelerate Execution
AI can help teams go beyond guesswork when deciding which use cases to tackle first. By evaluating dozens of potential opportunities, it’s possible to score them based on feasibility, expected impact, urgency, and alignment with business goals. This creates a modernization roadmap grounded in evidence.
Once a use case is selected, AI also plays a role in execution—automating code scaffolding, speeding up documentation, and reducing manual rework. These accelerators help teams modernize without sacrificing velocity.
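As an illustration of the prioritization idea (not a prescribed model), the sketch below ranks hypothetical use cases with a simple weighted score across feasibility, impact, urgency, and alignment. An AI tool could help draft the inputs, but the weighting itself is plain arithmetic, and stakeholders should still validate the scores.

```python
# Illustrative weights and 1-5 scores; both are assumptions for the example.
WEIGHTS = {"feasibility": 0.25, "impact": 0.35, "urgency": 0.20, "alignment": 0.20}

use_cases = {
    "Automate finance close reporting":  {"feasibility": 4, "impact": 5, "urgency": 4, "alignment": 5},
    "Customer churn early-warning feed": {"feasibility": 3, "impact": 4, "urgency": 3, "alignment": 4},
    "Migrate legacy inventory extracts": {"feasibility": 5, "impact": 2, "urgency": 2, "alignment": 3},
}

def weighted_score(scores):
    """Combine criterion scores into a single priority score."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Highest-scoring use cases become the front of the modernization roadmap.
roadmap = sorted(use_cases.items(), key=lambda item: weighted_score(item[1]), reverse=True)
for name, scores in roadmap:
    print(f"{weighted_score(scores):.2f}  {name}")
```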
Let Legacy Retire Itself
Rather than rewriting or replacing legacy systems all at once, teams should phase them out gradually. Every successful use case becomes an opportunity to decommission a redundant or inefficient piece of the old system. Over time, technical debt is reduced not by mandate but through organic progression.
“In some cases, we didn’t eliminate technical debt—we just made it irrelevant by outpacing it.” - Carlos Bossy (President, Datalere)
Choosing the Right Use Cases
Throughout the conversation, several common traits emerged among use cases that are strong candidates for driving modernization:
They address real inefficiencies — processes that are slow, manual, or error-prone.
They have clear ownership — stakeholders who can provide feedback and validate success.
They align with strategic goals — such as revenue growth, compliance, or operational efficiency.
They are reusable or scalable — solving one problem opens the door to solving others.
They create space to retire legacy components — reducing maintenance overhead.
When modernization is framed this way—as a response to opportunity rather than a top-down initiative—it becomes sustainable, scalable, and aligned with business momentum.
7. The Role of AI and Automation
Toward the close of the discussion, Wayne asked the panel: “How do you see AI impacting data engineering in the next 12 months?”
Rather than leap into speculation, the panelists focused on tangible changes already underway. Their responses reflected both curiosity and hands-on experience with early-stage implementations.
Josh Bartels introduced the concept of GenBI—generative business intelligence. He pointed out that while GenAI has already transformed how we work with text and images, the next wave will focus on enabling business users to interact with data more naturally. “The more context we can give models to understand schemas, relationships, metadata, and lineage,” he noted, “the easier it becomes to ask business questions in natural language and get accurate answers.” Though fully automated dashboards might still be aspirational, Josh anticipates significant gains in accessibility—particularly for non-technical users seeking basic metrics or summaries from trusted data models.
Carlos Bossy emphasized that AI integration will increasingly become a default part of every new data pipeline. “As I’m moving data through my pipeline, I’m adding value to it with AI,” he said. This could include attaching narrative summaries, embedding forecasts, or enriching datasets with descriptive tags. He illustrated this with a scenario: “We just updated our financials data—now let the pipeline automatically generate a forecast at the end.” In this vision, AI doesn’t just support pipeline execution—it enhances the value of data in-flight.
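To make the idea tangible, here is a minimal, hypothetical sketch of a final pipeline step that attaches a naive forecast to freshly loaded financials. This is not the panelists’ implementation; the data and the trend logic are placeholders for whatever forecasting model (or LLM-generated narrative) a team actually uses.

```python
# Hypothetical monthly revenue figures loaded earlier in the pipeline.
monthly_revenue = [120_000, 126_500, 131_200, 129_800, 137_400]

def naive_forecast(series):
    """Project the next value from the average recent month-over-month change."""
    changes = [later - earlier for earlier, later in zip(series, series[1:])]
    avg_change = sum(changes) / len(changes)
    return series[-1] + avg_change

def finalize_pipeline_run(series):
    """Last step of the (hypothetical) pipeline: attach a forecast to the output."""
    return {
        "latest_actual": series[-1],
        "next_month_forecast": round(naive_forecast(series)),
    }

print(finalize_pipeline_run(monthly_revenue))
```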
Reid Colson shifted the lens to metadata and governance. Reflecting on the lack of progress in business metadata capture over the past two decades, he expressed hope that AI could help bridge that gap. “I’d really like to see advancement around capturing and providing the business context in the metadata—and making that a lot easier for everybody to do,” he said. This includes using AI to scan emails or documents and extract meaning—surfacing tribal knowledge that often gets lost when employees leave.
Wayne added a supporting example: a small consultancy using ChatGPT to scrape internal documentation and emails to build the semantic layer for a new data warehouse—automating nearly 60% of the foundational work.
Together, these perspectives offer a grounded view of where AI is headed: enabling better access, embedding intelligence into data flows, and closing long-standing gaps in context and understanding.
AI won’t replace engineers—but it will reshape how they work.
If you're rethinking your team’s role in the AI-powered future, we can help. From evaluating automation opportunities to selecting the right tools and models, we bring the clarity needed to move with confidence.
Conclusion: A New Kind of Productivity Conversation
The conversation around data engineering productivity is changing. It’s no longer just a question of speed or output—but of alignment, purpose, and long-term impact.
Across this discussion, one point stood out: engineering teams don’t succeed because they deliver more. They succeed when they deliver what matters—consistently, reliably, and with a clear connection to business goals.
That shift demands more from leadership. It means moving beyond surface metrics and investing in the conditions that allow teams to thrive. Leaders must define clear priorities, enable structured planning, and ensure that productivity is measured not just by how quickly something is done, but by how well it serves its intended purpose.
AI and automation may accelerate delivery. But without thoughtful direction, they can also magnify inefficiencies or distract from what truly matters. As the surface area of data work expands, so does the need for sharper focus.
This conversation is a call to make that shift—to rethink what productivity means in today’s data environment, and to build teams that are not just faster, but more focused, capable, and resilient.
This article was written by Abdul Fahad Noori with the assistance of ChatGPT, based on the webinar discussion. It was reviewed and edited by Wayne Eckerson.
