Overview
Across large, complex supply chains, transformation initiatives often stall long before technology becomes the limiting factor. According to Andy Devlin, the real constraint usually sits further upstream — in how organisations define decisions, structure data, and set expectations about what data can realistically support.
Drawing on direct experience building analytics and data capabilities inside a global manufacturing environment, Andy argues that supply-chain change rarely fails because tools are inadequate. Instead, it falters because data foundations are misaligned with the decisions they are meant to enable — leading to over-engineered architectures, slow progress, and diminishing returns.
This perspective explores why decision-led data foundations matter, how “boiling the ocean” becomes the default failure mode, and what a more pragmatic path forward looks like.
From operating reality to data reality
Andy’s starting point is deceptively simple: supply chains touch everything — procurement, planning, manufacturing, logistics, finance, and customers. Yet data is rarely structured around how decisions actually flow across those domains.
What often happens instead is that organisations attempt to impose a single, global data model before they have clarity on decision ownership, decision frequency, or decision criticality. The result is a growing gap between operational reality and analytical ambition.
In practice, this shows up as highly capable platforms sitting on top of fragmented hierarchies, inconsistent part definitions, and unresolved questions about who is allowed to see — or act on — which data. The data exists, but it cannot be reliably orchestrated into decisions.
Why “visibility” is rarely a single requirement
A recurring theme in Andy’s experience is how easily broad concepts like visibility or insight obscure underlying complexity.
What sounds like a single requirement — “give buyers visibility” — quickly expands into multiple decision contexts once examined properly. Visibility for whom? At what level? With what refresh rate? Masking which commercial data? Across which regions and product families?
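To make that concrete, here is a minimal, hypothetical sketch of what answering those questions might produce (the structure, field names, and values are illustrative assumptions, not a model Andy prescribes). Each interpretation of “visibility” becomes its own explicitly defined decision context:

    from dataclasses import dataclass, field

    @dataclass
    class DecisionContext:
        """One concrete interpretation of a broad requirement such as 'visibility'."""
        decision: str                 # the decision this view is meant to support
        audience: str                 # who acts on it
        grain: str                    # level of detail: PO line, part family, region, ...
        refresh: str                  # how current the data must be for this decision
        masked_fields: list[str] = field(default_factory=list)  # commercial data hidden from this audience
        scope: str = "global"         # regions / product families covered

    # "Give buyers visibility" resolves into (at least) two different decision contexts:
    expedite_view = DecisionContext(
        decision="expedite or accept a late delivery",
        audience="tactical buyer",
        grain="purchase-order line",
        refresh="intraday",
        masked_fields=["contracted_rebates"],
        scope="EMEA",
    )
    sourcing_view = DecisionContext(
        decision="re-source a part family",
        audience="category manager",
        grain="supplier x part family",
        refresh="weekly",
    )

Even at this level of sketch, the expedite context and the sourcing context cannot sensibly be served by a single pipeline with one refresh rate and one masking rule.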
Without resolving these questions first, organisations end up designing data pipelines and integrations that attempt to serve every possible interpretation simultaneously. This is where data programmes quietly shift from enabling decisions to accumulating technical debt.
The persistence of Excel is a data signal
Despite extensive investment in ERP, TMS, analytics platforms, and partner portals, Andy notes that Excel remains the primary decision tool for thousands of buyers. This is not a cultural failure — it is a data-model failure.
Spreadsheets persist because they allow individuals to reconcile data from multiple systems in ways central platforms cannot yet support. Each macro or pivot table reflects a locally optimised decision flow that has never been formally recognised or orchestrated.
The consequence is that critical operational logic becomes decentralised and invisible. Executives cannot see it, governance cannot scale it, and analytics teams cannot standardise it — not because people resist change, but because the data foundation was never built around those decisions in the first place.
Data foundations as the true pacing factor
Andy is explicit that data, not software, sets the pace of transformation.
AI, machine learning, and advanced analytics all assume a level of data accuracy and structure that most organisations do not yet have. The mistake is interpreting this as a reason to delay action altogether — or, conversely, as justification for multi-year data-lake programmes with no near-term business impact.
Instead, Andy describes progress as inherently iterative. Data foundations improve when they are anchored to real decision needs, refined through repeated use, and allowed to mature over time. Expecting architectural perfection upfront only guarantees slow delivery and eroded confidence.
Avoiding the “boil the ocean” trap
A key implication of Andy’s perspective is that data strategy should start small — but deliberately.
Rather than attempting to harmonise every data domain at once, organisations should prioritise a limited number of high-value decision flows. These become the proving ground for data quality, ownership, and orchestration patterns.
Over time, those patterns can be reused and extended. But trying to design for every future possibility upfront almost always leads to over-engineering — and to programmes that deliver infrastructure long before they deliver decisions.
Orchestration over integration
Andy’s experience also highlights the limits of traditional integration-led approaches. Point-to-point interfaces can move data, but they rarely support evolving decision needs.
What matters more is orchestration: how data is shaped, translated, and surfaced differently for different decision contexts — without hard-coding fragile assumptions into the architecture. This requires clarity on decision rights and ownership as much as technical capability.
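As a purely hypothetical illustration of the distinction (not a description of any specific platform or of Andy’s own implementation), orchestration can be pictured as projecting one canonical record into per-context views, with field selection and masking declared alongside decision rights rather than hard-coded into each interface:

    # Hypothetical sketch: the same canonical record, surfaced differently per decision context.
    # View definitions, not pipelines, carry the audience-specific assumptions.
    VIEWS = {
        "buyer_expedite": {
            "fields": ["po_line", "supplier", "promised_date", "days_late", "unit_cost"],
            "masked": ["unit_cost"],      # commercial field redacted for this audience
        },
        "planner_coverage": {
            "fields": ["part", "site", "weeks_of_cover", "open_po_qty"],
            "masked": [],
        },
    }

    def surface(record: dict, view_name: str) -> dict:
        """Shape one canonical record for a specific decision context."""
        view = VIEWS[view_name]
        shaped = {}
        for name in view["fields"]:
            if name in view["masked"]:
                shaped[name] = "MASKED"             # present, but redacted under decision rights
            elif name in record:
                shaped[name] = record[name]
        return shaped

    canonical = {
        "po_line": "4500012345-10", "supplier": "Acme Castings", "promised_date": "2026-03-02",
        "days_late": 4, "unit_cost": 12.40,
        "part": "P-1001", "site": "Dublin", "weeks_of_cover": 1.5, "open_po_qty": 800,
    }
    print(surface(canonical, "buyer_expedite"))     # buyer view: delivery status, cost masked
    print(surface(canonical, "planner_coverage"))   # planner view: coverage, no commercial data

In a sketch like this, supporting a new decision context means declaring another view, not building and hardening another point-to-point interface.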
Without that clarity, even well-funded data initiatives struggle to adapt when priorities shift.
Measuring progress through decisions, not architecture
Finally, Andy stresses the importance of measuring progress through decision impact rather than architectural milestones.
Waiting for “complete” data foundations before expecting value sets unrealistic expectations and delays learning. By contrast, tracking how specific decisions improve — faster responses, clearer trade-offs, reduced manual effort — allows organisations to demonstrate ROI early while still building toward longer-term capability.
In this framing, imperfect data is not a blocker. It is a starting point.
Closing reflection
Supply-chain change does not stall because organisations lack ambition or access to technology. It stalls when data foundations are built without a clear link to decisions, ownership, and operational reality.
Andy’s experience suggests a more grounded alternative: decision-led data foundations, deliberate sequencing, and orchestration patterns that grow with the business, rather than trying to outrun it.
Andy is hosting a 60-minute online discussion on this topic at 11.00 GMT on Thursday 12th February and is participating in the “Data Orchestration without Boiling the Ocean” panel at the Spring 2026 meeting in London on 29th April (see links below).
