From Use Case to Value: Where Should You Start with AI in Your Supply Chain?

Most supply chain functions have AI ambitions. Far fewer have a reliable method for identifying which ones are worth pursuing — or a business case that will get the foundational work funded.
Published: May 15, 2026

Most supply chain functions are somewhere on the AI journey, but very few are clear on where that journey should take them. A follow-up BPC discussion hosted by Andrew Dalziel from Infor, the second in a series on AI investment in supply chain, surfaced something that the first session only touched on: the problem isn't a lack of AI ambition. It's a lack of a reliable method for identifying which ambitions are worth pursuing, and a persistent difficulty building the business case for the foundational work that makes any of the more interesting ambitions viable.

Andrew is a sponsor of BestPractice.Club.

The gap between exploring and doing

A quick poll at the start of the session asked participants to characterise their current AI journey: exploring, piloting, scaling, or not started. The spread was roughly what you'd expect, with the majority somewhere between exploring and early pilots. What was more revealing was Andrew's observation about what "exploring" typically means in practice. In most organisations, it means the company has an AI policy, people are using ChatGPT or Copilot for research and document drafting, and leadership has identified AI as a strategic priority. It does not, in most cases, mean that AI is being applied operationally to optimise processes, decisions or outcomes.

The gap between having an AI policy and having an AI use case with a measurable return is where most organisations are currently stuck, and it's a gap that doesn't close on its own. McKinsey research cited in the session puts it in fairly blunt terms: only around 12% of companies have identified genuinely revenue-generating AI opportunities. The rest are exploring in the looser sense, waiting for clarity that isn't going to arrive without more deliberate work. Gartner's 2025 Hype Cycle for Supply Chain Planning Technologies captures the same dynamic from a different angle, placing generative AI in the trough of disillusionment as organisations struggle to convert pilot-stage interest into production-ready results.

The question that followed naturally from this was the one the session was designed to address: if you're in that majority, where do you actually start?

Start from the problem, not the technology

The answer Andrew kept returning to, and which practitioners in the room largely endorsed from their own experience, is deceptively simple: start from the problem. Identify the high-impact, high-value business problems first, then work back to whether and how AI might address them. The difficulty is that most organisations are not currently being asked what problems they have that AI might help with. They're being asked what they're doing with AI. Those two questions pull in opposite directions, and the second tends to produce projects that are harder to justify once the initial enthusiasm settles.

One of the more concrete examples of how to operationalise the problem-first approach came through the discussion of process mining. Before deploying AI to optimise anything, Andrew's starting point with customers is typically to use process mining to make visible where the actual problems are: where processes diverge from their intended design, where bottlenecks accumulate, where non-conformance is normalised because nobody has been able to see it clearly. Process mining doesn't require AI to be useful, and many of the findings it surfaces can be addressed through discipline and training rather than technology. But it creates a reliable map of where the high-value problems actually live, which makes the subsequent question of where to apply AI considerably less speculative.
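The mechanics behind that mapping are straightforward. A minimal sketch of the idea, with an invented purchase-order event log standing in for real ERP transaction data: group events by case, then count activity-to-activity hand-offs and accumulate waiting times, so the slowest and most deviant paths surface on their own.

```python
from collections import defaultdict
from datetime import datetime

# Toy event log: (case_id, activity, timestamp). In practice this would be
# extracted from ERP transaction tables; these rows are purely illustrative.
log = [
    ("PO-1", "Create PO",     "2026-01-05 09:00"),
    ("PO-1", "Approve PO",    "2026-01-05 10:00"),
    ("PO-1", "Goods Receipt", "2026-01-12 14:00"),
    ("PO-2", "Create PO",     "2026-01-06 11:00"),
    ("PO-2", "Change PO",     "2026-01-08 09:00"),  # deviation from the intended path
    ("PO-2", "Approve PO",    "2026-01-09 16:00"),
    ("PO-2", "Goods Receipt", "2026-01-20 08:00"),
]

# Group events by case and sort each case chronologically.
cases = defaultdict(list)
for case_id, activity, ts in log:
    cases[case_id].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))

# Count each activity-to-activity transition and accumulate waiting time,
# so the most frequent and slowest hand-offs stand out.
counts = defaultdict(int)
total_wait_h = defaultdict(float)
for events in cases.values():
    events.sort()
    for (t1, a1), (t2, a2) in zip(events, events[1:]):
        counts[(a1, a2)] += 1
        total_wait_h[(a1, a2)] += (t2 - t1).total_seconds() / 3600.0

for edge, n in sorted(counts.items(), key=lambda kv: -total_wait_h[kv[0]]):
    print(f"{edge[0]} -> {edge[1]}: {n}x, avg wait {total_wait_h[edge] / n:.1f}h")
```

Even on this toy log, the output immediately flags the approve-to-receipt hand-off as where time accumulates, and the unplanned "Change PO" step as a deviation; commercial process-mining tools do essentially this at scale, with conformance checking on top.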

The principle carries beyond process mining. A practitioner with supply chain data domain responsibility across a large multi-ERP environment described a maturity-led approach to use case identification: businesses at earlier stages of their data and process journey get use cases focused on data quality and basic connectivity; businesses further along get demand forecasting and machine learning pilots; the most advanced get agentic approaches to root cause analysis and connected decision support. The sequencing isn't arbitrary. It reflects a realistic assessment of what each business can actually absorb and what will generate a return at their current state, rather than what looks most impressive in a vendor demonstration.

The data governance business case problem

The use case that generated the most candid discussion was also the most foundational: master data quality. It topped the priority poll, and the reason is intuitive. You can't do much that's meaningful with AI if the data feeding it is inconsistent, duplicated or structurally unreliable. The problem is not that organisations don't recognise this. It's that they consistently struggle to fund the work.

A practitioner leading a master data and planning parameter governance initiative described the challenge with unusual precision. There is, he noted, broad internal agreement that the data problem is real. The difficulty is translating that agreement into a funded business case. The value of clean data is largely defensive: fewer delayed orders, less manual correction effort, fewer downstream planning failures. But "fewer things that went wrong" is a hard argument to make to a finance director. The delayed orders still got resolved eventually. The revenue still came in. The people correcting the data are still employed. The counterfactual, a version of the business where the data was clean and those costs didn't exist, is not easily expressed in terms a CFO will act on.

Andrew's response to this, drawing on experience with a German manufacturing customer, pointed toward inventory reduction as a more tractable financial argument. Cleaning master data (removing duplicates and resolving item aliases) reduced inventory holding by 5% at that customer. That's a working capital argument, which tends to land differently with finance than a time-saving or error-reduction argument, because it shows up on the balance sheet rather than requiring an estimate of avoided cost. For organisations struggling to get data governance funded on operational efficiency grounds, the inventory angle is often a more viable route into the same investment.
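The alias-resolution step is less exotic than it sounds. A minimal sketch of the idea, with invented item-master rows and deliberately crude normalisation rules: map each description to a canonical key so that the same part entered twice under different conventions collapses into one group, revealing the inventory being double-held.

```python
import re
from collections import defaultdict

# Toy item-master rows: (item_id, description, on_hand_qty).
# Records and normalisation rules are illustrative only; real matching
# typically layers fuzzy matching and attribute comparison on top.
items = [
    ("A-100", "Hex Bolt M8x40 Zinc", 500),
    ("A-217", "HEX BOLT M8 X 40, ZINC", 350),  # alias of A-100
    ("B-034", "Bearing 6204-2RS", 120),
]

def norm_key(desc):
    # Lowercase, strip punctuation, and collapse spacing around dimension
    # separators so "M8x40" and "M8 X 40" produce the same key.
    s = re.sub(r"[^a-z0-9]+", " ", desc.lower())
    s = re.sub(r"(\d)\s*x\s*(\d)", r"\1x\2", s)
    return " ".join(s.split())

groups = defaultdict(list)
for item in items:
    groups[norm_key(item[1])].append(item)

for key, dupes in groups.items():
    if len(dupes) > 1:
        ids = [i for i, _, _ in dupes]
        qty = sum(q for _, _, q in dupes)
        print(f"Probable duplicates {ids}: combined on-hand {qty}")
```

Each merged pair is stock that was being replenished twice; summing those combined on-hand quantities across the catalogue is what turns a data-quality exercise into a working capital number a CFO can act on.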

The deeper structural point, which practitioners in the room recognised, is that data governance business cases tend to fail not because the value isn't there but because it's expressed in the wrong terms for the audience that needs to approve it. The value is real; the translation is broken.

Where AI helps and where it doesn't

A thread worth pulling on concerned where AI is currently reliable and where it reaches its limits. Demand forecasting was the live example. Several practitioners had run machine learning pilots on demand data and found that the results were promising for high-volume, regular items but significantly less reliable for intermittent, low-volume or highly volatile ones. The runners worked. The strangers and aliens, in the vocabulary familiar to most planning teams, did not.
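The runner-versus-stranger distinction can be made operational before any pilot starts. One common way to do it, sketched below with invented demand series, is the Syntetos-Boylan style segmentation: classify each item by its average inter-demand interval (ADI) and the squared coefficient of variation of its non-zero demand (CV²), using the commonly cited cut-offs of 1.32 and 0.49.

```python
from statistics import mean, pstdev

def classify(series):
    """Segment a demand series as smooth, erratic, intermittent, or lumpy.

    ADI measures how often demand occurs; CV^2 measures how variable the
    non-zero demand quantities are. Thresholds 1.32 and 0.49 are the
    standard cut-offs cited in the intermittent-demand literature.
    """
    nonzero = [(i, q) for i, q in enumerate(series) if q > 0]
    if len(nonzero) < 2:
        return "insufficient history"
    periods, qtys = zip(*nonzero)
    adi = (periods[-1] - periods[0]) / (len(periods) - 1)
    cv2 = (pstdev(qtys) / mean(qtys)) ** 2
    if adi < 1.32:
        return "smooth" if cv2 < 0.49 else "erratic"
    return "intermittent" if cv2 < 0.49 else "lumpy"

runner   = [100, 98, 103, 101, 99, 102, 100, 97]     # steady weekly demand
stranger = [0, 0, 40, 0, 0, 0, 350, 0, 0, 12, 0, 0]  # sparse and volatile

print(classify(runner))    # smooth: the profile where ML pilots tend to work
print(classify(stranger))  # lumpy: where pilots in the session fell short
```

Running this segmentation across the full item catalogue before a pilot gives a defensible estimate of what share of volume the model can plausibly serve, and which items should stay with planner judgment or intermittent-demand methods.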

This isn't a criticism of the technology. It reflects something structural about where AI is and isn't well suited to the problem. Clean feedback loops, high data volumes and bounded decisions are the conditions under which AI generates reliable returns. Long horizons, sparse data and judgment-intensive exceptions are the conditions under which human expertise still carries more weight. The honest version of an AI use case evaluation starts by asking which of those conditions apply to the problem in question, rather than by assuming that AI will handle everything and discovering the limits in production.

Andrew's framing on this was direct: the fact that AI can be applied to a problem doesn't mean it should be the first tool you reach for, and the fact that a pilot looks promising on the easy cases doesn't mean the production deployment will look the same. Embedding AI into a process requires thinking carefully about what the process looks like around the AI, including the manual review steps, the exception handling, the governance over model outputs, and the path back into the systems that run the operation day to day. That infrastructure is consistently underestimated in early pilots, and its absence is one of the main reasons promising experiments don't reach production. IDC research suggests 88% of AI proofs of concept never make it that far, with organisational readiness around data, process and infrastructure cited as the primary cause.

A more useful question

The first BPC discussion on AI investment asked what has to be true before the investment is worth making. That framing was useful for organisations that had already identified a use case and were working out whether to commit. This session was addressing the earlier and, for most organisations, more pressing problem: how do you identify the right use case in the first place, and how do you build a case for the foundational work that makes any use case viable?

The honest answer, which the discussion kept circling back to, is that it requires more deliberate effort than most organisations are currently putting in. Not more AI content, not more vendor demonstrations, not a broader scan of what others are doing. A more disciplined internal process for asking which problems are actually worth solving, what conditions need to be in place before AI can address them reliably, and how to express the value of foundational work in terms that will get it funded.

The organisations seeing returns on their AI investments, the cases Andrew described, shared a common characteristic: they had found a genuine problem first. The technology came second. That sequencing sounds obvious and is surprisingly rare.

This discussion was the second in BestPractice.Club's series on AI investment in supply chain, hosted by Andrew Dalziel from Infor. The first article in the series is What Has to Be True Before Your AI Investment Is Worth Making?