What Has to Be True Before Your AI Investment Is Worth Making?
Most organisations in supply chain are doing something with AI. Boards and senior leadership teams have identified it as a strategic priority, and that signal has reached planning and operations functions clearly enough. The pressure is real, though it often arrives ahead of a clear operational application. Senior leaders who recognise AI as essential to remaining competitive are not always close to the specifics of where it fits, which use cases are ready, or what conditions need to be in place before the investment is worth making.
That gap was visible throughout a BPC discussion on 21 April, hosted by Andrew Dalziel from Infor, a sponsor of BestPractice.Club. The session was structured around three concrete AI use cases: master data quality, sales price optimisation, and yield optimisation in food manufacturing, drawn from Infor's work with customers across Europe and North America. The cases were specific, quantified, and practitioner-facing.
The use cases gave the room something to react to, and the discussion that followed revealed more about where organisations actually stand with AI than the cases themselves did.
The gap between the factory floor and the planning office
One of the senior supply chain leaders in the room, responsible for both the digital agenda and planning at a large food business, described a situation that several others seemed to recognise. She could see considerably more actionable AI opportunity on the factory floor than in supply chain planning, despite actively looking for both. Yield control, maintenance scheduling, sensor integration, operator standardisation: the factory use cases were tractable, the value was visible, and the path from experiment to production was navigable. Planning felt different. The problems were just as real, among them inventory optimisation, safety stock calibration, and service level trade-offs, but the connection between those problems and what AI could do about them was harder to see and harder to fund.
This reflects something structural about where AI is currently more and less mature across the supply chain. Manufacturing and warehousing use cases tend to involve tighter feedback loops, cleaner data, and more bounded decisions. Planning involves longer horizons, more variables, more human judgment embedded in the process, and often messier underlying data. The gap matters for how organisations think about sequencing their investment, and it is worth being honest about rather than treating the whole of supply chain as equally ready.

Starting from the problem
A participant made a point that is easy to agree with and harder to act on: start from the problem and work back to the technology, not the other way around. Most practitioners would endorse the principle in the abstract. Yet sessions built around concrete, production-deployed use cases attract the interest they do precisely because genuine examples of AI generating a measurable return beyond the pilot stage remain scarce enough to be worth seeking out. The problem-first instinct and the appetite for proven examples reflect the same underlying frustration with an environment full of AI noise and short on operational evidence.
The difficulty in practice is that most organisations are not being asked what problems they have that AI might help with; they are being asked what they are doing with AI. The two questions pull in different directions, and the second tends to produce projects that are harder to justify once the initial enthusiasm settles, and in some cases projects that leave things in a worse state than if they had never started. Sandipan Bhaumik, whose AgentBuild newsletter tracks practical AI implementation across enterprise settings, has written that the hidden costs of poorly scoped AI tend to show up in customer experience and operational trust long before they appear in any project dashboard.
Research from IDC puts the scale of the problem in blunt terms: 88% of AI proofs of concept never reach production, with a significant share attributed to initiatives launched under board-level pressure without a clear business case behind them.
The use cases Andrew presented were largely the product of organisations that had found a genuine problem first. Pricing accuracy at a tight-margin distributor where every percentage point of recovered margin compounds quickly across volume. Data remediation that would otherwise absorb seven years of analyst time. Yield variance in a dairy where a single percentage point of improvement is worth half a million euros annually. In each case the value was identifiable before the technology was selected, and that sequencing is visible in the outcomes.
The ROI translation problem
Several practitioners in the room described a version of the same situation: data work done, governance in place, a promising tool trialled, and then an inability to convert the results into a funded, scalable production process. The frustration was not with the technology. It was with what sits between a working experiment and a justifiable investment.
Time savings are real but do not automatically translate into headcount reductions. Efficiency gains require something to be done with freed capacity before they appear in a business case. The jump from a manually fed pilot to an automated, embedded operational process turns out to involve considerably more infrastructure than the initial experiment suggests, including how decisions get executed back into the systems that run the business day to day.
This is where the business case gets genuinely difficult: not in showing that value exists, but in building an argument that a finance director will fund at the scale needed to make the value stick.
What the discussion left open
The closing exchange circled around something worth putting more directly. Several of the organisations Andrew described had sequenced this well and were seeing returns that justified the effort quickly. They had a clear problem, data infrastructure sufficient to sustain a production process rather than just a pilot, and value expressible in terms a CFO would recognise — margin impact, working capital, revenue rather than time saved. They also had a realistic path from experiment to embedded operation before they started.
The organisations that are struggling tend to have started from the other end: a mandate to act on AI, promising experiments, and a persistent difficulty getting to something fundable and scalable. The gap between those two positions is not primarily a technology question. It is a question about what conditions need to be in place before the investment is worth committing to.
That question does not have a universal answer, which is partly why it keeps coming up. The variables — data maturity, problem clarity, organisational readiness, the complexity of the decisions being automated — differ enough between companies, and between different parts of the same supply chain, that the case studies only get you so far. At some point the work is internal, and it involves sitting with the question rather than reaching for the next example of someone else who got it right.
This discussion was part of BestPractice.Club's Spring 2026 programme. The next session takes place on 29 April in London. Details at bestpractice.club/upcoming-sessions.
