The second strand of the recent discussion hosted by Andy Devlin focused less on ownership and more on readiness. The question on the table was straightforward: how do you know when your data is good enough to support AI or more advanced analytics?
Andy framed the issue early on by bringing the group back to fundamentals.
“What is the problem you’re trying to solve? Who are you solving it for?”
That question shaped the rest of the exchange.
A benchmark of 95 per cent data accuracy was mentioned, reflecting the sort of threshold often cited in industry conversations. No one dismissed the importance of data quality. What several contributors questioned was whether such a number means anything without context.
Accuracy requirements vary by decision. Regulatory reporting, financial close and compliance processes may demand extremely high thresholds. Operational visibility — for example, tracking shipments — may depend more on timeliness and completeness. Planning processes may rely more heavily on consistency of definition than on perfectly clean historical data.
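As a minimal sketch of that point, the check below scores a small set of hypothetical shipment records on completeness and timeliness rather than a single accuracy figure. The field names, freshness window, and data are illustrative assumptions, not anything cited in the session.

```python
from datetime import datetime, timedelta

# Hypothetical shipment records; field names are illustrative only.
shipments = [
    {"id": "S1", "eta": datetime(2024, 5, 1), "last_update": datetime(2024, 4, 30, 22, 0), "carrier_ref": "ABC123"},
    {"id": "S2", "eta": datetime(2024, 5, 2), "last_update": None, "carrier_ref": None},
    {"id": "S3", "eta": datetime(2024, 5, 3), "last_update": datetime(2024, 4, 28, 9, 0), "carrier_ref": "XYZ789"},
]

def completeness(records, field):
    """Share of records where a given field is populated."""
    return sum(1 for r in records if r.get(field) is not None) / len(records)

def timeliness(records, now, max_age_hours=24):
    """Share of records updated within the last max_age_hours."""
    fresh = [
        r for r in records
        if r["last_update"] and now - r["last_update"] <= timedelta(hours=max_age_hours)
    ]
    return len(fresh) / len(records)

now = datetime(2024, 5, 1)
print(f"carrier_ref completeness: {completeness(shipments, 'carrier_ref'):.0%}")
print(f"update timeliness (24h):  {timeliness(shipments, now):.0%}")
```

The same records would be judged against a much stricter accuracy test if they fed regulatory reporting; the metric follows the decision.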
One participant described a visibility initiative that began with very limited digital tracking coverage. Rather than attempting to standardise every shipment flow globally, the team defined a narrower subset of flows and improved coverage significantly within a few months. The broader ambition remained in place, but the immediate test was whether the data was sufficient to support that specific operational use case.
Andy cautioned against trying to solve everything at once.
“You can’t boil the ocean.”
The comment was not about lowering standards. It was about sequencing.
Another distinction raised during the session was between having data somewhere in the organisation and having it available in usable form at the moment of decision.
Orders may exist in an ERP. Shipment updates may sit in carrier portals. Forecast inputs may be maintained by commercial teams. The practical question is whether the relevant planner, buyer or manager can access the right information at the right time in a format that supports action.
This difference between existence and usability is often where assumptions break down.
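One way to make that distinction concrete is sketched below: two hypothetical extracts, orders from an ERP and status updates from a carrier portal, are joined into a single view a planner could act on, with missing tracking data flagged rather than hidden. The system names, fields, and records are assumptions for illustration only.

```python
# Hypothetical extracts; system names and fields are illustrative assumptions.
erp_orders = [
    {"order_id": "PO-100", "sku": "A1", "qty": 500, "due": "2024-05-10"},
    {"order_id": "PO-101", "sku": "B2", "qty": 200, "due": "2024-05-12"},
]
carrier_updates = [
    {"order_id": "PO-100", "status": "in_transit", "eta": "2024-05-09"},
]

def decision_view(orders, updates):
    """Join order and shipment data into one view a planner can act on.
    Orders with no carrier update are flagged rather than silently dropped."""
    latest = {u["order_id"]: u for u in updates}
    view = []
    for o in orders:
        u = latest.get(o["order_id"])
        view.append({
            **o,
            "status": u["status"] if u else "no_tracking_data",
            "eta": u["eta"] if u else None,
        })
    return view

for row in decision_view(erp_orders, carrier_updates):
    print(row)
```

The data in both extracts "exists"; usability is whether this joined view reaches the planner at the moment the decision is made.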
The discussion also addressed nuance in definitions. Different stakeholders frequently require slightly different versions of what appears to be the same dataset. Sales might be defined in gross or net terms. Returns may or may not be included. One team may need daily granularity; another may work at weekly level. Attempting to incorporate all of those variations into an initial build increases complexity and slows delivery.
Several participants described agreeing on a shared foundation first and layering refinements once the core use case was stable.
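A rough illustration of that layering, under assumed definitions: the core table stores gross sales and returns separately at daily granularity, and each team's variant (net daily, gross weekly) is derived from it rather than built into the foundation.

```python
from collections import defaultdict
from datetime import date

# Hypothetical shared foundation: one agreed daily sales table that keeps
# gross sales and returns separate, so each team derives its own view
# without redefining the core.
core_sales = [
    {"date": date(2024, 4, 29), "gross": 1200.0, "returns": 100.0},
    {"date": date(2024, 4, 30), "gross": 1500.0, "returns": 50.0},
    {"date": date(2024, 5, 1), "gross": 900.0, "returns": 75.0},
]

def net_daily(rows):
    """Commercial view: net sales at daily granularity."""
    return [{"date": r["date"], "net": r["gross"] - r["returns"]} for r in rows]

def gross_weekly(rows):
    """Planning view: gross sales aggregated to ISO week."""
    weekly = defaultdict(float)
    for r in rows:
        weekly[r["date"].isocalendar()[1]] += r["gross"]
    return dict(weekly)

print(net_daily(core_sales))
print(gross_weekly(core_sales))
```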
Andy also differentiated between executional and mitigation-oriented intelligence. Executional intelligence supports managing current activity — shipment status, purchase order confirmations, inventory positions. Mitigation intelligence aims to anticipate shortages or risk before they occur. The second typically requires broader cross-functional consistency and integration.
For leaders testing their own assumptions about data readiness, the discussion suggests a few practical checks:
- Is the decision you are trying to improve clearly defined?
- Are you measuring readiness against that specific decision, rather than against a generic enterprise-wide benchmark?
- Is the data usable at the point of decision, not simply present somewhere in the system landscape?
- Have you limited initial scope to something that can be governed effectively?
- Are you attempting executional improvement and predictive mitigation simultaneously?
None of these questions eliminate the need for longer-term data harmonisation. They do help clarify whether a particular use case is viable now.
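If it helps to make those checks explicit, the sketch below expresses them as a simple per-use-case assessment. The structure, field names, and example are hypothetical; they only restate the questions above in code.

```python
from dataclasses import dataclass, field

# Hypothetical readiness record: the checks are assessed per use case,
# not against one enterprise-wide benchmark.
@dataclass
class UseCaseReadiness:
    decision: str                             # the specific decision being improved
    data_usable_at_point_of_decision: bool
    scope_governable: bool
    mixes_executional_and_predictive: bool
    notes: list = field(default_factory=list)

    def viable_now(self) -> bool:
        """A use case is a candidate to start now if its data is usable where
        the decision is made, its scope can be governed, and it is not trying
        to deliver executional and predictive value in the same first step."""
        return (
            self.data_usable_at_point_of_decision
            and self.scope_governable
            and not self.mixes_executional_and_predictive
        )

shipment_visibility = UseCaseReadiness(
    decision="Daily exception review for a defined subset of shipment flows",
    data_usable_at_point_of_decision=True,
    scope_governable=True,
    mixes_executional_and_predictive=False,
)
print(shipment_visibility.viable_now())  # True
```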
The session did not challenge the potential of AI. It challenged the way readiness is sometimes framed. Starting with a clearly defined use case allows readiness to be assessed relative to that decision. Expanding scope gradually allows governance and definitions to stabilise over time.
For leaders at the stage of testing their assumptions, that shift in framing may be more useful than any single accuracy threshold.
