Lessons from a Year of Manufacturing AI Consulting


As the year ends, I’ve been reflecting on the projects I’ve been involved with over the past twelve months. Working with Australian manufacturers on AI initiatives has taught me some lessons worth sharing.

These aren’t theoretical insights. They’re patterns I’ve observed repeatedly across different companies, industries, and project types.

Lesson 1: The problem definition matters more than the technology

The projects that succeeded started with clear problem definitions. “We want to reduce unplanned downtime on Line 4 because it’s costing us $X per incident.” Specific, measurable, tied to business value.

The projects that struggled started with technology. “We want to implement AI.” When I’d ask why, the answers were vague—competitor pressure, executive enthusiasm, general belief that they should.

Technology without a clear problem to solve tends to find problems that don’t actually need solving. Time and money get spent on solutions looking for applications.

When I engage with a new client now, the first question isn’t about their technical environment. It’s “What specific problem are you trying to solve, and how much is it worth to solve it?”

Lesson 2: Data readiness is almost always overestimated

Nearly every client I worked with believed their data situation was better than it actually was.

“We have years of historical data.” (It was in spreadsheets, inconsistent, with significant gaps.)

“All our machines are connected.” (To a monitoring system that only captures basic status.)

“Our ERP has everything.” (High-level records, not the granular process data needed for AI.)

The data assessment phase consistently takes longer and reveals more issues than expected. But it’s essential work. AI built on shaky data foundations either fails outright or produces unreliable results.

For 2026, my recommendation to any manufacturer considering AI: commission a data assessment first. Understand what you actually have versus what you need. This saves enormous frustration later.
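To make that concrete, here's the kind of first-pass check I mean, sketched in Python against a hypothetical spreadsheet export. The file name and column names are placeholders, not anyone's real data:

```python
import pandas as pd

# Hypothetical export of historical downtime logs from a spreadsheet.
# Column names are illustrative only; substitute your own.
df = pd.read_csv("downtime_log_export.csv", parse_dates=["timestamp"])

report = {
    "rows": len(df),
    "date_range": (df["timestamp"].min(), df["timestamp"].max()),
    "missing_values_per_column": df.isna().sum().to_dict(),
    "duplicate_rows": int(df.duplicated().sum()),
}

# Gaps in the record: flag intervals longer than a day with no entries.
gaps = df["timestamp"].sort_values().diff()
report["gaps_over_24h"] = int((gaps > pd.Timedelta(days=1)).sum())

# Inconsistent labels ("Line 4", "line4", "L4") often show up as an
# unexpectedly high count of unique values after basic normalisation.
report["unique_machine_labels"] = df["machine"].str.strip().str.lower().nunique()

for key, value in report.items():
    print(f"{key}: {value}")
```

Even a check this crude tends to surface the gaps, duplicates, and inconsistent labelling that "we have years of historical data" glosses over.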

Lesson 3: Champions make or break projects

The correlation between project success and having a strong internal champion is nearly perfect in my experience.

Champions aren’t just senior sponsors who sign cheques. They’re people who understand the problem deeply, who push through obstacles, who solve the day-to-day issues that would otherwise stall progress.

When a champion leaves mid-project (promotion, departure, reassignment), projects often die even if everything else is going well. The institutional energy disappears.

I now ask specifically about champions early in engagements. Who is the champion? What's their tenure likely to be? What happens if they leave? If the answers are worrying, I'm more cautious about project ambitions.

Lesson 4: Operators are the key stakeholders nobody consults

Projects designed in offices without operator input often fail on the factory floor.

The people who actually run equipment understand things that don’t show up in data or specifications. They know workarounds, failure modes, process quirks. They’ll determine whether AI recommendations get followed or ignored.

The successful implementations I saw in 2025 involved operators from the beginning—not just as consultees but as genuine participants in defining requirements and testing solutions.

In one memorable project, an operator’s question in an early workshop completely changed the approach. The expensive plan the engineering team had developed wouldn’t have worked because of a practical constraint only operators knew about.

Lesson 5: Quick wins build momentum; long slogs lose support

Projects that delivered visible value within 3-6 months built organisational support for continued investment. Projects that required 18 months before showing any results struggled to maintain attention and funding.

This doesn’t mean only doing easy projects. It means structuring ambitious projects to deliver intermediate value.

For example: a predictive maintenance project might take 18 months to reach full predictive capability. But month 3 can deliver improved equipment visibility. Month 6 can deliver anomaly alerts. Month 12 can deliver initial predictions for highest-priority equipment. Each milestone demonstrates progress and builds confidence.
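To show how modest an early milestone can be, here's a rough sketch of a month-6 style anomaly alert built from rolling statistics. The file, column name, window, and threshold are all illustrative assumptions, not a recommended configuration:

```python
import pandas as pd

# Illustrative only: a rolling z-score alert on a single vibration channel,
# the kind of simple anomaly flag an early milestone might deliver long
# before any full predictive model exists.
readings = pd.read_csv("line4_vibration.csv", parse_dates=["timestamp"])
readings = readings.set_index("timestamp").sort_index()

window = "24h"      # baseline window; tune per machine
threshold = 3.0     # flag readings more than 3 standard deviations out

rolling = readings["vibration_mm_s"].rolling(window)
zscore = (readings["vibration_mm_s"] - rolling.mean()) / rolling.std()

alerts = readings[zscore.abs() > threshold]
print(f"{len(alerts)} readings flagged for review")
print(alerts.tail())
```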

Planning for intermediate deliverables isn’t just project management hygiene. It’s essential for maintaining the organisational support that makes success possible.

Lesson 6: “AI” is often the wrong framing

Many problems framed as AI opportunities were better solved with simpler approaches.

A client wanted AI for quality prediction. After analysis, the bigger opportunity was process standardisation—fixing the variation that caused quality problems rather than predicting outcomes from chaotic inputs.

Another client wanted AI scheduling optimisation. The real issue was data accuracy in their planning system. Better data made their existing scheduler work fine.

AI is powerful when applied to genuinely complex problems with good data foundations. But it’s not the answer to everything. Sometimes the answer is process improvement, data cleanup, or better use of existing tools.

I’ve learned to push back on AI framing when the underlying need might be better addressed differently. It doesn’t win consultant popularity contests, but it serves clients better.

Lesson 7: Ongoing operations are underestimated

Many projects focused heavily on implementation and lightly on what happens after.

Who monitors model performance? Who retrains when results degrade? Who handles exceptions? Who manages vendor relationships? Who ensures ongoing alignment with business needs?

These questions were often answered with “we’ll figure it out.” Projects that didn’t figure it out saw great implementations slowly degrade because nobody owned them.

Building operational capabilities—processes, skills, responsibilities—is as important as building the technical solution.
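As a sketch of what owning a model might look like day to day, here's the sort of simple health check someone needs to run and act on. The function, the numbers, and the 20% tolerance are assumptions for illustration, not a prescription:

```python
import numpy as np

def check_model_health(y_true, y_pred, baseline_mae, tolerance=0.2):
    """Return True if the model still performs within tolerance of its
    handover baseline; print a warning if it has drifted past the limit."""
    current_mae = float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
    degraded = current_mae > baseline_mae * (1 + tolerance)
    print(f"baseline MAE: {baseline_mae:.3f}, current MAE: {current_mae:.3f}")
    if degraded:
        print("Performance has degraded; schedule a review or retrain.")
    return not degraded

# Example with made-up numbers:
check_model_health(
    y_true=[4.1, 3.8, 5.0, 4.4],
    y_pred=[4.0, 4.5, 5.6, 3.5],
    baseline_mae=0.30,
)
```

The code is trivial. The hard part is that someone has to be named as the person who runs it, reads it, and acts on it.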

Lesson 8: The best technology isn’t always the right technology

I saw several projects choose sophisticated AI approaches when simpler methods would have worked as well or better. Neural networks when decision trees would do. Custom development when off-the-shelf platforms were suitable.

The appeal of advanced technology is understandable. But complexity has costs: harder to understand, harder to maintain, harder to troubleshoot, harder to explain.

The right approach is the simplest one that solves the problem. Sometimes that’s cutting-edge AI. Often it’s not.
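One way to keep yourself honest is to fit a simple, explainable baseline before reaching for anything fancier. A rough sketch, with placeholder file and column names:

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative baseline: before considering a neural network, check how far
# a small decision tree gets on the same task. File and column names are
# placeholders for whatever quality or process data you actually have.
data = pd.read_csv("line4_quality_samples.csv")
X = data[["temperature", "pressure", "speed"]]
y = data["defect"]  # 0 = pass, 1 = defect

baseline = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(baseline, X, y, cv=5)
print(f"Decision tree baseline accuracy: {scores.mean():.2f} ± {scores.std():.2f}")

# If this simple, explainable model is already good enough for the decision
# it supports, the extra complexity of a deep model may not be worth carrying.
```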

Lesson 9: Australian manufacturers are more capable than they think

Despite the sometimes-discouraging tone of industry surveys about Australian manufacturing technology adoption, I found plenty of capability.

Engineers who taught themselves machine learning. Operators who built their own monitoring tools. Maintenance teams who’d implemented predictive approaches without calling it AI.

The official adoption statistics miss grassroots capability. There’s more going on at Australian manufacturers than aggregate numbers suggest.

This ground-up capability is a foundation to build on. Supporting and expanding what people are already doing often works better than imposing top-down AI initiatives.

Looking ahead

2025 was a year of learning—both what works and what doesn’t. Australian manufacturing AI is maturing, with more realistic expectations, better-defined projects, and growing capability.

The manufacturers I’m most optimistic about are those who approach AI pragmatically: specific problems, solid data foundations, strong champions, operator involvement, patience for the journey.

They’re not expecting AI to magically transform their operations. They’re building capability step by step, learning as they go, delivering value incrementally.

That’s not the sexy story of revolutionary transformation. But it’s the story that actually works.