Technology Transformation

ERP implementation crisis

This is a pre-completed example output — use it to understand the style and depth, then run your own analysis.

My understanding of your challenge

You’re in the uncomfortable middle ground where the programme is too far along to stop, but not delivering enough confidence to keep momentum. The implementation has become a visible symbol of risk: timelines slipping, costs rising, and adoption lagging, with senior stakeholders starting to question whether the programme is worth it. The underlying tension is that leadership wants proof of value quickly, while the work required to fix root causes (data, process design, change adoption, governance) is inherently multi‑stream and can’t be solved by a single ‘technical fix’. At the same time, you’re likely dealing with change fatigue, capability gaps, and a partner dynamic that may be misaligned with your success. What makes this urgent is that once executive confidence collapses, programmes like this can enter a death spiral: governance becomes reactive, scope thrash increases, and the organisation stops investing attention in adoption — which is exactly what would have made the investment pay back.

Initial diagnosis & what this really is

This looks less like a ‘technology delivery’ problem and more like a programme leadership and operating model problem: unclear decision rights, a weak value narrative, and an adoption system that has not been designed with the organisation’s reality in mind. In similar ERP situations, the delivery plan often assumes clean data, stable processes, and a receptive organisation — and those assumptions rarely hold. It’s commonly mistaken for a data migration problem or a partner performance problem. Those may be true symptoms, but treating them as the primary diagnosis leads to over-investing in technical workarounds while under-investing in the organisational mechanisms that actually determine value (process ownership, behavioural change, training, incentives, and executive sponsorship). Across large transformations, the market pattern is clear: the winners stabilise governance, narrow to a credible ‘minimum valuable release’, and rebuild trust through a small number of tangible wins — not by rewriting the entire plan.

Key risks & failure modes to be aware of

The most common failure mode is ‘scope thrash’: reacting to every stakeholder concern by expanding requirements, which makes delivery slower and adoption worse. Another frequent trap is confusing project activity with business impact — the programme reports progress (testing complete, build complete), while operational teams feel no improvement and quietly disengage. Well-intentioned efforts fail when leadership attention drifts too early to the ‘next phase’ (rollout, optimisation) before the basics are in place: process ownership, decision rights, and a credible change narrative. In many organisations, the programme becomes a battleground between IT, Finance, and Operations rather than a shared business change. A final risk is partner dependency: if the partner’s incentives are tied to billable work rather than outcomes, you can end up buying complexity.

Suggested strategic approach

The strategic move is to shift from ‘delivery mode’ to ‘value and control mode’. That means: (1) re‑establishing clarity on what success looks like in the next 8–12 weeks, (2) tightening governance and decision rights, and (3) designing adoption as a system rather than an afterthought. Sequence matters. Start by stabilising: reduce uncertainty, stop scope churn, and create a single source of truth on plan, risks, and decisions. Then design: define the minimum valuable release, confirm process ownership, and align data standards. Only then should you push for acceleration — otherwise you accelerate instability. This is also where deeper benchmarking (programme maturity, partner performance patterns, and adoption leading indicators) would materially improve decision quality.

Indicative timeline of activities

Phase 1: Diagnose & Align

Focus of activity:

Rapid programme reset: decision rights, plan reality-check, top risks, and a clear near-term value narrative.

Intended outcome:

A stabilised programme with an agreed minimum viable path, explicit trade-offs, and regained executive confidence.

Phase 2: Design & Decide

Focus of activity:

Lock process ownership, data standards, rollout approach, and ‘adoption mechanics’ (training, comms, incentives).

Intended outcome:

A coherent design that the business actually owns — and a delivery plan that matches organisational capacity.

Phase 3: Mobilise & Execute

Focus of activity:

Execute the minimum valuable release, measure adoption, and iterate based on real operational feedback.

Intended outcome:

Tangible operational improvements, measurable adoption, and momentum to expand scope safely.

Early KPIs & signals to track

Adoption readiness score

Adoption signal

A weekly view of training completion, process sign-off, super-user coverage, and local readiness — not just technical readiness.
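As a minimal sketch of how such a weekly composite could be computed: the four signals above are combined into a single 0–100 score. The weights, field names, and figures here are illustrative assumptions, not a standard; calibrate them to your own programme.

```python
# Illustrative weekly adoption readiness score: a weighted composite of the
# four signals named above. Weights and example figures are assumptions.

WEIGHTS = {
    "training_completion": 0.3,   # % of target users who completed training
    "process_signoff": 0.3,       # % of in-scope processes signed off by owners
    "super_user_coverage": 0.2,   # % of sites/teams with an active super-user
    "local_readiness": 0.2,       # % of sites self-assessed as ready
}

def readiness_score(signals: dict) -> float:
    """Return a 0-100 composite from per-signal percentages (each 0-100)."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

# Hypothetical week-12 snapshot
week_12 = {
    "training_completion": 80.0,
    "process_signoff": 60.0,
    "super_user_coverage": 50.0,
    "local_readiness": 40.0,
}
print(f"Adoption readiness: {readiness_score(week_12):.0f}/100")  # 60/100
```

The point of the composite is the trend line, not the absolute number: a flat or falling score while technical readiness reports ‘green’ is exactly the divergence this KPI exists to surface.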

Decision latency

Decision-quality metric

How long critical programme decisions take (days/weeks). Rising latency is an early signal of governance failure.
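A simple way to make this measurable is to keep a decision log with raised/decided dates and report median latency plus the open count each week. The log structure and entries below are hypothetical; pull the real dates from whatever tracker the programme already uses.

```python
# Illustrative decision-latency metric from a simple decision log.
# Decision names and dates are made-up examples.
from datetime import date
from statistics import median

decision_log = [
    {"decision": "Chart of accounts structure", "raised": date(2024, 3, 1),
     "decided": date(2024, 3, 4)},
    {"decision": "Cutover approach", "raised": date(2024, 3, 5),
     "decided": date(2024, 3, 19)},
    {"decision": "Data ownership model", "raised": date(2024, 3, 8),
     "decided": None},  # still open
]

def median_latency_days(log):
    """Median days from raised to decided, over closed decisions only."""
    closed = [(d["decided"] - d["raised"]).days for d in log if d["decided"]]
    return median(closed) if closed else None

open_count = sum(1 for d in decision_log if d["decided"] is None)
print(f"Median latency: {median_latency_days(decision_log)} days; "
      f"{open_count} decision(s) open")
```

Tracking the open count alongside the median matters: a low median can mask governance failure if hard decisions simply never close.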

Value proof points

Progress signal

2–3 measurable outcomes tied to the next release (cycle time, error rate, close time) to rebuild executive trust.
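One way to report these proof points is as signed percentage improvement against a pre-release baseline, so executives see direction and magnitude at a glance. The metric names and figures below are invented for illustration.

```python
# Illustrative proof-point tracker: each outcome metric compared against its
# pre-release baseline. All names and numbers are hypothetical examples.

proof_points = {
    # metric: (baseline, current, lower_is_better)
    "order cycle time (days)": (12.0, 9.0, True),
    "invoice error rate (%)": (4.0, 2.5, True),
    "month-end close (days)": (10.0, 8.0, True),
}

def improvement_pct(baseline, current, lower_is_better=True):
    """Signed % improvement vs baseline (positive = better)."""
    change = (baseline - current) / baseline * 100
    return change if lower_is_better else -change

for name, (base, cur, lower) in proof_points.items():
    print(f"{name}: {improvement_pct(base, cur, lower):+.1f}%")
```

Keeping the list to two or three metrics is deliberate: the KPI is a trust-rebuilding device, and a short list of outcomes that visibly move is more credible than a long dashboard.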

30 / 60 / 90-day way forward

Next 30 days — Clarity & alignment

Reset the programme with a short, structured ‘stabilisation sprint’. Clarify decision rights, confirm the true critical path, and aggressively reduce scope to a minimum valuable release that can credibly land. Create a simple executive narrative: what we will deliver in the next 8–12 weeks, what we will not deliver yet, and why. Run a focused root-cause review across three streams: data, process ownership, and adoption. The goal is not to document everything — it’s to identify the handful of blockers driving delay and low confidence.

Next 60 days — Decisions & design

Move into design and decision-making: lock the operating model for rollout (who owns processes, who signs off, who supports). Define adoption mechanics: training plan, super-user network, comms cadence, and local change ownership. This is also the point to renegotiate partner ways-of-working if needed: outcome-oriented governance, transparent burn vs progress, and explicit quality gates.

Next 90 days — Mobilisation & early execution

Mobilise execution against the minimum valuable release. Treat early rollout as a learning loop: measure adoption, run hypercare, and feed operational issues back into configuration and training. As credibility returns, expand scope only when leading indicators are stable — otherwise you reintroduce volatility.

Questions a consultant would ask next

These questions expose assumptions, highlight decision points, and signal where deeper work is required:

  1. What is the minimum ‘value-shaped’ release you can deliver in 8–12 weeks that executives and users will actually feel?
  2. Where are decision rights currently unclear (scope, process design, data standards, cutover), and what is the impact of that ambiguity?
  3. Which processes truly drive the business case, and are those process owners actively engaged — or passively consulted?
  4. What are the top 3 reasons users are not adopting today (capability, incentive, workflow fit, trust), and what evidence do we have?
  5. How aligned are the partner’s incentives with outcomes vs billable work, and what governance changes would shift that?

What a deeper plan would unlock

A deeper plan would turn this into an execution-ready workplan by stream (process, data, technology, change), with named owners, governance, and quality gates — designed around your organisation’s actual capacity to absorb change. It would also add competitive / sector benchmarks for programme recovery patterns, plus a simple executive pack: decision log, trade-offs, risks, and weekly confidence indicators. That’s the difference between ‘stabilising’ and reliably landing value.