
Why most automation projects fail before the first build.

The failure mode is almost always the same. Not technical, not strategic. Something quieter, and harder to fix once it's set in.

FIG. 1 · DIAGNOSIS

If you've spent any time around automation projects, you start to notice a pattern. The ones that fail rarely fail because the technology didn't work. They fail before anyone wrote a single line of code.

The pattern looks like this. A team identifies something that takes too long. Someone scopes a project to fix it. The project ships, technically successful. The script runs, the workflow fires, the report generates. Six months later, nobody is using it. Or they're using it, but the time savings never materialised. Or they're using it, but somehow the team is just as busy as before.

This happens often enough that it's become its own quiet phenomenon. And the cause is almost always the same: nobody asked the right question first.

The question that didn't get asked

The right question is not "what do we want to automate?" That's the question most projects start with, and it's the wrong starting point. People are surprisingly bad at identifying what's worth automating. They tend to point at the most annoying things, the things that came up in the last meeting, or the things that look impressively automatable. None of these are reliably the same as the things that, once automated, would actually change the team's economics.

The right question is closer to: "where is time being lost in ways nobody is tracking?"

This is a different kind of question. It can't be answered in a meeting. It requires sitting with the people doing the actual work. Not their managers, not the documentation, not the org chart's version of how things happen. The actual work, in the actual rhythm it gets done.

"The friction is invisible because it's evenly distributed. Everyone is mildly inconvenienced. No single thing is broken enough to warrant attention."

What you find when you sit there is rarely what anyone predicted. The dramatic pain points, the ones people complain about, often turn out to be edge cases: fired once a quarter, costing maybe an afternoon. The real cost is usually somewhere quieter: a fifteen-minute task happening forty times a week. A handoff between two teams that adds two days to every cycle. A weekly report nobody reads but everyone produces.

Why the question gets skipped

It gets skipped because it's slower. Asking it well takes a week or two of attention before you've earned the right to recommend anything. Most consulting engagements aren't structured to allow that. Most internal initiatives aren't either.

What replaces it, usually, is a kind of theatre of discovery. A workshop where stakeholders list pain points. A survey of "tasks you'd like to automate." An audit of existing tools. These produce documents that look like analysis, but they don't produce understanding, because they're capturing what people think about their work, not how their work actually happens.

And the gap between those two things is usually where automation projects go to die.

The compounding effect

Once you build against a misdiagnosed problem, several things happen at once. The system you build solves something. Usually whatever was easiest to define from the surface-level discovery. But it doesn't solve the thing that was actually costing the team time, because that thing was never on the list. So time savings don't materialise. The team is still busy.

Meanwhile, the system you built has now created its own maintenance load. Someone has to monitor it, fix it when integrations break, train new hires on it, document its quirks. So the team is now slightly busier than before, in subtly different ways. The friction has moved, not reduced.

If you're looking at this from outside, you might conclude that automation doesn't work for this team. The opposite is usually true. The team has a lot of automatable work. It just isn't the work that got automated.

In practice

One client we audited had built three internal automation tools over two years. None of them were used regularly. When we mapped the team's actual workflows, the highest-value automation opportunity, generating client status reports, wasn't on anyone's previous wishlist. It had simply become invisible because everyone had been doing it forever.

The habit that prevents this

The fix is unglamorous. Sit with the team. Watch them work. Take notes on what actually consumes time, not what people say consumes time. Map the workflows that exist, not the ones the org chart suggests. Find the fifteen-minute tasks happening forty times a week. Find the handoffs that add days to every cycle. Find the things that have been done so long nobody questions them anymore.

This is harder than it sounds, because the people doing the work have stopped seeing the friction. They've adapted. The forty-time-a-week task feels like just part of the job now. The two-day handoff feels like the natural pace of things. You have to come in fresh enough to notice what they've stopped noticing, while taking them seriously enough to learn from their adaptations.

Once you've done this, and it usually takes a week or two, almost everything else gets easier. The automation roadmap writes itself. The build estimates are reliable. The team uses what you build, because what you build was scoped against the actual texture of their work.


What this means in practice

If you're considering an automation project, the first hour you spend should not be on tools. It should be on understanding. If a vendor or consultant proposes a build before they've genuinely understood your operations, that's the warning sign. Not because they're acting in bad faith (most of them aren't), but because the structure of how they work doesn't allow for the slow, careful diagnosis that prevents the failure mode described above.

The slow part isn't optional. It's where the value is created. Everything else is just execution.

Most automation projects fail before the first build because the diagnosis was rushed. Get the diagnosis right and almost nothing else matters very much.

Want a careful diagnosis?

Our Operations Audit is structured around the slow, careful kind of discovery this post describes. One to two weeks. You leave with a roadmap that's yours to keep.

See how the audit works