When not to automate.
Six categories of work where automation does more harm than good. We turn down builds in all six of them.
Most of what we publish is about how to identify the work worth automating. This post is about the opposite: the work that should stay human, and the patterns that signal you're about to automate something you shouldn't.
Sometimes turning a build down is unwelcome news to the prospect. It's still the right call.
Some processes look automatable on the surface, but their value is in the judgment being applied. The reading of nuance, the catching of edge cases, the experienced sense that something is off. Automating these doesn't speed them up. It removes the thing that made them work.
A senior account manager reviewing a contract for client-specific quirks. A clinician reading a chart with three years of context behind it. A senior engineer reviewing a design document. These look like text-processing tasks. They aren't. They're judgment tasks where the text is just the medium.
The test: if you replaced the experienced person with someone junior using a flowchart, would the output be the same? If no, automation will remove the experienced person's input without replacing what made it valuable.
The personalised note from your account manager. The thoughtful intro email a partner writes when connecting two people. The candid debrief after a customer meeting. These are technically "messages", and language models can write messages, but their actual function is signalling that a human spent time thinking about you specifically.
Automating these breaks the signal. Even if the recipient doesn't consciously detect it, something subtle changes about the relationship. We've seen teams automate their personal touches and watch their renewal rates drift down without ever connecting the two.
"Automating relationship signals doesn't make them faster. It makes them stop being relationship signals."
Some decisions have a small upside if right and a large downside if wrong. Compliance reviews, security approvals, certain medical decisions, certain financial decisions. Automating these means accepting a small efficiency gain in exchange for occasional catastrophic failure modes, and the math almost never works.
You can use automation to support these decisions: surfacing relevant context, flagging anomalies, drafting first passes for human review. What you can't do is replace the human in the loop, because the asymmetry of consequences requires a real person carrying real accountability.
Sometimes a team has been so frustrated with a manual process that they've built three layers of workaround scripts and macros over the years. Each layer made the previous step slightly less painful but added its own brittleness. You can spot these because the team will describe the workflow with phrases like "the script that fixes the script" or "the spreadsheet we built to fix what the system gives us."
Automating on top of this stack just adds another layer. The fix isn't more automation. It's burning the existing stack down and rebuilding the underlying process from first principles. We've turned down builds where the right answer was "your three workarounds are themselves the problem; until you address them, automating around them will make things worse."
Some processes are slow on purpose. Approval workflows that require multiple people exist not despite the fact that they're slow, but because they're slow. The friction creates space for second thoughts, dissent, or new information to surface. Automating them away can be technically possible and practically disastrous.
This isn't an argument for keeping all friction. It's an argument for distinguishing between friction that's accidental (worth removing) and friction that's load-bearing (worth keeping). Most teams have both, and they look identical from the outside.
If a team can't clearly explain what a process does, why it exists, and what its inputs and outputs are, they're not ready to automate it. Automating an unclear process locks the unclarity in. Worse, it makes the unclarity invisible: once nobody is doing the work manually anymore, the institutional understanding of why each step exists evaporates inside a year.
The right move here is to map the process clearly first, in detail, with the people who actually do it. This sometimes reveals that pieces of it can be eliminated entirely, or that the process should be redesigned before automation is even considered. Build on understanding, not on its absence.
Each of these categories has the same underlying property: the work looks like it could be automated, and the technical means to automate it usually exist. What's missing is a reason to. Automation is a tool. Like every tool, it has cases where it shouldn't be reached for, even when it could be.
It would be commercially better for us to never write a post like this. Telling prospects "here are six reasons we might decline your project" is not a standard marketing move.
We do it because the alternative, taking on builds that fall into these categories and delivering them well, is worse. Worse for us, because we end up with engagements we can't be proud of. Worse for the client, because they end up with systems that quietly degrade their operations in ways that are hard to attribute and hard to undo.
If your situation falls into one of the six above, we'd rather know early. Sometimes the right answer is automation. Often it isn't. The honest version of the conversation has both possibilities on the table from the start.
The first call is free. Send a few sentences about what's slowing your team down, and we'll reply with honest thoughts on whether this is the kind of work we can help with.
Start a conversation