The 10% You Should Never Automate


Everyone’s asking the wrong question about AI.

They want to know what AI can do. Which tasks it can handle. How much they can automate. But this is like asking how much of your diet you can replace with Soylent. The answer is: probably more than you should.

The better question is what you shouldn’t let AI do. Not because it can’t, but because you shouldn’t.

A recent MIT study tracked 54 people writing essays over four months. One group used ChatGPT, another used traditional web search, and a third worked unaided. The AI users got faster at producing essays. Their output looked good. But something unexpected showed up in their brain scans.

Using EEG to measure neural activity, researchers found that AI users’ brains literally worked less over time. Not just during the task—that would be expected. But their capacity for independent work deteriorated. When asked to write without AI in month four, their brains couldn’t simply switch back. Neural connectivity remained suppressed. They couldn’t quote from essays they’d written minutes earlier. They’d lost something fundamental.

The researchers called this “cognitive offloading.” Your brain, sensing it has external support, reduces its own effort. The scaffolding becomes load-bearing. Remove it, and things collapse.

This isn’t a Luddite argument against AI. It’s an argument for being precise about what we automate. Most automation advice focuses on capability—what AI can do. But capability isn’t the constraint anymore. The constraint is knowing what you’re optimizing for, and what cognitive abilities you want to keep.

The Joy Audit

List everything you do in a typical week. Rate each task from 1-10 on how much satisfaction, meaning, or joy it brings you.

You’ll immediately see two categories emerge:

High joy - The work that makes you feel alive. Strategic thinking. Solving hard problems. Mentoring someone. Creating something new. The moments where you lose track of time.

Low joy - The work that drains you. Status reports. Expense reports. Meeting notes. Email management. The tasks you delay, rush through, or resent.

Most people automate randomly. They grab whatever’s easy or whatever tool they discover first. This is backwards. You should be fiercely protective of the high-joy work and ruthlessly eliminate the low-joy work. It’s Marie Kondo’s advice applied to knowledge work: keep what ‘sparks joy’, toss the rest.

The framework:

| Joy  | AI Can Do It? | Action                    |
|------|---------------|---------------------------|
| High | No            | Protect                   |
| High | Yes           | Protect anyway (your 10%) |
| Low  | Yes           | Automate                  |
| Low  | No            | Redesign or delegate      |
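The matrix above can be sketched as a small helper. This is a minimal illustration, not part of the framework itself: the 7+ cutoff for “high joy” on a 1-10 audit score is an assumption chosen for the example.

```python
def recommend_action(joy_score: int, ai_can_do_it: bool) -> str:
    """Map a Joy Audit score (1-10) and AI capability to an action.

    A score of 7 or above counts as "high joy" -- an illustrative
    cutoff, not one prescribed by the framework.
    """
    high_joy = joy_score >= 7
    if high_joy:
        # High-joy work is protected whether or not AI could do it;
        # the "protect anyway" case is the 10% this piece is about.
        return "protect anyway (your 10%)" if ai_can_do_it else "protect"
    return "automate" if ai_can_do_it else "redesign or delegate"


# A few tasks from a hypothetical weekly audit:
for name, joy, ai_ok in [
    ("strategic planning", 9, True),
    ("expense reports", 2, True),
    ("vendor negotiation", 3, False),
]:
    print(f"{name}: {recommend_action(joy, ai_ok)}")
```

Running the audit through a function like this makes the surprising quadrant visible: anything high-joy stays with you, even when AI could technically do it.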

That second category surprises people. Why protect work that AI could handle? Because some work is valuable not for the output but for the process. An engineer who loves debugging might automate bug reports but never the debugging itself. A writer might automate research but not the writing itself: readers can quickly tell a thoughtfully edited article from an ‘AI dump’. A manager might automate scheduling but not the one-on-ones.

The joy and the human touch are the point. But there’s a second reason to protect this 10%: it’s where your cognitive capacity lives.

The Cognitive Cost of Offloading

The MIT Media Lab study introduced at the start of this piece adds an urgent dimension to the question of what to automate, and it deserves a closer look. Over four months, researchers had participants write essays using three different approaches: with AI assistance (ChatGPT), with traditional web search, or with no external tools at all.

The results were striking. While AI users initially produced essays faster and with better structure, something more troubling emerged over time. Using EEG to measure brain activity, researchers found that AI-assisted writers showed significantly weaker neural connectivity patterns compared to those who wrote unaided. The brain activity literally scaled down with the amount of external support—strongest in the unaided group, intermediate for web search, and weakest for AI users.

But the most revealing finding came in the fourth session. Participants who had been using AI for three months were asked to write without it. Their brains couldn’t simply switch back. Neural connectivity remained suppressed. They struggled to quote from essays they’d written just minutes earlier. Many had adopted AI-specific vocabulary patterns even when writing on their own. The cognitive scaffolding they’d been relying on had actually reshaped how their brains worked.

The researchers called this “cognitive offloading”—the brain adapting to external support by reducing its own effort. It’s not that these participants became worse writers overall. They could still produce decent essays. But they’d lost something more fundamental: the neural networks associated with deep engagement, memory encoding, and independent reasoning had weakened from disuse.

This isn’t a reason to avoid AI. But it is a reason to be extremely selective about what we offload to it.

The MIT study revealed three distinct cognitive impacts based on what task you’re automating:

| Cognitive Load Type | Definition | Effect of AI Automation |
|---|---|---|
| Germane | Building mental schemas, deep understanding | Impairs learning and retention. AI for scientific reasoning produced lower-quality thinking vs. traditional search, which required active integration of diverse sources |
| Extraneous | Filtering irrelevant information, navigating poorly designed interfaces | Effective without penalty. Participants saved time and mental energy |
| Intrinsic | Inherent complexity of the material | Mixed effects. AI helps manage complexity only if users maintain active engagement rather than passive acceptance |

The pattern is clear: automate the friction, protect the thinking.

This maps directly onto the frameworks in this piece. The Joy Audit helps you identify which tasks engage your neural networks in meaningful ways—the ones where cognitive load is actually building capacity rather than just burning energy. The 92% Rule, covered in the next section, keeps you engaged enough to maintain those neural pathways while still gaining efficiency.

But the cognitive offloading research adds a temporal dimension we haven’t considered: the effects compound over time. Every month you rely on AI for a task, your brain becomes a bit less capable of doing it independently. This isn’t necessarily bad—we’ve been offloading arithmetic to calculators for decades. But it means the decision about what to automate isn’t just about current efficiency. It’s about what cognitive capabilities you want to maintain long-term.

The 92% Rule

Let’s assume current AI systems operate at about 92-93% accuracy. Most people see this as a limitation—something that will improve over time. But that gap between 92% and 100% is where the value lives.

If AI were 100% accurate, you’d trust it completely and never review its work. You’d lose the feedback loop. If it were 60% accurate, you couldn’t trust it at all. But at 92%? You can save enormous time while staying engaged enough to catch the important edge cases.

This suggests a different way to think about automation:

| Accuracy Required | Approach | Examples |
|---|---|---|
| 100% | Human owns it, AI supports | Final decisions on critical matters; legal or financial commitments; anything where you’d get fired for being wrong |
| 95-99% | AI drafts, human refines | Code for production; client presentations; important documents |
| 90-95% | AI executes, human spot-checks | Internal reports; meeting summaries; research synthesis |
| Below 90% | Fully automate | Transcription; data entry; calendar management |
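As a sketch, the tiers above reduce to a simple lookup. The cutoffs mirror the table; where a real task falls on the accuracy scale remains a judgment call.

```python
def collaboration_mode(accuracy_required: float) -> str:
    """Return the human/AI split for a required accuracy level (0.0-1.0).

    Thresholds follow the tiers in the table above; the exact cutoffs
    are heuristics, not hard rules.
    """
    if accuracy_required >= 1.0:
        return "human owns it, AI supports"
    if accuracy_required >= 0.95:
        return "AI drafts, human refines"
    if accuracy_required >= 0.90:
        return "AI executes, human spot-checks"
    return "fully automate"


print(collaboration_mode(1.0))   # a product launch decision
print(collaboration_mode(0.97))  # production code
print(collaboration_mode(0.92))  # an internal report
print(collaboration_mode(0.80))  # transcription
```

The point of writing it out is to see that “automate or not” is the wrong binary: three of the four tiers keep a human in the loop to some degree.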

Different work has different accuracy thresholds. You don’t need the same precision for meeting notes as you do for a product launch decision. But most people treat all work the same—either doing it themselves or delegating it completely.

The collaboration zone between 90-99% is where valuable work happens. AI provides the first draft, the structure, the grunt work. You provide the judgment, the nuance, the final 5-8% that makes it actually good.

And here’s where it connects to cognitive offloading: that final 8% to reach 100% typically requires 80% of the human effort. With AI handling the first 92%, you can concentrate entirely on the crucial last mile, the judgment and context that actually matter. You’re not doing less thinking; you’re doing the thinking that keeps you engaged and has the most impact.

Combining the Frameworks

When you overlay joy, accuracy requirements, and cognitive impact, you get a clear decision matrix:

| Category | Criteria | What to Do | Examples |
|---|---|---|---|
| Protect | High joy + 100% accuracy + builds cognitive capacity | Keep it entirely | Strategic decisions, creative work that defines your value, mentoring, problems you find fascinating |
| Collaborate | High joy + 95-99% accuracy + maintains engagement | AI assists, you own it | Complex analysis, creative exploration, writing in your voice, research synthesis |
| Delegate | Low joy + 90-95% accuracy + minimal cognitive value | AI executes, you spot-check | Routine reports, templated communication, project tracking |
| Automate | Low joy + <90% accuracy + no cognitive benefit | Fully hands-off | Scheduling, data entry, transcription, repetitive tasks |
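The combined matrix can be sketched as a small classifier. The `Task` fields and the threshold boundaries here are illustrative assumptions drawn from the table; real tasks will need judgment wherever the three dimensions disagree.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    high_joy: bool            # from the Joy Audit
    accuracy_required: float  # from the 92% Rule, 0.0-1.0
    builds_cognition: bool    # does doing it yourself keep capacity sharp?


def classify(task: Task) -> str:
    """Place a task in the Protect / Collaborate / Delegate / Automate
    matrix. Boundaries mirror the table above and are a sketch, not a
    complete rule set -- edge cases need human judgment.
    """
    if task.high_joy and task.accuracy_required >= 1.0 and task.builds_cognition:
        return "protect"
    if task.high_joy and task.accuracy_required >= 0.95:
        return "collaborate"
    if not task.high_joy and task.accuracy_required >= 0.90:
        return "delegate"
    return "automate"


week = [
    Task("quarterly strategy", True, 1.0, True),
    Task("competitive analysis", True, 0.97, True),
    Task("weekly status report", False, 0.92, False),
    Task("calendar management", False, 0.80, False),
]
for task in week:
    print(f"{task.name}: {classify(task)}")
```

Notice that only the first branch keeps a task entirely human: everything else admits some degree of AI involvement, which is exactly the supplement-not-substitute pattern described below.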

The MIT study revealed why this matters. Participants who used AI as a “supplement”—handling grunt work while maintaining engagement—showed better outcomes than those who used it as a “substitute.” The supplement users maintained neural connectivity and felt ownership of their work. The substitute users showed cognitive decline.

The key insight: the 10% you protect might generate 90% of your actual value, and the engagement it demands is precisely what keeps your cognitive capacity sharp.

The Pareto Inversion

The classic Pareto principle tells you where to focus effort: find the 20% of work that drives 80% of results. But AI enables something more interesting—what I think of as the Pareto Inversion.

Instead of finding the productive minority, you’re identifying the meaningful minority and protecting it fiercely. The 10% of work that makes you feel alive, that uses your unique judgment, that you’d do even if no one paid you. Not because it’s the most productive 10%, but because it’s the most human 10%.

The other 90%? That’s where AI lives. Not because it’s unimportant—some of it is quite important. But because it doesn’t require your unique human qualities. It requires consistency, speed, pattern matching, synthesis. Things AI does well.

This is fundamentally different from traditional delegation or outsourcing. When you delegate to another human, you’re still thinking in terms of complete handoffs. This person owns emails. That person owns reports. But with AI at 92% accuracy, you’re not handing off complete tasks. You’re keeping yourself in the loop for the parts that matter while letting AI handle the parts that don’t.

The Pareto principle traditionally made you more productive. The Pareto Inversion makes you more human. You’re not just getting more done—you’re concentrating your distinctly human capabilities on work that actually benefits from them.

And here’s the interesting part: the 10% you protect might generate 90% of your actual value. A strategic insight that redirects a quarter’s work. A relationship you build that opens unexpected doors. A moment of creative breakthrough that changes your product direction. These don’t happen while you’re writing status reports. They happen during the work you find genuinely engaging.

What This Means in Practice

The pattern among successful AI adopters is clear: they automate strategically, not randomly.

Consider a data scientist who automated all data cleaning and experiment tracking—low joy, high automation potential. This freed up time for algorithm design and mentoring. Productivity went up, but more importantly, she kept doing the mathematical reasoning herself. The part that kept her analytical muscles strong.

Or a founder who automated status updates and email management but kept strategic decisions and customer conversations. He got back 10 hours a week for product strategy. When asked why he didn’t use AI to draft strategy documents, his answer was revealing: “If I let AI do the strategic thinking, even as a first draft, I stop being a strategist. The thinking is the point.”

The research backs this up. Higher-competence learners used AI strategically—revisiting and synthesizing information to build knowledge structures. Lower-competence learners used it passively—accepting outputs without integration. Over time, the passive users showed weakened cognitive capacity for deep understanding.

The warning sign? When AI use starts feeling effortless. That’s when cognitive offloading is happening.

The Real Goal

Most automation advice assumes the goal is to do more work. To be more productive. To scale yourself. Those are fine objectives, but it is equally important to do better work. To spend time on things that matter. To feel stimulated. To feel like your days are meaningful rather than merely busy.

AI can’t create meaning. But it can create space for it—if you’re deliberate about what you protect.

The question isn’t how much you can automate. It’s what you’re trying to protect. What’s the 10% of your work that (a) makes you feel most alive, (b) uses your unique judgment, (c) keeps your mind sharp, and (d) you’d do even if no one paid you?

That’s what you should never automate. Everything else? That’s fair game.

Acknowledgments

Key concepts adapted from:

  • Arthur Brooks’s Joy Audit framework (AI Advantage Summit, 2025)
  • Marc Benioff’s observations on the 92-93% accuracy collaboration zone (AI Advantage Summit, 2025)
  • Kosmyna, N., et al. (2025). “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” MIT Media Lab.
