Is anybody else bored of talking about AI?


AI Fatigue: We Built This Mess, Now We Fix It.

I'm tired of talking about AI. And if you're honest, you probably are too. It's Wednesday, March 25, 2026, and the industry feels like it's drowning in a flood of AI-generated noise. We've gone from "this changes everything" to "can we just make it stop?" in what feels like a blink. The initial hype cycle promised a productivity revolution, but what we got was cognitive overload and a mountain of content that often feels bland, repetitive, and utterly devoid of soul.

The problem isn't just that there's too much AI. It's that much of it is bad AI, poorly integrated, and pushed with marketing fluff that ignores the actual human cost. Mainstream news outlets are calling it "AI fatigue" and "brain fry," and they're not wrong. People are mentally exhausted from constantly supervising these tools, correcting their errors, and sifting through the dross. Even big players like Microsoft are quietly scaling back some AI integrations because the public perception has shifted from wonder to weariness. We're seeing a collective sigh of frustration on Reddit and Hacker News – a feeling that AI devalues genuine human creativity and the simple joy of discovery.

[Image: a person with their head in their hands, surrounded by glowing screens of AI-generated text and images, in a dimly lit office]

The Cognitive Load is a Failure Mode

Here's the thing: we built systems that optimize for statistical likelihood, not for meaning or passion. That's a fundamental failure mode when you're trying to generate anything that requires nuance or genuine insight. The models are trained on vast datasets, yes, but their output often falls victim to the "Gaussian Fallacy"—it trends towards the average, making it inherently boring. It's like asking a committee to write a poem; you get something technically correct but emotionally flat.

The real drain comes from the interaction model. We're not just using AI; we're managing it. Think about your typical AI-assisted workflow:

  1. Human Initiates: You ask the AI for something.
  2. AI Generates: It spits out a first draft.
  3. Human Evaluates: You read it, immediately spot the hallucinations, the blandness, the outright errors. (I've seen PRs this week that literally don't compile because the bot hallucinated a library).
  4. Human Corrects/Refines: You spend time editing, prompting again, trying to steer it back to reality or inject some actual personality.
  5. Iterate: Repeat steps 2-4, often multiple times, until the output is barely acceptable.
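The loop above can be sketched in a few lines. This is a hypothetical illustration, not a real API: `generate`, `find_errors`, and `refine_prompt` are stand-ins for the model call, the human review pass, and the human's re-prompting. The point it makes concrete is that the human pays a review cost on every single round, whether or not the draft improves.

```python
def generate(prompt: str) -> str:
    """Stand-in for a model call: returns a draft for the prompt."""
    return f"draft for: {prompt}"

def find_errors(draft: str) -> list[str]:
    """Stand-in for human review: flags problems in the draft.
    (Here it always finds one, which is roughly how it feels.)"""
    return ["hallucinated citation"] if "draft" in draft else []

def refine_prompt(prompt: str, errors: list[str]) -> str:
    """Step 4: the human folds corrections back into the next prompt."""
    return prompt + " | avoid: " + "; ".join(errors)

def supervised_workflow(prompt: str, max_rounds: int = 5) -> tuple[str, int]:
    """Steps 1-5: generate, evaluate, correct, iterate."""
    reviews = 0
    draft = ""
    for _ in range(max_rounds):
        draft = generate(prompt)                 # step 2: AI generates
        errors = find_errors(draft)              # step 3: human evaluates
        reviews += 1                             # every round costs a review
        if not errors:
            return draft, reviews
        prompt = refine_prompt(prompt, errors)   # step 4: human corrects
    return draft, reviews                        # settle for "barely acceptable"
```

Note where the cost lands: the review counter increments unconditionally, so the human's cognitive load scales linearly with iterations even though the tool nominally "does the work."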

This isn't augmentation; it's a constant, low-level cognitive battle. You're not creating; you're curating and correcting. That constant supervision, the need to apply critical thinking to a tool that's supposed to be helping you, leads to "AI brain fry": a blast radius of mental exhaustion. And the output's link to genuine creativity is weak, because these models capture correlations in their training data, not the mechanism behind insight. Steve Wozniak isn't wrong when he calls current AI tools "unimpressive" and "disappointing." They are, if your expectation is true innovation rather than just sophisticated pattern matching.

Reclaiming Our Sanity: Intentionality Over Automation

So, what do we do? We stop pretending every problem needs an "AI solution." We pivot from blind automation to intentional augmentation.

  1. Filter the Noise: As individuals, we need to be ruthless about what AI content we consume and create. Prioritize human-made creations. Seek out the genuine, the passionate, the unique.
  2. Build for Purpose, Not Hype: As engineers, we need to ask: Does this specific AI application genuinely solve a problem, or is it just a feature tacked on because "AI"? If it adds more cognitive load than it removes, it's a net negative. Focus on truly valuable applications where AI excels, like pattern recognition in massive datasets or highly repetitive tasks, not creative endeavors that demand soul.
  3. Cultivate Critical Thinking: This is non-negotiable. We have to teach ourselves and our teams to be skeptical, to question AI output, and to understand its inherent limitations. The "black box" isn't magic; it's statistics.
  4. Prioritize Human Ingenuity: AI should be a tool that frees us to be more creative, more innovative, more human—not a crutch that makes us lazy or a treadmill that grinds us down. The joy of discovery, the spark of an original idea, the passion in a well-crafted sentence—these are things AI cannot replicate. We need to protect and foster them.

The era of "AI for AI's sake" is over. It has to be. We need to build systems that respect human well-being and creativity, not just chase the next valuation. The correction is coming, and it's going to be about quality, utility, and sanity.

Alex Chen
A battle-hardened engineer who prioritizes stability over features. Writes detailed, code-heavy deep dives.