my ai did that
Sep 10, 2025

introduction
there's a new shrug slowly creeping into mainstream vocabulary: "Sorry, my AI did that."
at first it sounds like a joke, a way to deflect blame for a clumsy email, a weird design job, a misfired reply. but give it a few years - this phrase will define an era.
I think it'll be a linguistic slip that signals something larger: responsibility itself leaking out of human hands.
my ai did the banner for this blog
the old dance of blame
we've seen this before: the bureaucrat says "It's policy." The employee says "The system wouldn't let me." The driver says "The GPS told me to turn."
we have always outsourced responsibility to tools and structures, but accountability still rested in our hands. ai just gives us a smoother, shinier version of the same move.
the difference this time is speed and invisibility.
phase one: blame the bot
right now, ai is still novel. people still announce it: "I used ChatGPT for this report." Companies brag: "Our support is AI-powered."
the marketing machine thrives on novelty - but when things go wrong, novelty doubles as cover. "Not me, the AI."
this is the phase where ai stands as an external scapegoat - a tool you can point to.
phase two: invisible normal
soon, AI will vanish into the default. Just as no one says "I used a calculator" when they file taxes or calculate budgets or split bills, no one will say "I used an AI" when they write, negotiate, or decide.
we are already almost here.
delegation becomes normalized. the fingerprint of AI fades, and responsibility quietly evaporates.
when ai is background, not foreground, we collectively stop noticing when we've handed it the steering wheel - and when our peers have done the same.
phase three: rewrite
here's where i think the cultural shift unfolds:
marketing: campaign gaffes and manipulations will be blamed on "the AI intern" or the "automated system". A new layer of deniability emerges.
business: companies already use phrases like "the algorithm decided". Soon, CEOs will treat model bias as impersonal accidents of infrastructure.
personal: misunderstandings in chats will be shrugged off with "my auto-reply did that". ghosting, apologies, arguments, conversations will all have a built-in scapegoat.
across domains, agency itself is diluted. the line between human intention and machine mediation blurs, and society adapts by caring less about the distinction - think about it, don't we already see some of this around us?
parallels:
there are many examples of responsibility dissolving into systems:
bureaucracy diffused responsibility across forms, signatures, and offices. Max Weber called this the "iron cage" of modernity - a world where nobody is directly responsible because everyone is just following procedure.
standardized testing: education once rested on teacher judgement. with standardized testing, responsibility for evaluating talent and worth was delegated to machine-readable scores. students' futures became tied to bubbles on a sheet - institutions avoided personal responsibility for subjective calls, and externalized, mechanized decision processes became the norm. i'm not saying standardized tests are bad (debatable). i'm saying we delegated responsibility.
what this means for TRUST:
trust used to anchor our relationships with people. you knew who wrote the book, delivered the order, made the decisions. as AI becomes normalized - we don't know where trust will have to be anchored.
paradoxically, we may also stop caring about authorship altogether - if everyone is using AI, maybe attribution becomes irrelevant. what matters is not who said it, but whether it works. responsibility dissolves not just because people shrug it off, but because culture no longer expects it - this feels like a far-off future.
with all this said, AI also quietly redistributes opportunity. a student from a small town can polish a college essay to the same standard as a school kid with expensive tutors - in a way, lowering the gate.
there is upside, let's be real. delegation can be a relief.
it kills friction. it levels skill floors. it 10x's capacity.
the real problem is that "My AI did that" erases ownership. relief without guardrails becomes deniability.
here's my take - making AI use visible at the exact moment it matters, to the person relying on it and to the person affected by it, should be a workflow rule.
when an ai-tool materially shaped a message/decision/image/video - surface that fact.
make ai visible - that's the idea. be it metadata, one-line footers, clearly marked bot responses, machine-readable receipts.
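to make "machine-readable receipts" concrete, here's a minimal sketch of what one could look like - a tiny JSON disclosure stamped onto an artifact. the field names (`tool`, `role`, `disclosed_at`) are invented for illustration, not any existing standard:

```python
import json
from datetime import datetime, timezone

def ai_receipt(artifact: str, tool: str, role: str) -> str:
    """Build a minimal machine-readable disclosure receipt.

    Illustrative fields only: `tool` names the AI system involved,
    `role` says how it shaped the artifact (e.g. "drafted", "edited").
    """
    receipt = {
        "artifact": artifact,
        "ai_assisted": True,
        "tool": tool,
        "role": role,
        "disclosed_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(receipt)

# attach this as metadata, or render it as a one-line footer
print(ai_receipt("welcome-email.txt", "some-llm", "drafted"))
```

the point isn't the format - it's that the disclosure travels with the artifact, so the person affected can see the help without having to ask.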
the bottom line is:
"My AI did that." will become normal. The cultural mistake will be pretending that it doesn't matter who helped. Make the help visible, keep the human accountable, enjoy the upside. Normalize competence.