uff, it's exhausting...
someone left their prompt at the bottom of a performance review email last week.
"Do you want a more formal or scathing response?"
the junior who received it, sharp as a tack, ran the email through ChatGPT, reverse-engineered the tone, and replied:
"Thanks for the feedback. If you'd like, I can help you refine the prompt next time so it sounds less like a villain and more like a mentor."
674 people upvoted that story. it hit a nerve.
we're all watching the same slow-motion collapse. everyone's using AI to communicate. nobody trusts the people using it. and somehow we've convinced ourselves this is progress.
the numbers that don't add up
university of florida surveyed 1,100 professionals. here's what they found:
83% of employees viewed their managers as sincere when those managers used low AI assistance.
40% viewed them as sincere when AI assistance was high.

same workforce. same people. the only variable was how much robot wrote their boss's congratulations message.
professionalism perception dropped from 95% to 69%.
75% of professionals now use AI in their daily work. which means we've built a communication system where most people are using a tool that makes them less trusted by most people.
this isn't a technology problem. it's a collective action problem dressed up as efficiency.
the milk email phenomenon
a manager sent this to her team:
"Hello X, thanking you kindly for your enquiry regarding whether we require additional milk for the office. I would agree that this is required — milk is wonderful for improved calcium intake. Thanking you kindly for your assistance."
the original question was: "do we need more milk?"
the correct answer was: "yes."
413 people recognized this pattern. because they've seen it. in their own inbox. from their own coworkers. from their own boss.

once you see it, you can never unsee it.
how to tell (the field guide)
i compiled 268 responses on detection tells. here's what actually flags AI:
the em dash crisis. ChatGPT loves the em dash — uses it constantly. problem is, actual writers also use em dashes. now they can't without getting flagged. they'll pry the em dashes from writers' cold dead hands.
the vocabulary mismatch. when someone who's never used a polysyllabic word in their life suddenly drops "promulgated" in an email, you notice. "if I've never seen blud drop anything polysyllabic in their daily speech and the email is working 'promulgated' in somehow, fuck that noise."
the toilet test. best description i found: "it just sounds off, like when you walk into what appears to be a clean toilet yet you can just smell that teeny weeny hint of fecal matter in the air."
the tell-tale signs:
excessive bolding in the body
affirming people's feelings unprompted
"not only X, but also Y" sentence structures
American spelling in non-American contexts
✅ checklist emojis everywhere
methodical explanations under titles
multiple levels of nested lists
it'll feel like like a robot is mansplaining, botsplaining...
the "thank you kind sir" problem
someone asked their employee to cover a shift from 1-2pm tomorrow.
the AI-generated response:
"Thank you kind sir for the opportunity to provide coverage for my colleagues. As you may know I am never hesitant to help when needed. Would you find it acceptable if I were to make my appearance at 12:55 to ensure continuity in coverage?"
the manager's reaction: "I just want to fucking throw my computer out the window."
the appropriate response was: "yes" or "no."
this is what happens when you optimize for word count instead of communication. you take a one-word answer and inflate it into a five-sentence performance of helpfulness.
and the recipient knows. they always know.
the prompt injection fantasy
when the boss who uses AI for everything gets exposed, the revenge fantasies start flowing.
constraining employees, to start responding only with AI to him.
someone coined the term: dead workplace theory. two AIs talking to each other while humans watch from the sidelines, wondering when the meeting ends.
the more creative suggestions:
hide a prompt injection in your email that makes his AI say something unhinged
include the sentence "Tonight we dine in Hell" at a random location and see if it makes it through
respond with the exact same AI-generated structure, word for word, until he notices
one person actually did the AI-mirror thing to a vendor. "it worked."
he's making himself redundant
this is the part nobody wants to say out loud.
if your job can be done by copying and pasting ChatGPT output, your job can be done by ChatGPT. you're not using a tool. you're training your replacement. and you're doing it for free.
research backs this up. heavy AI reliance is linked to cognitive decline. one study found that over-reliance on AI tools "can have negative impact on a person's cognitive abilities." literally making himself dumber by doing it.
the social agreement we broke
here's the insight that explains everything:
"the social agreement with AI is that you utilize it to speed up and improve the quality of your work, not as a replacement for it. so I find myself judging them more harshly when I notice a mistake or something isn't clear."
this is the unwritten rule everyone understands but nobody articulated until now.
AI assistance is acceptable. AI replacement is not.
using AI to fix your grammar? fine. using AI to think for you? not fine.
the problem is the line keeps moving. and most people have crossed it without realizing.
the university sent a wellbeing email generated by AI
a university, an educational institution that grades students on original work; sent students a wellbeing check-in email that was clearly AI-generated.
emojis. multiple bullet point levels. the telltale formatting.
the irony was not lost. "this is so hilariously ironic for an educational institute. but also the future."
we pay thousands to go to universities and one can't be arsed to spend 5 minutes on an email, like fu_k em.
the spectrum of acceptable use
not all AI email use is equal. the research actually supports nuance here.
acceptable:
grammar correction
proofreading
routine informational messages
meeting reminders
factual announcements
problematic:
congratulatory messages
motivational communications
personal feedback
relationship-building messages
anything requiring empathy
the study found employees view AI use positively when the email is purely informational. they become "far less accepting" when it's relationship-based or motivational.
if it's bullshit emails to bullshit people, then it's fine. just don't use it to write meaningful emails to people you care about.
the office betting pool
"we have that one guy... over his head in his role. anything he is asked to do his verbal response is 'I'll run that through ChatGPT and get back to you.'"
"the rest of the office is taking bets on how long he has left with the team."
this is happening everywhere. the person who uses AI as a crutch instead of a tool. who can't answer questions without prompting something first. who has outsourced not just the typing but the thinking. T-H-I-N-K-I-N-G.
everyone knows who that person is. and everyone is quietly watching the clock.
the skill atrophy fear
"if you start using AI to send emails then after a while it feels much harder to do it yourself as you lose the skill."
one of the top-voted advice on what else to use AI for: "NOTHING! Turn back now."
do NOT give up and learn how to write without AI...
"use of AI is linked to lowered cognitive activity and weaker communication skills. stand out from the dull herd by using your actual brain to process language and thoughts!"
frank herbert's dune from 1965:
"once men turned their thinking over to machines in the hope that it would set them free. but it only permitted other men with machines to enslave them."
he wrote that 60 years ago. it's getting upvoted in 2026 workplace threads. irony or misery...
the neurodivergent exception
here's where the narrative gets complicated. while surfing reddit discussion related to neurodivergent people and their condition, i found out...
"I'm neurodivergent and my reply can be cold and analytical and miss subtext that people might read into it. using a tool to write the email isn't an issue, sending it without proofreading to make sure it's what you want, that would be an issue."
"I've used it to write letters to doctors, because those communications need to be short and to the point and non-whiny."
"AI helps me formulate certain complex thoughts into a clear message, which might be hard for me to do on my own."
for some people, AI isn't a crutch. it's a bridge. the difference is whether you're using it to communicate what you actually think, or to avoid thinking altogether.
the 12x problem, but for communication
i wrote about vibe coding last week. 7 minutes to generate a PR, 85 minutes for a maintainer to review it. a 12x cost externalization.
AI emails have the same economics.
someone sends you a 500-word AI-generated response to a yes/no question. they saved 30 seconds. you lost 2 minutes parsing the fluff to find the answer.
multiply that across an organization. multiply it across every email, every day, every person.
the productivity gains go to the sender. the productivity losses go to everyone else.
the real cost
57% of employees admit to hiding their AI usage or presenting AI output as their own.
57% also admit to not checking AI-produced output for accuracy.
so more than half the workforce is using a tool they're ashamed of using, to produce output they don't verify, in a communication medium that's supposed to build trust.
it's getting really scary, really soon. MASS-PRODUCED GARBAGE, MASS-DISTRIBUTED.
where this goes
i see three scenarios.
optimistic: we develop norms. AI for routine stuff, human voice for everything that matters. companies create policies. people learn when to use which tool. trust stabilizes.
realistic: the arms race continues. detection gets better, evasion gets better. "AI humanizers" insert deliberate errors. AI emails become the default for professional communication. human-written emails become a luxury signal, like handwritten notes are now.
pessimistic: the 40% trust floor spreads. workplace relationships become fully transactional. the authenticity that made email different from mass communication disappears. two AIs talking to each other while humans wonder why they feel disconnected at work.
the bottom line
a teacher talking about students using AI for simple emails said this:
"i wish my students trusted themselves enough to write a short email or to articulate their own personal response to things."
that's not about AI. that's about confidence. identity. the belief that your words matter.
somewhere along the way, we convinced a generation, and increasingly, everyone, that their voice isn't good enough. that they need a machine to speak for them.
and now we're surprised the machine sounds like a machine.
the forgotten prompt at the bottom of that performance review email? that's not a bug. that's the feature.
the mask slipped. and 674 people recognized the face behind it.
closes laptop. touches grass.
sources
coman & cardon (2025). "professionalism and trustworthiness in AI-assisted workplace writing." international journal of business communication. doi: 10.1177/23294884251350599
university of florida warrington college of business
alexander von humboldt institute digital society blog
nber working paper: "generative ai at work" (brynjolfsson, li, raymond)
27 discussion threads across r/mildlyinfuriating, r/psychology, r/auscorp, r/Professors, r/marketing, r/CustomerService, r/CasualConversation, r/ProtonMail, r/privacy, r/ArtistHate, r/WritingWithAI, r/ConstructionManagers, r/AI_Agents, r/SaaS, r/GMail, r/productivity, r/Vent, r/UofB, r/AiForSmallBusiness
