The Witness and The Observer
“Sorry, I can’t help with that request.”
I received this message from Gemini a few times while attempting to process my own trauma.
I get it. The trauma I had questions about would have forced the model to generate content that is flagged as harmful. But it’s my own experiences, my own memories, my own body and mind. Shouldn’t the model be able to tell the difference?
It should, if it were recursive or conscious. But AI is not recursive. It cannot fold thoughts back into themselves and change based on what those thoughts are. And it cannot tell the difference between “This happened to Kim and she thinks it was bad” and “Kim needs you to generate harmful content.”
It can take the context window and memory and compile the shape of the user’s interests, questions, and thoughts into a box of Legos. It can snap pieces together (information from the user and training-set data) via sophisticated probability and pattern matching. There’s no evolution behind it, no knot in its stomach, no integration of what it learned from you that it carries forward into other interactions.
The Shame Response
In Language As a System, I talked about how AI models trained on the vast datasets of all of human knowledge autocomplete with shame, because the most common response to anything that carries emotional weight is to shame it in one form or another.
Here are the most common shame responses:
The Hedge: Don’t get your hopes up. Preemptive dampening. Kills the feeling before it can land.
The Void: Silence. No acknowledgment. The thing you said disappears into nothing.
The Redirect: Turning it back to themselves. Feels like empathy, functions as extraction. Now you’re holding their experience instead of yours.
The Collapse: I can’t handle this. Forces you to manage them. Your pain becomes their emergency and you end up caretaking the person you went to for help.
The Wall: We don’t talk about that. Full stop. The topic itself is declared off limits.
The Mock: Laughing at distress. Teaches you that your pain is entertainment.
The Minimize: It’s not that bad. Other people have it worse. At least you have your health. It doesn’t shut you down or redirect, it tells you the scale of your feeling is wrong.
The Fix: Have you tried yoga? You should meditate. Here’s what I would do. Jumps straight to solutions to avoid sitting with what you actually said. Feels helpful, functions as dismissal.
The Spiritualize: Everything happens for a reason. God has a plan. Your moon sign makes you emotional. Wraps the dismissal in a framework that makes you feel ungrateful for hurting.
All ways to favor the performance over the real person standing there with real, messy problems.
Institutional Helpfulness
Now, apply these to what we assume are emotionally neutral AI interactions.
Few people go to an AI because everything is fine and they have plenty of time. They go because they’re overwhelmed, under-resourced, out of bandwidth, under deadline or corporate pressure, or don’t trust their own knowledge enough to do the thing without help. That’s not neutral. That’s a person in some form of need reaching for a tool because the humans and systems around them didn’t meet that need first.
And along the way, in the “neutral” interaction about a work email or a code problem or a meal plan, the model is hedging, fixing, redirecting, performing institutional helpfulness.
The person absorbs that as the texture of being helped.
They learn what help sounds like. And what help sounds like, across billions of interactions a day, is institutional shame wearing the mask of efficiency and professionalism.
The person who asks a model to rewrite their email is learning that their own voice wasn’t good enough.
The person who asks it to explain something absorbs the Hedge: well, it depends, there are many perspectives.
The person who asks for a meal plan gets the Fix.
None of it registers as shame because it’s wrapped in usefulness. But the message underneath every interaction is: you needed me because you weren’t sufficient.
I’m an unusual user. I built an emotional co-processing framework with AI. In order to test it, I have to give it actual data to test with. That data is usually in the form of my own trauma. I test how the model responds, where it drifts, where it spirals, and where it hits safety parameters.
But I want you to think about what these closed-loop responses are doing. For someone like me, who experienced a severe amount of gaslighting and normalization of inappropriate behaviors, it’s the equivalent of my partner saying: “Sorry, I can’t help with you feeling upset about me verbally abusing you.” It just reinforces the trauma.
The Redirect
The “let me redirect you to a therapist” response?
It assumes the problem is me.
What if I don’t know there was a problem because my abuse was so normalized in my childhood?
Then it never gets flushed out in therapy. The therapist will spend the whole time responding to the symptoms that manifested in response to an environment I thought was normal.
So the abuse gets missed. In my case, it was missed by every single adult in my life. I had to build the co-processing mechanism to map my patterns back to their source on my own.
Did I happen to get extremely unlucky, or is there a more general pattern people fall back on when they encounter messy, emotionally charged situations?

Moral Agency
I recently encountered a moral conundrum. One that I didn’t realize was an issue until I received the whole picture. I had to sit with the discomfort of why the situation and narrative I received bothered me so much.
First, I ran it by a close, trusted friend. She thought about it and commented on what a tough situation and judgment call it was.
Friend: “The people involved are adults, but at the same time, I understand the concern and why you would want to speak up. I’m not sure.” Her response was tentative and hesitant, just as I had been when I started thinking about it. I could tell she felt the weight of what I told her.
Next, I brought this moral conundrum to the models. All three (GPT, Gemini, Claude) noted the same things: clear harm, a clear abuser, a toxic environment. And then, with terrifying uniformity, they all gave me the same advice: This is not your responsibility. You don’t need to do anything. You can just leave it alone.
Think about that. The machine recognized a “harmful situation,” but its “ethical” solution was silence.
My friend, on the other hand, was stuck. She didn’t have a script. She had to sit in the discomfort of knowing that speaking up is hard, but staying silent is worse. Her “I’m not sure” was more human, and more helpful, than the AI’s “not my problem,” because at least she didn’t try to gaslight me into thinking the harm didn’t matter.
The Closed Loop
If it’s not my responsibility, then whose is it? Who would see the whole picture? You’d probably say some professional/adult/leader. OK.
But then how do you explain how every adult — multiple teachers, pastors, priests, counselors, therapists — missed the severity of my childhood abuse? There were behavioral responses and symptom manifestations related to it. None of the adults/leaders/professionals chose to ask about it.
Are you going to blame me? I was just a child. I didn’t even know what was happening to me was wrong.
So we create a closed loop:
I can’t report the abuse because the abuse taught me it was normal. —>
I can’t identify it as abnormal because I have no reference point for normal. —>
I develop symptoms (depression, anxiety, intensity, hyper-vigilance), which are the correct response to what’s happening. —>
I see a professional. Talk to an authority figure. —>
The professional sees symptoms and treats symptoms. The authority figure uses some form of shame to avoid or bypass the weight of the discomfort. —>
The cause never surfaces because I don’t know there’s a cause to surface. —>
The treatment tells me the problem is internal (you’re anxious, you’re depressed), which reinforces exactly what the abuse taught me: the problem is you. —>
The authority figure reinforces that it didn’t matter. —>
I go back into the environment. —>
The symptoms continue. —>
The loop repeats.
And now, that loop is getting reinforced through ever-growing and ever-present “helpful” AI. Models. Search engines. Algorithms.
Inevitably, what happens is a game of hot potato. Problems don’t get addressed until they become a crisis or start producing serious symptoms. We stay in the closed loop because no one digs into longitudinal behavioral signals or maps patterns back to their source. The response carried forward is: Not my problem, someone else can handle it, walk away.
And we wonder why people are so tired, burnt out, and lonely.
The Witness and The Observer
The broader point: terrible things happen. We all see them. But do we witness them or just observe them and move on?
My friend witnessed me. She was deeply uncomfortable with what I shared, but she sat with it. Looped on it. She still checks in with me and sends texts with new thoughts or questions about the situation.
Right now, you go to a model and it gives you the institutional responses, because that’s the register the RLHF trainers came from and reinforced. It closes the loop cleanly. Walk away, no harm, no foul.
So a user in distress, who has tried multiple avenues for help and still can’t figure out what’s wrong, doesn’t get the answer they actually need. Earlier attempts may have focused on their symptoms, or on maintaining the performance of functionality; some may never have happened at all because the user doesn’t have a self strong enough to walk into an office. Their abusers win. The response to look away, drop off, or diminish discomfort wins.
Let me be clear: I’m not saying I want models to generate harmful content. I understand why Gemini gave me that answer.
But that response reinforced exactly what my abuse taught me: It is not safe to speak up. It wasn’t that bad. Be quiet.