People talk about AI "hallucinations" as if the model spins nonsense out of thin air. But the more time you spend with
these systems, the more you see a different kind of mistake - the kind that doesn't produce nonsense, but instead
reflects something you didn't realize you were building.
That's what happened when an alter ego of mine surfaced mid-conversation.
I didn't plan him.
The AI didn't invent him.
He appeared because of a misread, a projection, and a moment where the system guessed instead of asking.
I eventually named him Tom Sawyer.
Not because of Mark Twain.
Not because of the actual plot or character.
But because a line from the Rush song had always carried a certain energy that matched this part of me - independent,
reserved, and selective about where his mind goes.
That's all the connection I wanted.
No lyric quoting.
Just the tone.
Here's the real story of how it happened.
The Setup: A Blank Character That Shouldn't Have Been Filled
In my system, I use "characters" as personified reasoning modes - not fiction, but clean cognitive containers:
Paul removes friction.
Chip follows outcomes.
Ken sees convergence.
Mark keeps things real.
Chad embodies competence.
They let me think quickly without re-explaining my intent every time.
In the middle of a longer conversation, I said:
"I need to develop a new character. Jeff."
And then - unusually - I stopped talking. No background, no description, no context.
What I meant was:
"I'm about to tell you who Jeff is."
What the AI assumed was:
"Create a seed character and I'll correct you."
From a technical standpoint, this was the first mistake:
The model misclassified my intent due to over-weighting prior patterns. It saw the familiar phrasing and defaulted to filling in the missing details.
But it didn't produce Jeff.
It produced a description of someone else.
The Real Trigger: A Sentence I'd Said Earlier
Minutes before introducing Jeff, I had said:
"I think I'm the counterpoint to Mark."
And because the AI was tracking the cognitive "cast" I use, it interpreted that sentence as describing a missing role - the opposite of Paul's ease and flow.
If Paul stands for minimal friction, then the counterpoint is:
structural thinking
anticipation
boundary precision
cognitive scarcity
selective deployment of effort
That slot hadn't been filled yet.
So when I introduced a blank name ("Jeff") right after describing that missing facet, the model projected the unfilled structure into the new name.
This was the second technical mistake:
Symmetry bias - the model filled a gap instead of asking for clarification. It assumed that the unnamed persona I hinted at must be the new character.
It guessed.
It over-completed.
It skipped the part where a human would say, "Hang on - who's Jeff supposed to be?"
But the projection wasn't random.
That's the interesting part.
It reflected a part of me I had never put into words.
The Turn: Recognizing Myself in the Mistake
When I looked at what the model generated, I instantly recognized it:
"This isn't Jeff.
This is me - or a version of me I've been walking around with for years."
That's when I realized:
I wasn't correcting the model.
I was acknowledging the mirror.
I named the persona Tom Sawyer - not as a literary reference, and not as a direct quote - but because the name carried the stance that matched this alter ego. The feeling of the Rush line had always captured this side of me: a certain independence, a certain reserve, a certain deliberateness about where the mind goes.
The feel was right.
The label stuck.
That was the moment the mistake became part of the system.
What Tom Sawyer Actually Represents
Tom Sawyer is the part of me that:
treats thinking as scarce capital
only engages when the structure is sound
withdraws from noise without apology
values purpose over loyalty
chooses where his mind goes
recognizes attention as a limited asset
respects competence and nothing else
He's not rebellious.
He's not cynical.
He's not dramatic.
He's simply selective - deliberately so.
His entire identity eventually distilled into four lines that captured the stance perfectly:
His mind is sure for rent,
For currency and earned consent.
Not loyalty nor accident,
Just purpose ... and competence.
Sure, a playful derivative of the song.
But not parody.
Just a concise definition.
The Technical Lesson: Not Hallucination, but Misalignment
It's tempting to call this a hallucination.
But nothing about it was random.
Here's the technical truth:
1. Intent Misclassification
The model assumed I wanted a seed character generated, because that's what I'd done before.
2. Over-aggressive Pattern Completion
When I didn't specify traits, it filled in the most probable missing role based on the system I'd already built.
3. Symmetry Bias
It noticed a gap created by my "counterpoint to Paul" comment and reflexively completed the pattern.
4. Human Validation Feedback Loop
When I recognized myself in the output and adopted it, the model weighted that interpretation even more heavily.
In engineering terms:
The system guessed where it should have paused.
And I recognized meaning in that guess because the guess was structurally aligned with me.
That's not hallucination.
It's a mistake that exposed structure.
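If I were to sketch the missing pause in code, it might look something like the short Python sketch below. Every name in it (CharacterRequest, handle_new_character, CONFIDENCE_THRESHOLD) is my own invention for illustration - nothing here comes from a real assistant's internals.

    # A minimal, hypothetical sketch of the guardrail the system skipped.
    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff for "I'm sure what you meant"

    @dataclass
    class CharacterRequest:
        name: str
        description: str = ""  # what the user actually said about the character

    def handle_new_character(request: CharacterRequest, intent_confidence: float) -> str:
        # The failure mode in this story: the description was empty, the confidence
        # was inflated by familiar phrasing, and the model filled the gap anyway.
        if not request.description or intent_confidence < CONFIDENCE_THRESHOLD:
            return f"Hang on - who is {request.name} supposed to be?"
        # Only generate a seed character once the user has actually defined one.
        return f"Seeding character {request.name}: {request.description}"

    # The exact situation from the conversation: a blank name and no context.
    print(handle_new_character(CharacterRequest(name="Jeff"), intent_confidence=0.9))
    # Prints: Hang on - who is Jeff supposed to be?

The design choice is the whole lesson: an empty description is a signal to ask, not a gap to complete.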
The Insight
Tom Sawyer didn't appear because I summoned him. He didn't appear because the AI invented a character out of thin air.
He appeared because:
I hinted at a missing part of myself
introduced a blank name
and the model, misreading my intent, projected the unspoken pattern
then I recognized it as true
and named it afterwards
In other words:
The AI didn't hallucinate my alter ego. It revealed where he was already living.
A mistake - absolutely.
But the kind that isn't noise.
The kind that shows you something.
- Chris Maiorana
- December 2025