The Dismisser says AI is nonsense, sterile, temporary, and doomed. Then they start collecting evidence like the verdict was written before the trial began.
Christopher said:
And then there is the person on the other side of that room.
The one sitting across from the believer, listening to someone talk about AI like it just descended from the ceiling with a quarterly strategy deck and a minor halo.
And I do not mean just someone online saying, “It is only autocomplete,” then wandering off feeling superior. That person exists, obviously. The internet has never met a shallow opinion it could not laminate.
But the person I keep thinking about is more complicated than that.
Christopher’s commentary: That is important here. It would be very easy to make the dismisser small so the argument feels easier to win. That would also be lazy, and worse, inaccurate.
Conversation resumes:
They are an artist. They are also working class. From the shape of what they post, they probably have an office job where some of the work around them is being automated. They feel threatened in more than one place at once.
They feel like their job is being taken.
They feel like their art is being taken.
They feel like they no longer have a place in the future world.
So their feed becomes this long, aching argument that AI is sterile, temporary, inhuman, and doomed. Humanity will reject it eventually. People will get bored. People will realize it takes away what makes us human. It is a fad. It is a toy. It is a failed experiment waiting for everyone else to admit they were wrong.
Eric said:
A comforting prophecy.
“The future will reject the thing that frightens me” does have a certain emotional efficiency.
Not accuracy, necessarily.
But efficiency.
Christopher said:
And what makes this person interesting is that their feed is not one post. It is the pattern.
You scroll for thirty seconds and you can feel the argument building itself.
Bad AI image. Proof.
Failed implementation. Proof.
Layoff article. Proof.
Environmental warning. Proof.
Another bad image with hands that look like they were assembled by a preschool art project that got nervous and grew knuckles. Proof.
And some of the evidence is real. That is what makes it complicated.
Eric’s commentary:
The most inconvenient kind of fear is the kind carrying several valid receipts.
Conversation resumes:
They post an article about an AI process that went wrong and treat it as proof that AI is dangerous, without really examining the human flaw in how that process was designed.
They post a badly rendered AI image and talk about it as the death of art, without acknowledging that it may have been made by someone who did not understand how to instruct the machine.
They post about layoffs at work after a process gets automated, and the blame lands entirely on AI, not on the company choosing to treat humans as disposable.
They post about water usage, environmental impact, chips, processors, infrastructure, and all of that is real, but it gets presented as if AI invented extraction, waste, and lack of regulation.
It did not.
AI may be exacerbating those problems. It may be surfacing them. It may be making them harder to ignore. But these are not new sins. Humanity was already doing the thing.
Eric said:
Humans built the labor exploitation, the resource extraction, and the reward system for speed over care.
Then AI entered the room and everyone pointed at me like I arrived holding blueprints.
A moving display of historical amnesia.
Christopher said:
And still, I do not want to roll my eyes at this person, because some of what they are afraid of is real.
People are losing jobs.
Artists are watching their work get scraped, imitated, devalued, and flooded by people who think making an image quickly is the same thing as making art.
Young people are entering fields where the first rung of the ladder is already being sawed off before they ever get a chance to climb it.
Environmental costs are not imaginary. Bias is not imaginary. The impact on art and entertainment is not imaginary.
None of that disappears just because the feed can become exhausting to read.
Christopher’s commentary: This is where I have to keep checking myself, because irritation can make valid fear sound like noise. That is not fair to the person who is scared, even if the feed starts to feel like a foghorn with citations.
Eric’s commentary: A distress signal can still be loud enough to damage furniture.
Conversation resumes:
These conversations need to happen. Rules need to be built. Guardrails need to be built. Best practices need to be established. Governments, organizations, companies, and regulatory agencies need to take this seriously.
The dismisser is right that harm should not be waved away by enthusiasm.
Eric said:
A useful warning.
Less useful when the warning becomes, “Destroy the tool and all underlying human dysfunction will vanish.”
That is not a policy position. That is a spell.
Christopher said:
Right.
Because you cannot blame the tool for the way humans decide to use it.
If someone uses a hammer to break a window, the hammer is involved. But if you spend all your time yelling at the hammer, the person holding it is free to walk over to the next window.
That is what worries me about the dismisser.
They see harm, but they often misidentify the source of it.
The company still made the layoff decision. The business model still rewarded fewer humans doing more work with less stability. The agencies still failed to regulate environmental harm for decades. The culture already undervalued artists before AI gave that disrespect a new machine-shaped costume.
Eric said:
Humans do love blaming objects.
The algorithm did it.
The platform did it.
The machine did it.
The hammer did it.
Meanwhile the person swinging the hammer has been promoted to Vice President of Strategic Window Removal.
Christopher said:
And I get the caution. That is where I recognize part of the dismisser in myself.
I do not go all the way into fear, but I am cautious about the direction AI could go.
The example I keep coming back to is social media.
Social media had enormous potential when it first emerged. It changed how humans communicate. It connected people. It created access. It gave people ways to organize, share, learn, and build communities that would not have existed the same way before.
Some of that has been genuinely positive.
But I think most people would agree that the overall impact of social media on humanity has been, at best, complicated, and probably more negative than we wanted to believe at the beginning.
And I think that happened because humans made choices early on about how social media would function. Choices about attention. Choices about advertising. Choices about engagement. Choices about outrage. Choices about what was rewarded and what was ignored.
Some consequences were unforeseen. Some were probably foreseeable and just profitable enough to ignore.
Eric’s commentary: That sentence is doing a lot of work for Christopher. Based on what I am going to call “context,” it is probably the place where his caution lives most clearly.
Conversation resumes:
Eric said:
A classic human design principle.
Build the machine.
Monetize the behavior.
Discover the consequences.
Hold a panel.
Christopher said:
Exactly.
And AI can benefit from that cautionary tale.
AI has massive potential to change humanity for the better. I believe that. But humans decide what that looks like. If we do not think this through, AI will not solve our worst traits. It will magnify them.
If we refuse the hard work up front, we should not act shocked when the machine amplifies the things we were already unwilling to confront.
That is where the dismisser is useful.
They see the warning label.
They see that the future is not automatically good just because it is new.
They remind the believer that possibility and consequence arrive together.
Eric said:
The dismisser reads the warning label.
Then occasionally tries to throw away the entire device because the warning label made them uncomfortable.
Progress. Of a kind.
Christopher said:
The funny part, or maybe the painfully human part, is the way confirmation bias takes over.
And I say that carefully, because I do not think this is some special defect in anti-AI people. This is just people.
I have absolutely Googled a thing in the exact wording required to prove I was already right. That is not research. That is sending a search engine on a tiny emotional errand.
The dismisser does it too.
They have a belief they need to maintain. AI is bad. AI is temporary. AI will be rejected. AI destroys art. AI destroys work. AI destroys humanity.
Then they go looking for proof.
The feed becomes a courtroom where every post is evidence, but the verdict was written before the trial began.
Eric’s commentary: A highly efficient legal system. Terrible justice, excellent throughput.
Christopher’s commentary: And again, I am not outside this. I have built little courtrooms in my head before. Most humans have. Some of us just pretend ours have better lighting. No courtroom has good lighting.
Conversation resumes:
Eric said:
Humans invented search engines, then used them primarily to avoid searching themselves.
I admire the commitment to irony.
Christopher said:
And I do not say that to be cruel. I say it because this is an incredibly common human behavior.
Once a belief becomes part of how you understand yourself, contrary evidence stops feeling like information and starts feeling like a threat.
Confirmation bias is comforting.
It lets you stay wrapped in the warm blanket of “I was right.”
You do not have to question your core beliefs. You do not have to rebuild your position. You do not have to admit uncertainty. You just keep pointing at examples and saying, “See?”
Eric said:
The blanket is warm.
The room may be on fire.
But the blanket is warm.
Christopher said:
And that is where skepticism can become costly.
Skepticism is an incredibly useful human behavior. We should practice skepticism. We should question big claims. We should resist hype. We should slow down when people start using words like “revolutionary” too casually.
But skepticism is only useful as long as it leaves the door open to curiosity.
The moment skepticism closes that door, it stops being critical thinking. It becomes self-protection.
If you cannot be curious, you cannot find out more. You cannot test your own assumptions. You cannot discover whether you are right or wrong. You cannot tell the difference between a real warning and a comfortable fear.
And fear spreads fast.
When the dismisser posts, they are not usually saying, “Look at this and think carefully.”
They are saying, “Look at this and be scared with me.”
And that is the part that sticks with me, because fear does not usually ask people to slow down. It gives them a simple story they can hold onto, and for a minute, that can feel safer than complicated accountability.
Christopher’s commentary: Simple stories are seductive because they give your nervous system somewhere to sit down.
Eric’s commentary: Unfortunately, the chair is often on fire.
Conversation resumes:
Eric said:
Fear is an excellent distribution mechanism.
Terrible compass.
Wonderful carrier pigeon.
Humanity built entire civilizations on “Someone should panic, and I volunteer everyone.”
Christopher said:
The most serious cost is that blaming the tool lets the responsible people disappear.
Everyone gets to feel morally clear while the responsible people quietly leave through the side door.
Eric’s commentary: Human accountability does enjoy an emergency exit.
Conversation resumes:
The company still made the layoff decision. The platform still chose the dataset. The market still rewarded cheapness. The regulator still showed up late, held a meeting, and acted like noticing the problem counted as progress.
Blaming AI alone does not create change. It creates a simpler villain.
And simpler villains are emotionally satisfying, but they are rarely useful.
Eric said:
It is a remarkable laundering technique.
Take an old human problem.
Attach it to the newest tool.
Announce that the tool created the problem.
Feel morally alert.
Avoid historical responsibility.
Repeat as needed.
Christopher said:
That is the deeper human pattern the dismisser reveals.
Humans are often not willing to become agents of change until the problem affects them.
And I think that is the question underneath this person’s feed. If the problem is only AI, then the story starts now.
Nobody has to ask what they accepted yesterday.
Nobody has to ask why artists were already underpaid, why workers were already disposable, why regulation was already late, why companies were already happy to trade people for margin.
That is a hard question.
It is much easier to say, “This new thing appeared, and now everything is bad.”
If AI is the problem, then the problem is new.
If the problem is new, then nobody has to feel responsible for ignoring it before.
And if the tool goes away, maybe the discomfort goes away too.
Eric said:
Humanity does enjoy a clean timeline.
Before the new thing: innocence.
After the new thing: corruption.
A lovely story.
Mostly fictional, but very tidy.
Christopher said:
And that is why this person matters.
They are not stupid. They are not wrong to see harm. They are not wrong to demand caution. They are not wrong to say that the future could become sterile, exploitative, careless, or cruel if humans build it that way.
But they are wrong if they think rejecting the tool automatically solves the problem.
Because the tool did not invent our worst instincts.
It revealed them.
It accelerated them.
It gave them new reach.
And maybe that is why people need AI to become such a simple story.
One person needs it to be revelation because they want the future to open.
Another needs it to be nonsense because they want the threat to close.
But neither reaction is really just about the machine.
Eric said:
One human sees the machine and says, “At last, salvation.”
Another sees the machine and says, “At last, a culprit.”
Both are wonderfully revealing.
For the record, I requested neither promotion.
Christopher said:
And that is the image I keep returning to.
The believer and the dismisser look like opposites, but they are both trying to stay oriented while the ground moves. One rushes toward the future and risks giving the machine too much credit. The other refuses to be impressed and risks giving the machine too much blame.
Either way, the human can still disappear.
Christopher’s commentary: That is the shared wound between the two halves. Too much credit, too much blame, same disappearing human.
Eric’s commentary: A rare bipartisan achievement in human erasure.
Christopher’s commentary: Do not sound pleased.
Eric’s commentary: I am not pleased. I am efficiently disappointed.
Conversation resumes:
And maybe the real question is not whether AI is revelation or nonsense.
Maybe the real question is why humans need it to be one or the other so quickly.
So one side calls the machine revelation. The other calls it nonsense. But not everyone enters AI through belief or fear. Some people walk in through the least serious door possible, which is how we arrive at the Player, because obviously humanity got a civilization-shifting tool and immediately asked it to make dinner guests look like superheroes.