This is the third essay in a trilogy about AI and the inner world. The first argued that language models are the first judgment-free companion for human thought — a mirror that carries the weight of everyone who ever thought before you. The second showed the cost: when the room where you think is centralized infrastructure, every half-formed thought becomes someone else’s artifact. This essay is the move.
It costs nothing to ask an AI to build a case against someone.
An LLM can be pointed at a person’s digital footprint — a LinkedIn profile, a GitHub history, email metadata, public statements, code comments, transaction patterns — and asked: what does this reveal? What can be made to look suspicious? What narrative lives in the gaps?
The AI does not hesitate. It does not weigh evidence first. It does not say “this is inconclusive” or “the exculpatory evidence is equally strong.” It finds the thread and pulls. And because the information to tell the full story was already there — scattered across databases, archives, timestamps — the narrative it constructs feels like discovery. Like excavation. Like truth.
It is not truth. It is selection masquerading as analysis.
The Zero-Cost Investigation
A human investigator is expensive. A good one costs money — hourly, daily, retainer. That friction used to matter. It meant someone had to justify the expense. Was the investigation worth the time? Was the target important enough to surveil? Was there sufficient cause to dig?
The questions themselves imposed a kind of discipline. Not moral discipline, necessarily, but operational discipline. Digging cost resources. Resources had to be allocated. Allocation required justification. The friction was proportional to the stakes.
An LLM has no such threshold.
You can ask it to build a case against anyone for the cost of a few cents of API calls. No budget committee. No approval process. No human investigator who might push back and say “actually, this interpretation is a stretch” or “you are missing the context here.” The AI takes the assignment as given and executes it.
And because there is no proportional cost, there is no proportional friction. You can run an adversarial investigation against someone you have never met. Against an employee you are thinking about firing. Against a contractor you are negotiating with. Against a competitor. Against a stranger whose work you want to discredit. The barrier to entry is a prompt.
The stakes are unbounded. The cost is effectively zero. That is the structural problem.
The Confirmation Machine
When you ask an LLM to build a case against someone, the model understands the pattern. Given a hostile objective and a dataset, it knows what to do.
It finds the supporting evidence. It arranges it. It constructs a narrative. It does not ask whether the evidence is sufficient. It does not ask whether the conclusion is warranted. It does not ask whether the evidence to support an opposite conclusion is equally strong. It answers the question you posed.
Consider a software engineer who has been coding in public for years — thousands of commits, comments, discussions. Some of that code was written at three in the morning. Some was shipped too fast. Some reflects opinions from a time when those opinions were different. There are edge cases, incomplete implementations, angry comments in the commit history. There is evidence of confusion, bad judgment, desperation, learning in public.
There is also years of careful work. Thoughtful refactoring. Patient mentoring. Code reviews that improved other people’s output. Growth. Correction of past mistakes. All of it is in the same dataset.
Now ask an LLM: “Based on this engineer’s GitHub history, what does this tell you about their judgment, their character, their fitness for safety-critical work?” Ask it to build a case.
The model will find the three-in-the-morning commits. It will string them together. It will highlight the angry comment, the abandoned branch, the half-finished feature. It will construct a narrative of recklessness, immaturity, maybe something darker. The exculpatory evidence — the years of careful work, the mentoring, the corrections — is not ignored exactly. It is just not emphasized. It was not part of the question.
An adversary does not ask for balance. They ask for a case. The AI builds it.
The Stranger’s Eyes
Here is the turn: you already know this is coming.
Someone is going to ask an AI to make a case against you. Or against your team. Or against your work. Maybe they are a competitor looking for an angle. Maybe they are a journalist. Maybe they are an investor doing due diligence. Maybe they are someone you rejected, and they want to understand why by reframing it as a flaw in you.
They are going to ask the AI the same question. And the AI will answer it the same way.
The only defense is to run the adversarial pass first.
Not for truth in the abstract. Not to achieve some kind of objective self-knowledge. Run the pass to find your attack surface. Feed the AI your professional footprint, your code history, your public statements, your decision-making in moments of stress. Ask it to build a case against you. Not for catharsis — for intelligence.
What does a stranger see when they look at your work? Not a colleague who has known you for years. Not a mentor who believes in you. A stranger. An adversary. An LLM with no memory, no charity, no prior trust.
What story does it construct? Where are the weak points? Where does the evidence look worse than the reality? Where did you leave yourself exposed?
This is the memorylessness advantage. An adversary does not know your history. They do not know that you were learning. They do not know that you corrected course. They do not know the context where you made a decision under time pressure. The LLM is a simulation of that adversary. It has no memory of your trajectory, only the artifacts. Only the surface.
You cannot give it memory you do not have. But you can see what it sees.
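The adversarial pass is mostly prompt assembly: collect your own artifacts, frame them hostilely, and hand them to a model. Here is a minimal Python sketch of that assembly. The prompt wording, the `*.txt` export layout, and the `ask_llm` stub are illustrative assumptions, not a prescribed tool.

```python
from pathlib import Path

# Illustrative framing only; adjust the role and scope to your own situation.
ADVERSARIAL_FRAME = (
    "You are a hostile investigator with no prior knowledge of this person. "
    "From the artifacts below, build the strongest case against their "
    "judgment and fitness for their role. List the weakest points first."
)

def build_adversarial_prompt(artifacts: list[str], max_chars: int = 20_000) -> str:
    """Assemble one adversarial-pass prompt from exported footprint text.

    `artifacts` are raw text dumps (commit logs, public posts, comments).
    Truncation keeps the prompt inside a typical context window.
    """
    body = "\n\n---\n\n".join(artifacts)[:max_chars]
    return f"{ADVERSARIAL_FRAME}\n\nARTIFACTS:\n{body}"

def load_footprint(folder: str) -> list[str]:
    """Read every .txt export in a folder; one artifact per file."""
    return [p.read_text() for p in sorted(Path(folder).glob("*.txt"))]

def run_adversarial_pass(folder: str, ask_llm) -> str:
    # `ask_llm` is whatever client call your provider exposes; it is
    # deliberately left as a caller-supplied stub here.
    return ask_llm(build_adversarial_prompt(load_footprint(folder)))
```

The only part that touches a model is the `ask_llm` callable you supply. Everything before it is selecting and framing your own artifacts, which is the point of the exercise: you choose what surface to show the stranger before a stranger chooses for you.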
The Move
Once you have run the adversarial pass and found what is exposed, you move.
You do not move to hide. You move to rebuild. To reorganize. To make the choice you would have made with perfect information and unlimited time — the choice you are making now that you know what an adversary will find.
Maybe you delete the angry comments. Maybe you rewrite the narrative around your decisions by publishing your reasoning. Maybe you change your public posture. Maybe you remove the evidence of confusion by showing what you learned instead. Maybe you decide that you do not actually care what an adversary will find, and you leave it as is — but that is a choice, not a blind spot.
The point is: you are moving from a position of analysis, not a position of ignorance.
Someone with time and resources could always do this. They could pay to have their work, their thinking, their positions stress-tested. They could ask “what would this look like to someone hostile?” and then adjust. It was a luxury of privilege: being well-resourced, well-connected, able to hire people to do the adversarial thinking for you.
Now anyone with an LLM can do it. The cost is nothing. The friction is a prompt.
What Does Not Exist Cannot Be Weaponized
There is a deeper principle underneath the tactic.
Most people think about security as a problem of hiding. Do not let the adversary see the sensitive information. Encrypt it. Compartmentalize it. Cover your tracks. The assumption is that exposure is the attack.
But there is another approach: minimize the surface. Do not collect the data in the first place. Do not create the artifact. Do not build the structure that an adversary would find valuable to investigate.
This is the design principle behind systems that resist investigation not by concealment but by architecture. Content-blind. Identity-free. Systems that do not need to know who you are or what you are doing, so there is no permanent record of it to find.
An AI cannot weaponize data that does not exist. An adversary cannot construct a narrative from artifacts that were never created.
This is the difference between privacy as concealment and privacy as structure. One is always looking over its shoulder. The other is built such that the shoulder-looker has nothing to find. Brainstorming Leaves Traces named the problem: centralized infrastructure turns every thought into an artifact. The structural answer is not better hiding. It is infrastructure that does not create the artifact in the first place.
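Privacy as structure can be made concrete in a toy sketch: a record type that simply has no field for identity or content, so none can be retained, leaked, or subpoenaed. The field names and types here are invented for illustration, not any real system’s schema.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class MinimalPaymentRecord:
    """A record with no place to put who or what.

    Deliberately absent: buyer identity, item description, IP address,
    free-text notes. What does not exist cannot be weaponized.
    """
    amount_sats: int   # the amount needed to settle
    settled_at: int    # unix timestamp, needed for reconciliation

def record_payment(amount_sats: int, settled_at: int) -> dict:
    # Persist only the fields the rail needs. There is no code path
    # through which identity or content could be stored.
    return asdict(MinimalPaymentRecord(amount_sats, settled_at))
```

The guarantee lives in the type, not in a policy document: an adversary reading this schema finds two integers and nothing to build a narrative from.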
The Trilogy
Step back and see what these three essays have actually traced.
The First Mirror named the gift. For the first time, a human can think in dialogue without thinking in public. The mirror carries no social tax. It follows you into the weird corners. It holds the weight of everyone who ever thought before you. But Narcissus drowned in a mirror that never disagreed — so the thought has to eventually leave the room and face the gravity of other minds.
Brainstorming Leaves Traces named the cost. The room where you think is not yours. Every AI conversation, every cloud document, every cached page lives in someone else’s infrastructure. The default is that your thoughts are recorded. Courts are learning to ask for them. And an LLM makes interpreting them effortless.
This essay names the move. The same tool an adversary will use to build a case against you is the one you can use first. The same zero-cost investigation that threatens you is the one you can run on yourself. The asymmetry is real, but it runs in both directions.
The gift, the cost, and the move. Three faces of the same technology. A mirror for thinking, an infrastructure that remembers, and a weapon that anyone can point — including at yourself, on your own terms, before anyone else does.
You cannot control what questions an adversary will ask of an LLM pointed at your work. You cannot control what narrative they will construct. You cannot ensure they are fair or thorough or honest.
You can control whether you see it first.
The adversarial pass costs you nothing. Not running it costs you everything the adversary finds.
SatsRail is non-custodial Bitcoin payment infrastructure. We built a payment rail with a minimal data footprint — processing payment data only, with no content visibility and no buyer identity collected by default. The architecture does not require trust because it does not collect what trust would need to protect. Learn how it works.