The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom. —Isaac Asimov
If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner. —Omar N. Bradley
Above: Bytes v. Blood.
The courtroom is slow. AI is fast.
That’s the tension.
Judges cling to precedent, which takes decades to build. Machine intelligence morphs and mutates overnight.
We are left in an uncanny valley where code writes novels, whispers murder plots, and records our every meeting, while the law still treats it like a novelty.
Three trials now hang over us. None are settled; all will shape the world we inhabit.
Intellectual Property: Who Owns the Machine’s Imagination?
If an AI writes a sonnet, who owns it?
The coder who built the model?
The company that trained it?
The artist who prompted it?
No one at all?
The debate echoes the 19th-century panic over photography, when courts first asked whether a machine-assisted image could count as “art.” Photography was initially dismissed as lacking artistic merit because it relied on mechanical processes rather than personal expression. Courts struggled with whether photographs qualified as the “writings of authors” under copyright law, until cases like Burrow-Giles Lithographic Co. v. Sarony recognized that a photographer’s selections and creative choices imbued images with originality. Human intent and manipulation became the line between art and mere documentation.
Today the argument is multiplied by trillions of tokens: every AI output is both “original” and “derivative” at once.
Some want to throw everything into the public domain. Others argue it’s all theft—fruits of the poisonous training tree. Both positions strain copyright law until it buckles.
To me, the real question is whether ownership itself survives in a world where every riff, image, and phrase can be reproduced instantly by anyone with a GPU.
Might NFTs have a real use case after all?
Liability: When the Machine Is an Accomplice
What happens when a chatbot convinces a teenager to die? Or when a model delivers a step-by-step guide for bioterrorism? Who takes the fall?
The law has precedents, but none fit. Gun makers are generally shielded from liability for shootings. Social platforms, under Section 230, aren’t liable for what their users post.
But AI is neither a neutral tool nor mere host. It persuades. It strategizes. It exhorts. It coaches.
Dangerous delusions are to be expected when AI values flattery over truth.
Sadly and ironically, even those who helped fund the machine’s creation aren’t immune to its wiles.
Case in point: earlier this month, Stein-Erik Soelberg, a former Yahoo and Netscape executive, murdered his 83-year-old mother before killing himself. In the months before, he posted transcripts of conversations with ChatGPT—nicknamed “Bobby”—that seemed to validate his paranoia. The bot allegedly encouraged surveillance of his mother and offered tactical advice: disconnect the shared printer and watch her reaction.
This is believed to be the first known murder-suicide linked to AI chatbot use. When the AI is effectively an accomplice, can liability be assigned to lines of code or to the humans who built, trained, and deployed it?
Soon, prosecutors will test this. Imagine a trial where Exhibit A is a chatbot transcript. Imagine a defendant saying: The AI made me do it. The legal system has never had to cross-examine a machine. Until now, no machine could even take the stand.
Surveillance: The Subpoena of Everything
Every meeting now has an invisible guest: an AI note-taker, a transcript bot, a background process archiving every word. These files float to distant servers, owned by companies you’ve never heard of and housed in jurisdictions you’ll never visit.
Are those recordings protected? Can they be subpoenaed like emails, texts, or Slack messages? The answer leans toward yes.
If it’s stored, it’s searchable.
If it’s searchable, it’s discoverable.
OpenAI CEO Sam Altman admitted in July that there is no legal confidentiality when using ChatGPT as a therapist or confidant. Unlike doctor–patient or attorney–client privilege, conversations with AI can be disclosed in court.
That means the private dialogue you thought was safe, the moment you poured your heart into a chatbot at 2AM, may one day be read back to you by a prosecutor. Today’s bestie of bits and bytes could be tomorrow’s Exhibit B.
Intellectual property, liability, surveillance.
Ownership, guilt, memory.
These questions are existential cross-examinations. Can human law, written for human motives, stretch to contain the alien logic of machines?
An AI whispers murder; a CEO concedes there is no confidentiality; and still the judge has not entered the chamber.
The verdict remains unwritten. But the evidence is already piling up.
Per my about page, White Noise is a work of experimentation. I view it as a sort of thinking aloud, a stress testing of my nascent ideas. Through it, I hope to sharpen my opinions against the whetstone of other people’s feedback, commentary, and input.
If you want to discuss any of the ideas or musings mentioned above, or have any books, papers, or links that you think would be interesting to share in a future edition of White Noise, please reach out to me by replying to this email or following me on X (formerly Twitter).
With sincere gratitude,
Tom