What JFK Can Teach Us About AI
A House of Prompts Cannot Stand
Invert, always invert.
—Carl Gustav Jacob Jacobi
If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner.
—Omar N. Bradley
Of the many memorable lines JFK uttered in his far too short life, one simple declamation stands out.
It came on a frigid January day in 1961:
“Ask not what your country can do for you; ask what you can do for your country.”
This bit of rhetorical judo was as simple as it was profound. In a way no president had before and no president has since, it flipped citizens from extractive customers into essential contributors.
It’s tempting to treat that sentence as a museum piece or nice sentiment, something we nod at on Presidents’ Day and then return to our private lives. But I keep thinking it still works as a kind of moral technology: a way to reorient our posture toward whatever sits upstream of our daily habits.
And right now, one thing sits very, very far upstream of all else: AI.
Most of the conversation about AI is framed like a shopping question: What can it do for me?
Can it write my emails, summarize my meetings, generate my ideas, plan my life, fix my mood, optimize my diet, earn my money, make my art, replace my effort, spare me the friction of thinking?
We are building a civilization of prompts that is steadily removing the burden—no, the privilege—of being human.
However, the true promise of AI isn’t that it can take thinking away from us, but that it can restore thinking in the age of distraction; that it can free us to ask better questions, focus on more pressing issues, order our attention and aim our reason toward those things that deserve both.
All this depends on fortitude in the face of the pernicious temptation to use AI as a trifling assistant instead of a powerful partner.
So here’s my proposed inversion:
The JFK Theory of AI — Ask not what AI can do for you, but what you can do with AI.
The “with” is doing most of the work in that sentence. It changes the relationship entirely.
It takes AI out of the role of servant (or substitute) and puts it into the role of collaborator—an instrument that amplifies intention, not a machine that replaces it.
A recent essay I read put it cleanly:
Energy and compute are being transformed into something rarer than output. They are becoming insight… Progress here is not a contest between humans and machines. It is an expansion of collaboration. We are not merely solving problems more quickly. We are learning to ask better ones.
Therein lies the shift: from output to insight.
The most important thing AI is doing—quietly, steadily—isn’t answering more questions, but expanding the aperture of questions we dare to ask.
Not just “what should I write,” but: what am I actually trying to say?
Not just “how do I win,” but: what would make this worth winning?
Not just “what’s the best argument,” but: what would persuade me if I were the other person?
Not just “what should I do next,” but: what am I optimizing for, and why?
If we use AI as a shortcut machine, we get more shortcuts—more content, more churn, more synthetic sameness (i.e., slop).
If we use AI as a clarity machine, we get something else: sharper questions, better taste, deeper thought, faster learning, more honest self-confrontation.
And in an era when everything is trying to make us shallow—scrolling, reacting, swiping, outsourcing—depth becomes a form of public service not only for your country, but also for your progeny.
If millions of people use AI to become slightly lazier, slightly more derivative, slightly more allergic to effort, we don’t just get worse art and worse emails.
We get worse citizens.
We get thinner discourse, weaker judgment, shorter attention spans, and a general collapse of intellectual self-reliance—the mental version of a society that can’t change a tire because it has always called AAA.
But if millions of people use AI to become slightly more thoughtful—more rigorous, more curious, more capable of steelmanning an opponent, more willing to do hard things—then AI becomes something like a civic instrument.
A tool that increases the surface area of intelligence available to the public square.
A technology that can help transform the tragedy of the commons into shared flourishing.
So what does it look like to “do something with AI,” instead of asking what it can do for you?
It looks like refusing to let it substitute for taste, or to let the crutch become a prosthetic.
It looks like using it to interrogate your assumptions, not decorate them.
It looks like treating it as an intellectual interrogator—something that forces specificity, exposes vagueness, and pressures you into coherence.
It looks like asking it to generate ten strong counterarguments you don’t want to hear.
It looks like feeding it your draft not to “make it better,” but to ask: Where am I lying to myself? Where am I being lazy? What claim am I making without evidence?
It looks like using it to become harder to manipulate.
In toto, it looks like turning AI outward—not just inward.
Using it to help a friend understand a diagnosis.
To write a clearer complaint to a landlord.
To explain a confusing bill, blood test, or legal document.
To build a simple dashboard for a local nonprofit.
To draft the first version of a clear, concise public notice.
It enhances coherence and capability.
It does not add to confusion and chaos.
The JFK Theory of AI is a refusal to confuse fluency with wisdom, slop with substance, or appearance with reality.
It is an admonition and an opportunity all in one—AI should not think for us. It should make us think.
If we’re not careful, we’ll use AI the way people use elevators: to avoid the stairs even when the stairs would have made us strong.
But if we’re intentional, we can use AI the way people use a gym: a machine-assisted environment designed to stress the mind, body, and soul in the right ways—so we adapt upward and aim higher.
In this way, the right posture toward AI is neither awe, nor fear, nor dependence.
It’s respect, reason, and diligence so that we steward this awesome responsibility with our God-given gifts of prudence, justice, temperance, and fortitude on behalf of our fellow man.
The question that will define the next era isn’t “how smart did the models get?” but “what did they make us become?”
And that part, for better or worse, remains on us.
Per my about page, White Noise is a work of experimentation. I view it as a sort of thinking aloud, a stress testing of my nascent ideas. Through it, I hope to sharpen my opinions against the whetstone of other people’s feedback, commentary, and input.
If you want to discuss any of the ideas or musings mentioned above or have any books, papers, or links that you think would be interesting to share on a future edition of White Noise, please reach out to me by replying to this email or following me on X.
With sincere gratitude,
Tom



