


Millions of Americans are asking artificial intelligence deeply personal questions about their health, marriages, and jobs, and they think it’s private, like talking to a doctor.
But it’s not. No national law protects that data, and without one, everything they’ve typed could be leaked, sold, or subpoenaed.
Today, AI is racing toward a crisis of trust. Not because the machines are too powerful, but because Washington hasn’t done the one thing that truly earns public confidence: protect personal liberty.
It’s time for the Trump administration to push for AI’s version of HIPAA: “AIPAA,” the Artificial Intelligence Prompt Accountability Act.
Just like HIPAA created a national standard to protect your private medical records, AIPAA would protect your AI prompt history — the questions you ask, the ideas you explore, and the sensitive decisions you start processing through these tools.
Trump’s AI action plan: Smart on deregulation, quiet on privacy
In July 2025, the Trump administration released America’s AI Action Plan, a serious, pro-innovation road map that recognizes AI as an engine of economic freedom.
It rightly calls for deregulation and accelerating AI deployment across industries. But it says little about the privacy panic brewing around AI prompt data, such as the deeply personal, often medically or legally sensitive questions Americans now type into these tools every day.
Without a clear framework, the risk isn’t overregulation; it’s overreaction: fear, lawsuits, or leaks that scare Americans away from the tools they need to learn and compete.
When prompting feels like confessing
AI is where people now think: quietly, privately, and with more honesty than most realize.
“Do I have a case?” “Is this a symptom of cancer?” “What should I say to my kids if we get divorced?” “How do I write a résumé after twenty years out of the workforce?”
These aren’t search queries. They are digital confessions, and no law says they’re protected right now.
Worse, most Americans don’t know their chats could be discoverable in court, scraped by third-party vendors, or used to train future models without consent. Legal analysts are already warning that AI chats could become evidence in civil litigation or government investigations unless protections are established.
AIPAA: The Artificial Intelligence Prompt Accountability Act
AIPAA would treat all prompt data as private by default, unless users choose to opt out.
AI companies would be required to:
- Treat prompts as confidential communications.
- Obtain clear user consent for data use.
- Disclose third-party access policies.
- Report breaches involving prompt data.
This isn’t red tape; it’s freedom-enhancing clarity. It’s a consumer-first standard that says your thoughts and your learning process belong to you.
Mass adoption requires mass trust
The only way America wins the AI race is with a population that knows how to use it. That won’t happen through enterprise software alone. It starts at home, on a phone, in a quiet space where confidence is built.
AI isn’t just the next smartphone. It’s the next printing press — a tool that gives everyday Americans the power to learn, write, build, and share faster than ever.
- Fifty-five percent of Americans now use AI in their personal or professional lives.
- Adoption is outpacing the internet and PCs.
If Americans don’t feel safe using AI, they won’t use it, and they won’t adapt or compete.
That’s how a workforce gets left behind: not from laziness, but from a lack of trust.
One big, beautiful step left on AI
Following the passage of the One Big Beautiful Bill Act, the Trump administration has smartly prioritized innovation. Now it must take the next step: a privacy-first policy that makes consumers feel safe to learn, grow, and question using AI.
Only a Trump-led coalition can draw the right line: not surveillance capitalism and not Euro-style overregulation, just a basic American promise that your thoughts are yours.
Don’t let this become the next nuclear meltdown
We’ve seen this before: a promising technology, a nervous public, a single breach, real or perceived, and decades of mistrust and lost opportunity.
If Americans see their chats leaked, hacked, or dragged into court, it won’t just hurt one company. It could freeze adoption nationwide and derail the upskilling movement we need to stay competitive.
Don’t let AI go the way of nuclear power: safe, revolutionary, and sidelined by fear.
With AIPAA, the Trump administration could lead on trust, not just deregulation. It could unleash the most important American learning tool since the first library card.
Bryan Rotella is the managing partner and chief legal strategist of GenCo Legal.