At a major global privacy summit in Washington, D.C. this week, OpenAI CEO Sam Altman offered a controversial stance on the future of artificial intelligence regulation: it’s too soon to implement any serious privacy protections.
Speaking at the International Association of Privacy Professionals (IAPP) Global Summit, Altman argued that AI is moving too fast for policymakers to predict its societal impacts. “Dynamic response is the only way to responsibly figure out the right guardrails for new technology,” Altman claimed. In essence, he said, society should wait for problems to emerge — and then react.
“The right thing to do is to watch this incredible new wave fall out and respond very quickly as the problems emerge,” Altman stated.
Altman acknowledged a profound concern that is already here, not theoretical: many people confide their deepest personal struggles to AI systems like ChatGPT. Yet, unlike conversations with doctors, therapists, or lawyers — which are protected under confidentiality and legal privilege — there are currently no privacy guarantees when someone shares personal information with an AI.
Despite highlighting this vulnerability, Altman offered no formal roadmap for solving it. Instead, he deflected the burden onto “society,” suggesting that it should figure out new frameworks as events unfold.
“We don’t have [privilege] yet for AI systems, and yet people are using it in a similar way,” Altman said. “Society will have to come up with a new sort of framework.”
He doubled down on this reactive model, emphasizing “a very tight feedback loop” — meaning companies will watch how AI impacts users and then make adjustments. Altman was notably evasive when asked directly what “privacy” means to him personally, answering only, “I would be too shy to say that in this room.”
Lawmakers and Regulators Weigh In
Altman’s comments coincided with an ongoing effort by U.S. lawmakers to draft a comprehensive federal privacy bill — one that would likely touch on AI regulation.
Evangelos Razis, a staffer from the House Energy and Commerce Committee, confirmed that an all-Republican working group is now considering AI’s place in a broader data privacy framework. Razis admitted that while the committee is “not gunshy” about addressing AI risks when they’re clear, the overall priority is still to turbocharge American innovation, suggesting regulation will be light-touch, at least initially.
“The stakes of getting a pro-innovation regulatory agreement right are high,” Razis said, adding, “the priority, the presumption, is how can we pour gasoline over the fire of American innovation?”
In other words, the political appetite currently leans toward protecting the explosive growth of AI companies, not necessarily protecting citizens from the privacy erosion, manipulation risks, and exploitation models emerging alongside that growth.
Context: A Dangerous Assumption
Altman’s position embodies a growing philosophy within Silicon Valley — one that says innovation must be allowed to run free even if it outpaces societal safeguards. The assumption that problems can be “quickly” corrected once they appear has already proven naive in past tech cycles, whether it was with social media disinformation, algorithmic bias, deepfake proliferation, or surveillance capitalism.
Each time, regulators were slow, damage was deep, and the companies profiting from these disruptions moved faster than public institutions could react.
The idea that privacy protections should wait until after mass exposure, exploitation, or emotional damage occurs reflects a broader willingness to treat human lives as beta test subjects in a high-stakes technological experiment.
OpenAI — now operating under a quasi-corporate, quasi-nonprofit model with heavy investment ties — has positioned itself as a public steward of safe AI development. Yet, with Altman’s latest comments, it becomes clearer: user privacy is not being treated as a proactive responsibility — it’s being treated as an acceptable casualty of progress.
THE REALIST ANALYSIS
The Real Stakes:
Generative AI systems are already receiving deeply personal confessions, medical questions, legal queries, relationship counseling pleas, and emotional breakdowns — without any enforceable data protection mechanisms. Users are treating AI like a trusted confidante. But legally, AI companies owe them no confidentiality, no rights of erasure, and no human dignity in handling that information. This is not a pending risk — it’s a live, ongoing privacy breach operating under the radar.
The Future Moves:
Expect the first major AI data privacy scandals to erupt in the next 12 to 24 months, involving:
- Sensitive conversations leaked.
- AI systems subpoenaed for private chats.
- Behavioral manipulation using harvested psychological data.
- AI models trained on confidential user queries without consent.
Once again, as in the social media era, the platforms will say, “We never promised privacy” — and users will realize too late that the warnings were buried under the glitz of innovation.

Well, the damage is already happening. I read about Samsung banning the use of ChatGPT group-wide because employees mindlessly entered proprietary data into it, which then leaked… Once the milk is out of the can…
You’re absolutely right, John. Once that kind of data gets out, there’s no putting the lid back on. Companies are realizing too late that these tools aren’t just productivity boosters — they’re potential liabilities when used carelessly. The Samsung case is a perfect example of why we need strong internal safeguards, not just blind adoption.
What? Privacy rules should wait until after the damage is done? Has the man lost his mind?
I’m curious what the take from some of those at the International Association of Privacy Professionals (IAPP) Global Summit was. Did you hear any blowback, John, or were these comments mostly accepted?
You stated: “Altman’s position embodies a growing philosophy within Silicon Valley — one that says innovation must be allowed to run free even if it outpaces societal safeguards.” I don’t see how anyone can’t see that this is a dangerous philosophy.
I did look up the keynote speakers at the conference and my ignorance on the subject is evident because Altman is the only one I’ve ever heard of. Still, letting the train run down the tracks without any brakes will generally lead to disaster.
Thanks for the thoughtful comment, Chris — you’re exactly right to be concerned.
There was definitely a current of unease running through parts of the International Association of Privacy Professionals (IAPP) Summit. While Sam Altman’s comments didn’t get openly booed or attacked during the session, you could tell — in the tone of follow-up panels and side conversations — that not everyone was comfortable with the “wait for damage, then regulate” philosophy he promoted.
A lot of professionals in that room have spent their careers trying to prevent disasters, not clean them up.
You’re also dead on about the dangerous mindset brewing inside Silicon Valley.
There’s this growing arrogance that society should accommodate innovation rather than innovation being accountable to society.
It’s the equivalent of launching a high-speed train through a city before anyone’s finished laying the tracks — and then acting surprised when it crashes into the neighborhoods.
The truth is:
Once foundational privacy is broken, it can’t really be “unbroken.”
You can pass laws afterward.
You can fine companies afterward.
You can wring your hands afterward.
But the damage — to trust, to individuals, to democracy itself — is already permanent.
You’re not ignorant at all, Chris.
In fact, noticing that Altman was one of the few recognizable names just shows you that this space — privacy, AI ethics, digital human rights — is still being treated like an “inside baseball” issue by elites who prefer it to stay obscure.
That’s by design.
Because the less the public knows, the easier it is for them to drive innovation at reckless speed without anyone demanding brakes.
Hi John, and thanks for sharing. It is good to hear that a current of unease ran through parts of the International Association of Privacy Professionals (IAPP) Summit and that not everyone was comfortable with Sam Altman’s ideas.
At the same time, the elites’ desire to keep these issues obscure is sobering. Thanks again for sharing this. I don’t have any idea where I would have seen it otherwise unless I was actively looking for it.
You’re welcome, Chris —
There was definitely a current of unease at the summit — not loud, but real. You could feel that not everyone was on board with Altman’s “wait for the damage” approach.
And you’re right — the fact that these conversations are happening in quiet corners instead of open platforms is no accident. Obscurity protects control. The less people know, the easier it is to push the agenda before anyone gets a chance to challenge it. Thanks again, Chris! I hope you have a great night. 😎