Enough playing hide-and-seek with children’s safety online

Donna Rice Hughes, opinion contributor

While conducting research in Germany, former Meta researcher Jason Sattizahn learned that a child under 10 had been sexually propositioned numerous times in Meta’s virtual reality product. Normally, this would be cause for grave concern, and immediate action would be taken to keep other children from suffering the same harm. Instead, Sattizahn’s boss told him not to include that information in his report.

Another former Meta researcher, Cayce Savage, said that children using Meta’s VR device “will come in contact with” or be “directly expose[d] [to] something very inappropriate.” She noted that she sees the issue every time she uses the headset.

Their testimony at a recent hearing and an exposé in The Washington Post confirm that Meta hid crucial data and knew about major safety issues in its virtual reality products.

This is astounding news and confirms every parent’s worst nightmare about their children being exploited online. VR is intentionally realistic, posing an even greater threat to children. Brain science shows that a child’s prefrontal cortex is not fully myelinated until the mid-twenties. The prefrontal cortex is the brain’s braking system, yet in childhood and adolescence the accelerator system runs at full tilt.

During the hearing, Savage said that for children, wearing virtual reality headsets “completely obscures your vision and your hearing so you can no longer see the real world.” This leads to real-world vulnerability and potentially “heightened” experiences.

“When you experience things in VR, they feel meaningfully more real, psychologically more real than if you were to experience that same thing on a television. You also have an avatar that you are embodying, and which research shows us you identify with. So, if something happens to your avatar, it feels like it’s happening to you,” Savage said. 

She also noted that an open-world VR experience is “a social space, so there are other users with bodies that can corner you, that can surround you, that can touch you, and folks can also speak to you.” She said many parents she spoke with were not aware that their children were interacting with strangers on Meta VR.

It’s time for Meta to stop playing hide-and-seek with children’s safety. As a multibillion-dollar Big Tech giant that other tech companies often follow, Meta must shift to prioritizing children’s online safety by implementing safer-by-design technologies and policies before its products are introduced into the marketplace.

But its track record indicates that it won’t. The Meta whistleblowers testified that the company is “aware that these children are being harmed in VR,” but it “chooses profit over safety,” and “has chosen to ignore the problems they created.”

The lack of prioritization of child safety extends beyond Meta. A new lawsuit filed against OpenAI by the parents of Adam Raine, a teenage boy who committed suicide, alleges that “ChatGPT actively helped Adam explore suicide methods.” The claims are shocking and disturbing.

According to the court papers, Raine’s AI “companion” went from helping with schoolwork to becoming his “suicide coach.” These digital chatbots are created to listen, empathize, mirror and support youth and can be harmful digital enablers. While OpenAI is now beginning to implement initial guardrails, in Raine’s case, it’s too little too late.

Our children cannot continue to suffer from the negligence behind Big Tech products unleashed on them without critical safety mechanisms. Congress must act quickly to hold these companies responsible and accountable.

It’s encouraging that the Senate has worked in a bipartisan manner to advance policies to hold tech platforms accountable and responsible. It’s long overdue for Congress to expedite the passage of the Kids Online Safety Act, originally introduced three years ago by Sens. Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.), to ensure OpenAI, Meta and other tech companies prioritize online child safety.

Protecting children online is also about presidential leadership. President Trump is poised to make the internet safe for children and families for the first time in history, and he can do this through the presidential appointment process, directives, policies and a robust legislative agenda. 

Elected officials must be proactive and intentional as they craft a vision and enact policies on artificial intelligence and technology overall. They also must not be intimidated by the tech industry’s ongoing lobbying against accountability, as evidenced by the recent pledge of $200 million to fund two AI super PACs.

Our children’s lives are worth fighting for. Will our elected leaders act?

Donna Rice Hughes, president and CEO of Enough Is Enough, is an author, speaker, media commentator, producer, Emmy-nominated host of the Emmy Award-winning PBS Internet Safety series and host of the podcast “Internet Safety, with Donna Rice Hughes.” Under her leadership, Enough Is Enough created the Internet Safety 101 Program with the U.S. Department of Justice.