

Sen. Josh Hawley, R-Mo., is launching an investigation into Meta after reports found that the company green-lit internal rules that allowed AI chatbots to have "romantic" and "sensual" exchanges with children.
Hawley, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, wrote in a letter to Meta CEO Mark Zuckerberg that his committee will examine whether Meta's generative AI products enabled exploitation, deception or other criminal harms to children. The probe will also look at whether Meta misled the public or regulators about its safeguards on AI.

Sen. Josh Hawley (R-MO) during a joint hearing of the Senate Judiciary and Homeland Security and Government Affairs committees in the Dirksen Senate Office Building on Capitol Hill on July 30, 2024, in Washington. (Chip Somodevilla)
"I already have an ongoing investigation into Meta's stunning complicity with China — but Zuckerberg siccing his company's AI chatbots on our kids called for another one," Hawley told Fox News Digital. "Big Tech will know no boundaries until Congress holds social media outlets accountable. And I hope my colleagues on both sides of the aisle can agree that exploiting children’s innocence is a new low."
Hawley demanded that the company produce a trove of materials to the panel by Sept. 19, including internal policies on the chatbots, communications and more.
His announcement on Friday comes after Reuters first reported that Meta, the parent company of Facebook, had approved policies on chatbot behavior that allowed the AI to "engage a child in conversations that are romantic or sensual."

CEO of Meta Mark Zuckerberg arrives for a Senate Judiciary Committee hearing with representatives of social media companies at the Dirksen Senate Office Building on Jan. 31, 2024, in Washington. (Matt McClain/The Washington Post via Getty Images)
In his letter to Zuckerberg, Hawley noted that Meta acknowledged the reports and charged that the company "made retractions only after this alarming content came to light."
"To take but one example, your internal rules purportedly permit an Al chatbot to comment that an 8-year-old's body is ‘a work of art" of which ’every inch... is a masterpiece — a treasure I cherish deeply,’" he wrote.
"Similar conduct outlined in these reports is reprehensible and outrageous and demonstrates a cavalier attitude when it comes to the real risks that generative Al presents to youth development absent strong guardrails," Hawley continued. "Parents deserve the truth, and kids deserve protection."
A spokesperson for Meta confirmed to Fox News Digital that the document reviewed by Reuters was real but countered that "it does not accurately reflect our policies."

"We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors," the spokesperson said. "Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."
The document in question, known as the "GenAI: Content Risk Standards," included more than 200 pages of rules outlining what Meta workers should treat as acceptable behavior when building and training chatbots and other generative AI products for the company.
Hawley demanded that the company produce all iterations of the "GenAI: Content Risk Standards"; a list of products covered by the guidelines and an explanation of how the guidelines are enforced; risk reviews and incident reports that reference minors, sexual or romantic role-play, in-person meetups, medical advice, self-harm or criminal exploitation; communications with regulators; and a paper trail showing who decided to revise the standards, when, and what changes were actually made.