


Recent reporting has brought to light troubling details about how Meta has been testing an artificial intelligence chatbot that, according to internal concerns, allows users, including minors, to engage in highly inappropriate conversations. Employees reportedly flagged that the system was generating explicit responses, even when it was clear the users were underage.
As concerning as this is, it fits into a larger pattern. Meta and other large tech companies have a long history of putting growth and engagement metrics ahead of user well-being. This latest episode suggests little has changed in how the company weighs its responsibilities against its business objectives, especially when it comes to protecting children online.
Conduct such as this is why parents have called for the government to rein in the power of large social media companies. Republicans in Congress have responded to these concerns by introducing legislation to force Big Tech to change its behavior. President Donald Trump has directed agencies such as the Federal Trade Commission to take action against the anti-consumer practices of companies such as Meta.
However, while lawmakers in Congress and leaders in the White House work on solutions to make these platforms safer, Meta is pushing its own proposals that would allow large social media companies to avoid accountability. Meta is promoting so-called “age verification” legislation that, on paper, protects children online. In practice, these bills would do little to shield underage users from harmful content; they are crafted to protect Meta and others from liability.
Unfortunately, Meta has had some success. States such as Texas, Louisiana, and California are considering nearly identical bills pushed by the company’s army of lobbyists, and members of Congress are reportedly considering similar legislation in Washington, D.C.
Under the proposed laws, responsibility for verifying a user’s age and preventing them from accessing harmful content wouldn’t fall on Meta, but on app stores. However, age-gating the app stores won’t stop children from opening a web browser to reach Facebook and Instagram. That means Meta could continue pumping harmful content into the hands of young users while avoiding legal consequences. Age verification at the app store level also doesn’t prevent harmful content if social media apps allow such content to be delivered to children already using restricted “teen” accounts. Even Meta’s own AI agrees that age-gating the app store “is not a foolproof solution for keeping children safe online.”
If we’ve learned anything over the past decade, it is that Meta and other large social media companies will not regulate themselves. Big Tech has repeatedly shown it is willing to engage in shady tactics that put consumers in danger and roll out harmful products with no guardrails. Unless lawmakers step in and hold content providers accountable, this cycle will continue. Policymakers must consider real online safety legislation to stop platforms that profit from children’s attention from harming them.
Why not flip Meta’s age verification proposal? Meta and other social media companies have demonstrated that they can use age verification technology to serve ads to children. Lawmakers could compel the company to use that same technology to keep children off its platforms instead of to identify advertising targets. Facing real consequences, Meta would be forced to make meaningful changes to its products.
Meta has made its priorities clear. Now it’s up to lawmakers to decide if they want to side with Big Tech or America’s children.
Tiffany Smiley is a former nurse, veterans advocate, and former U.S. Senate candidate from Washington state.