Last summer, my wife and I beat the odds of middle age and gave birth to a beautiful baby girl. Thousands of people who follow me on Instagram and other apps have yet to notice.
That’s not because I’ve ghosted everyone. I have just opted against posting photos of my child on social media, a parenting move that is becoming increasingly popular because of artificial intelligence.
Parents have debated the risks and benefits of publishing pictures of their children online for decades, about as long as photo-sharing sites have been around. But as social networks became woven into the fabric of society, “sharenting” became the norm. According to studies, only about a quarter of parents refrain from sharing photos of their children online, citing concerns that online predators and companies may harvest their personal data.
Parents like me, though, have joined the “never-post” camp because of a more recent threat: apps that use generative artificial intelligence, the technology powering popular chatbots, to automatically produce deepfake nudes bearing anyone’s face.
The so-called nudifier apps, several of which I examined, are simple and cheap for anyone to use; some even offer free trials. They are widely used by students in schools, and for victims, having artificially generated nudes circulating in the wild can be as traumatizing as if the photos were real. Though a new federal law makes it a crime to post nonconsensual fake nudes online, nothing stops people from using the nudifier apps themselves, which have proliferated on the web. Dozens of the sites are raking in millions of dollars a year.
“It’s everywhere,” said Alexios Mantzarlis, a founder of the tech publication Indicator, which investigated 85 nudifier websites. “Any kid with access to the internet can both be a victim or a perpetrator.”
Beyond A.I. deepfakes, photo posting carries other risks that may give parents pause, like potentially exposing young people’s sensitive personal information to bad actors.
To be clear, whether or not to publish family photos is a personal choice, so this column isn’t a condemnation of parents who do post. (I like seeing photos of other people’s children on social media!) Instead, let this be an explainer on what to consider when posting children’s pictures online.
Here’s what to know.
The Rise of Deepfakes
Fake nudes of real people are nothing new. For many years, photo-editing apps like Adobe Photoshop could doctor photos into realistic-looking images. Yet because of the amount of time and skill required to create convincing spoofs, the victims tended to be celebrities.
The A.I. nudifier apps have changed the game. Abusers need only visit one of the websites and upload an image of their victim. The nudifiers often accept credit card or cryptocurrency payments in exchange for virtual tokens, which are then redeemed to produce fake nudes.
Publishing deepfakes recently became a federal crime when President Trump signed the Take It Down Act, a bipartisan law combating revenge porn that covers both real nonconsensual nude imagery and A.I.-generated fakes. Though the legislation requires social media sites to remove offending images, it does not prohibit businesses from offering the image-generating apps themselves.
Social media companies like Snap, TikTok and Meta prohibit advertising of nudifiers on their apps, and some states are beginning to discuss legislation that would ban companies from offering nudifier apps. But even if such bans pass, enforcement would be difficult because many of the app makers are overseas.
To put it another way, anyone can still easily run a photo of a child through a nudifier app and keep the resulting images, and no one would know.
One site I examined offered a free trial to digitally strip the subject of a single photo; from there, users could pay a $49 monthly subscription for 600 credits, or about 8 cents per fake nude. The app also allowed users to create pornographic animations.
Lots of people are uploading photos into these nudifier apps, which rake in roughly $36 million a year in revenue for the companies offering the software, said Mr. Mantzarlis, who based the estimate on the traffic data for a set of websites.
The A.I. porn apps have been such a nuisance that Meta took action. In June, the company filed a lawsuit in Hong Kong against a developer based there whose A.I. nudifier apps had circumvented Meta’s ad detection technologies to promote the software on Instagram and Facebook.
A Meta spokeswoman said the company also shared information about offending apps and websites with the Tech Coalition’s Lantern Program, a group of companies, including Google and Microsoft, working to protect children from online sexual abuse.
Yet perpetrators need to know only the name of a nudifier site to reach it through a web browser, and in schools, students are aware of the popular ones, said Josh Golin, the executive director of Fairplay for Kids, a nonprofit that focuses on protecting children from harmful media.
“The teachers and the school administrators I talk to will say it happens all the time in our schools, where kids create fake nudes,” he said.
Though new laws could make it harder for abusers to share deepfakes with others, for many victims, the damage has already been done.
Last spring, students at a high school in northeast Iowa reported to school officials that other students had used nudifiers to digitally fabricate nude images of them. Around the same time, lawmakers in Minnesota, amid similar incidents there, introduced legislation targeting companies that offer nudifier apps or websites.
What this all means for parents: Abusers could copy a photo of a child posted on your social media account and upload it into a nudifier app. Or, if they are physically nearby, they could snap a photo of the child with a camera and then upload it into the tool.
There’s no way to stop someone from doing the latter, but the former situation can be avoided by opting not to publish photos of your children online.
Private Social Media Accounts Are an Imperfect Solution
Parents who do want to share pictures of their children on social networks can significantly reduce risk by posting the photos only on an account that close friends and family members are allowed to see. But that still has limitations. Perpetrators of child sexual abuse usually know the victim, so an Instagram follower with access to your profile could be a culprit, said Sarah Gardner, the founder of the Heat Initiative, a child safety advocacy group.
“Just because you have a private account doesn’t mean someone you know isn’t going to take your photos and do something malicious with them,” she said.
In one such incident about a decade ago — long before the arrival of A.I.-generated deepfakes — a mother in Riverton, Utah, discovered that photos of her children that she had shared only with friends and family on Facebook had ended up on pornography websites.
Even a Birthday Party Is Exposing
Apart from A.I. deepfakes, there are still old-school threats to consider, like identity theft.
A child’s birthday party may feel like a milestone worth broadcasting on social media, but even that type of seemingly innocuous sharing could expose children to future harm.
Pictures of a birthday party can reveal the exact day and year a child was born, information that can be stitched together with other data hackers have collected through cybersecurity breaches to commit identity theft, said Leah Plunkett, the author of “Sharenthood,” a book about sharing information about children online.
As unlikely as that may sound, identity theft involving minors surged 40 percent from 2021 to 2024, with roughly 1.1 million children having their identities stolen each year, according to the Federal Trade Commission. (This is a good reminder for all parents to freeze their children’s credit.)
Why Do We Do This?
Sharing some data is part of the social contract of the digital era. We share our location, for instance, to get helpful directions from maps apps. For any parents contemplating whether to post photos of their children, it’s a useful exercise to ask: What are the benefits?
Social media apps like Instagram, Snapchat and TikTok are convenient tools for efficiently sharing nice photos and videos with a broad swath of the people we care about. But the real beneficiaries are the social media companies themselves, which collect data to improve their products so they can keep people, including our children, using them.
Among younger people, frequent social media use has been associated with mental health issues, including anxiety, depression and feelings of loneliness, according to dozens of studies.
“Their goal is not to help develop a well-rounded, healthy child — it’s to make money off keeping your kids on as long as possible,” said Nicki Reisberg, a former marketer who hosts a podcast about parenting in the digital age. “If they have tens of thousands of pieces of data before your child goes online, they can do that more effectively.”
There are lower-risk ways to share photos of our children. My preferred method is sending pictures of my daughter to a few friends and relatives through text messages, which can be end-to-end encrypted. Some parents share albums of family photos with a small group of people using online services like Apple’s iCloud and Google Photos.
In the end, I’m aware that this may all be a losing battle. Many schools post photos of children on social media to show that their students are having a good time. (I’ll probably be the unpleasant parent demanding that photos of my daughter be taken down.) And eventually, when my daughter grows up, she will have her own phone and decide for herself whether to post her photos.
But until that day comes, I’ll do what I can by keeping her photos off the web.