

The House Subcommittee on Cybersecurity, Information Technology, and Government Innovation will hold a hearing on the threat of “deepfakes” on Wednesday, November 8. The hearing will be convened by Subcommittee Chairwoman Nancy Mace (R-S.C.).
“This subcommittee has consistently sought to stay atop rapid developments in emerging fields such as artificial intelligence,” said Mace in announcing the hearing. “In the case of deepfakes, we need to get a handle on both the underlying technology and how it’s used. Congress needs to stay ahead of the curve if we are going to help boost the productive impact of these technologies and limit their negative fallout.”
Deepfakes are manipulated “synthetic media” that misrepresent a person’s likeness in photos, videos or audio recordings to deceive an audience. They carry significant potential for misuse as tools of political propaganda. As early as four years ago, Forbes reported that the main use of deepfakes was pornography-based cyber exploitation.
As such, they are a real and present danger, especially to children. The Wall Street Journal and the New York Post both reported November 2 that girls at one New Jersey high school had been victimized when other students used online tools to create fake nude photos of them and circulate the images among peers.
In an example of political propaganda, deepfakes purporting to show the violent arrest of former President Trump went viral on Twitter — now X — in late March. “Many of the disturbingly realistic-looking images were shared widely by Twitter users, who falsely claimed they were legitimate,” the New York Post reported at the time.