THE AMERICA ONE NEWS
Jun 26, 2025  |  
0
 | Remer,MN
Sponsor:  QWIKET 
Sponsor:  QWIKET 
Sponsor:  QWIKET: Elevate your fantasy game! Interactive Sports Knowledge.
Sponsor:  QWIKET: Elevate your fantasy game! Interactive Sports Knowledge and Reasoning Support for Fantasy Sports and Betting Enthusiasts.
back  
topic
Brian J. Dellinger


NextImg:Google’s Report on AI Abuse Isn’t Comforting

Over the past two years, reports of generative AI (GenAI) abuse have become commonplace. It’s rather less common for these reports to come from Google, whose GenAI Gemini has repeatedly earned notoriety.

To its credit, though, the software giant has done just that, releasing an analysis of common patterns of AI misuse. The report describes an array of potential harms that range from creating false identities (“sockpuppeting”), to portraying real people in invented actions or circumstances, to AI translations of existing scam content. (READ MORE: Now We’ve Got Proof that Wikipedia is Biased)

Unsurprisingly, the most common categories of abuse involve images or videos of real persons. Many of these are deepfaked pornography, as with the Spanish students who were recently convicted of generating sexual images of female classmates. Others have political objectives; during Russia’s war on Ukraine, both Presidents Volodymyr Zelensky and Vladimir Putin have been digitally impersonated giving false orders.

AI Hasn’t Introduced New Harms to the Internet. It’s Just Made Them Easier.

Other cases appear, so far, to be of more academic interest. A fundamental issue in large language models (LLMs) like ChatGPT is that user input can be indistinguishable from the system’s underlying instructions. Manufacturer attempts to limit the LLM can thus be easily circumvented, in some cases by the user simply typing, “ignore previous instructions.” Early versions of most LLMs could be “jailbroken” to give all manner of illegal or vulgar advice; in one infamous case, a user was given a recipe for napalm after asking for it as a lullaby. In practice, however, such issues seem rarely to occur outside the efforts of dedicated researchers (or users tweaking the system’s nose). Most internet users, after all, don’t need AI to find material of questionable legal or moral fiber.

Indeed, GenAI has introduced very few entirely new categories of harm. Impersonation and image editing are old tricks, and they claimed victims long before AI came on the scene. Yet as the report rightly notes, the work of OpenAI and similar companies has made these techniques cheaper, faster, and of higher quality than ever before. In doing so, they have “altered the costs and incentives” of scams and other abuses. It’s one thing to spend weeks slaving over a good-enough fake, hoping to lure a few unwary souls; it’s quite another to generate a photorealistic image with the press of a button. With the American election only a few months away, one shudders to think of the likely deluge of deepfaked video ahead. (WATCH: The Weekend Spectator Ep. 2: AI Is Progressing Faster Than You Think)

In some ways, the GenAI abuses call to mind the early days of public internet usage. Printed media chain letters and junk mail made a similar leap in ease and scale; instead of laboriously and expensively hand-mailing physical letters, a bad actor could send malicious e-mails at no marginal cost. (It’s hard to remember now the days before “spam” acquired its current meaning.) Users in the late 1990s despaired that all legitimate information exchange might be buried in a tide of nonsense.

Technology Sometimes Solves Its Own Problems

But technology sometimes provides the answers to its own problems. Solutions came slowly, but come they did — whether from users more carefully guarding their “good” e-mail addresses, or from improved filtering technologies. It is not unreasonable to hope that AI may see a similar adjustment. Current detectors are unreliable, but that is not an immutable law. In a decade, omnipresent AI fakes may seem as quaint as Nigerian princes asking for your bank account. Other problems, like students using “undressing” pornography engines, may remain, but giving unsupervised smartphones to children is a mistake for many reasons and is facing growing backlash.

The Google report does raise two final concerns. In describing threats to GenAI systems, it describes “data poisoning” as follows:

For example… [one tool] allows artists to add invisible changes to the pixels in their art before uploading online, to break any models that use it for training. Such attacks exploit the fact that most GenAI models are trained on publicly available datasets… scraped from the web, which malicious actors can easily compromise.

The artists utilizing these tools — who are, after all, modifying their own art — might balk at being described as “malicious users.” Similar concerns were raised in June when Microsoft’s CEO of AI said that virtually all web content was “fair use. Anyone can copy it, recreate with it,” and that even materials explicitly denying this permission were “a gray area.” To the extent that guarding intellectual property is viewed as hostile behavior, AI creators invite justifiable backlash. (READ MORE: The Developing World (Still) Needs Golden Rice)

A second concern, post-ChatGPT, is that we may assume even real data is digital fakery. Several news agencies claimed video clips of President Biden’s infirmity were digital “cheapfakes”; academics attempting to punish AI plagiarism struggle with false-positive accusations of innocent students. Hamas’s Oct. 7 massacre offers a more vital example, with several apparently real images initially dismissed as fakes. With GenAI, many are rightly on guard against believing lies; it is perhaps less obvious, but no less vital, to be careful of denying the truth.