THE AMERICA ONE NEWS
Feb 22, 2025
Le Monde
22 Feb 2024



DarkGPT, EscapeGPT, WormGPT, WolfGPT, EvilGPT, DarkBARD, BadGPT, FreedomGPT. These names probably mean nothing to you, but their suffixes may put you on the right track. These are chatbots, like ChatGPT or Bard, but developed by the organized crime industry. They are capable of coding computer viruses, writing phishing emails, building fake websites, scanning a site for vulnerabilities in order to attack it, and more.

On January 6, a team from Indiana University Bloomington published the first deep dive into this dark side of artificial intelligence (AI). One of the authors, Xiaojing Liao, coined the name "Malla," for "malicious LLM applications," to cover all of these programs and services. She explained that the team identified 212 of them between February and September 2023, and that the number is still growing.

XiaoFeng Wang, another co-author, said, "We're used to this kind of 'game.' The terrain has simply changed. It used to be the internet, then mobile phones, then the Cloud. Our study has shown that you no longer need to be a great programmer to do harm, through viruses or phishing. You just have to use these services." What's more, according to the researchers, these services are less expensive (between $5 and $199, or around €4.60 to €184) than comparable offerings that existed before AI, which averaged $399, and they remain lucrative. Analysis of bitcoin exchanges for the WormGPT platform – which specialized in viruses and phishing emails and has since closed – revealed revenue of $28,000 over three months of activity.

Taking its analysis further, the team also assessed the reliability of these programs, and the results were not bad: the viruses, emails and websites they generated scored well in effectiveness tests, even if quality varied from one service to another.

The article also detailed the methods used by cybercriminals. Either they took open-source language models (whose parameters are accessible) and fine-tuned them to specialize in malicious tasks, or they bypassed the safeguards of commercial services.

In the first case, the advantage was that these programs had no filters or bans and could be trained on any content. Thus Pygmalion-13B, based on Meta's Llama-13B, has been trained to generate offensive and violent content. OpenAI's Davinci-002 and Davinci-003, precursors of the models behind ChatGPT, have also been used to produce viruses and phishing emails.

An unpleasant surprise was discovering that these ad hoc models were then often made available on established platforms such as Poe or FlowGPT, which let users test dozens of conversational agents, including malicious ones, even though this violates those sites' rules. "Some players have no interest in reacting until their business is affected. They're not interested in security until we can prove that it can do damage," said Wang.
