Dark Web: cybercriminals discuss the illegal use of ChatGPT

According to new research from Kaspersky, there has been a noticeable increase in the number of discussions taking place on the dark web among cybercriminals about the illegal use of ChatGPT and other large language models (LLMs).

Almost 3,000 posts on the dark web dealt with this issue, suggesting various malicious uses, from creating malicious versions of the chatbot to exploring alternative projects such as XXXGPT and FraudGPT.

According to the researchers, interest peaked last March. However, discussions are continuing, showing cybercriminals' ongoing focus on exploiting artificial intelligence technologies for illegal activities.

See also: Chinese hackers create ransomware via ChatGPT

According to data shared by Kaspersky with Infosecurity, cybercriminals are exploring various ways to apply ChatGPT and AI in general to their malicious activities. For example, they discuss developing malware through the chatbot, processing stolen user data, analyzing files from infected devices, and much more.

The company also said that it has seen automated responses from ChatGPT, or equivalent chatbots, integrated into some dark web forums.

In addition, threat actors tend to share jailbreaks (sets of prompts that can unlock additional features) via various dark web channels, and devise ways to exploit legitimate tools, such as those used for pen-testing.

See also: ChatGPT Code Interpreter: the Trojan Horse of Hackers

Kaspersky also revealed that many cybercriminals steal ChatGPT accounts and sell them on the dark web; at least 3,000 posts advertising such accounts were found. This poses a significant threat to users and companies.

In response to these findings, Kaspersky recommended implementing reliable security solutions and dedicated services to combat high-profile attacks.

Illegal use of ChatGPT may also result in privacy violations, as language models can reproduce personal information entered into the system.

Illegal use can also lead to phishing attacks: attackers can use ChatGPT to create convincing fake messages or emails that attempt to gain access to sensitive information.

See also: NCSC: Artificial intelligence (AI) will increase ransomware attacks

Another possible impact is the dissemination of misleading or harmful information. Language models such as ChatGPT can be used to create and spread fake news or propaganda.

Finally, illegal use of ChatGPT can facilitate criminal activities, such as trafficking in illegal materials or incitement to commit illegal acts.

Source: www.infosecurity-magazine.com

Digital Fortress
https://secnews.gr
