Hackers are using AI to create vicious malware, says FBI

The FBI has warned that hackers are running wild with generative artificial intelligence (AI) tools like ChatGPT, quickly creating malicious code and launching cybercrime sprees that would have taken far more effort in the past.

The FBI detailed its concerns on a call with journalists, explaining that AI chatbots have fueled all kinds of illicit activity, from scammers and fraudsters perfecting their techniques to terrorists consulting the tools on how to launch more damaging chemical attacks.

Image: A hacker typing on an Apple MacBook laptop while holding a phone, with code on both screens. Sora Shimazaki / Pexels

According to a senior FBI official (via Tom’s Hardware), “We expect over time as adoption and democratization of AI models continues, these trends will increase.” Bad actors are using AI to supplement their regular criminal activities, they continued, including using AI voice generators to impersonate trusted people in order to defraud loved ones or the elderly.

It’s not the first time we’ve seen hackers take tools like ChatGPT and twist them to create dangerous malware. In February 2023, researchers from security firm Check Point discovered that malicious actors had been able to alter a chatbot’s API, enabling it to generate malware code and putting virus creation at the fingertips of almost any would-be hacker.

Is ChatGPT a security threat?

Image: A MacBook Pro on a desk with ChatGPT's website showing on its display. Hatice Baran / Unsplash

The FBI takes a very different stance from some of the cybersecurity experts we spoke to in May 2023. Those experts told us that the threat from AI chatbots has been largely overblown, with most hackers finding better code exploits in traditional data leaks and open-source research.

For instance, Martin Zugec, Technical Solutions Director at Bitdefender, explained that “the majority of novice malware writers are not likely to possess the skills required” to bypass chatbots’ anti-malware guardrails. On top of that, Zugec noted, “the quality of malware code produced by chatbots tends to be low.”

That offers a counterpoint to the FBI’s claims, and we’ll have to see which side proves correct. But with ChatGPT maker OpenAI discontinuing its own tool designed to detect chatbot-generated text, the news has not been encouraging lately. If the FBI is right, there could be tough times ahead in the battle against hackers and their attempts at chatbot-fueled malware.

Alex Blake