Artificial intelligence tools such as ChatGPT have become the latest lure for "bad actors" to distribute malware, scams and spam, research from Meta's security team warns.
A May 1 research report from Facebook parent Meta's security team identified 10 malware families posing as ChatGPT and similar artificial intelligence tools in March, some of them distributed through various browser extensions.
Meta explained that these “bad actors” — malware operators, scammers, spammers and the like — have moved to AI because it’s the “latest wave” of what is capturing “people’s imagination and excitement.”
The research comes amid a surge of interest in artificial intelligence, with ChatGPT in particular drawing much of the attention.
"For example, we've seen threat actors create malicious browser extensions available in official web stores that claim to offer ChatGPT-related tools," the report noted.
Meta's security team said some of these "malicious extensions" included working ChatGPT functionality that coexisted alongside the malware.
The firm's security team explained that bad actors tend to gravitate toward whatever the latest craze is, referencing the hype around digital currency and the scams that have come with it:
“The generative AI space is rapidly evolving and bad actors know it,” they added, stressing the need to be “vigilant.”
Guy Rosen, Meta’s chief security officer, went one step further in a recent interview with Reuters by stating that “ChatGPT is the new crypto” for these bad actors.
Related: OpenAI launches bug bounty program to combat system vulnerabilities
It should, however, be noted that Meta is making its own developments in generative AI.