Watch out AI fans – cybercriminals are using jailbroken Mistral and Grok tools to build powerful new malware

  • AI tools are more popular than ever – but so are the security risks
  • Top tools are being leveraged by cybercriminals with malicious intent
  • Grok and Mixtral were both found being used by criminals

New research warns that top AI tools are powering ‘WormGPT’ variants – malicious GenAI tools that generate malware code, craft social engineering attacks, and even provide hacking tutorials.

With Large Language Models (LLMs) now in widespread use, experts from Cato CTRL found that tools like Mistral AI’s Mixtral and xAI’s Grok aren’t always being used the way they’re intended – jailbroken versions of both were found underpinning these WormGPT variants.


