By Consultants Review Team
Microsoft and OpenAI have jointly revealed how cybercriminals are leveraging advanced language models like ChatGPT to enhance their cyberattacks. Their recent research uncovered concerning efforts by hacker groups associated with Russia, North Korea, Iran, and China to exploit such tools for a range of malicious purposes.
In a recent blog post, Microsoft emphasized that cybercrime syndicates and state-sponsored threat actors are actively exploring emerging AI technologies to bolster their operations and evade security measures. The Strontium group, also known as APT28 or Fancy Bear and linked to Russian military intelligence, has been using large language models (LLMs) to analyze satellite communication protocols and develop sophisticated social engineering tactics. North Korean hackers from the Thallium group have been leveraging LLMs to identify vulnerabilities and orchestrate phishing campaigns, while Iranian hackers from the Curium group have used them to craft phishing emails and evade antivirus software. Chinese state-affiliated hackers have likewise been employing LLMs for purposes including research and tool enhancement.
Although major cyberattacks using LLMs have not yet been observed, Microsoft and OpenAI remain vigilant and have dismantled accounts and assets associated with these malicious groups. Microsoft has also warned of future threats, such as voice impersonation, highlighting the risk posed by AI-powered voice synthesis fraud.
In response to growing AI-driven cyber threats, Microsoft is turning AI into a defensive tool. Homa Hayatyfar, principal detection analytics manager at Microsoft, emphasized the importance of using AI to fortify protective measures, improve detection capabilities, and respond swiftly to emerging threats. This proactive approach aims to keep defenders ahead of adversaries in an evolving, AI-driven threat landscape.