
OpenAI and Microsoft Thwart Cyberattacks Linked to Global Hacking Groups

OpenAI, the creator of the AI chatbot ChatGPT, working with its major investor Microsoft, has thwarted five cyberattacks attributed to state-affiliated threat actors. The attacks were reportedly orchestrated by groups tied to the military and intelligence services of Russia, Iran, China, and North Korea. The disclosure came in a report from Microsoft, which has been tracking hacking groups interested in leveraging AI large language models (LLMs) for their cyber operations.

LLMs such as ChatGPT generate human-like text by learning from extensive datasets. The report attributed the attacks to two China-affiliated groups, Charcoal Typhoon and Salmon Typhoon, alongside Crimson Sandstorm from Iran, Emerald Sleet from North Korea, and Forest Blizzard from Russia. These groups attempted to use GPT-4 for a range of malicious activities, including researching companies and cybersecurity tools, debugging code, generating scripts, running phishing campaigns, translating technical papers, evading malware detection, and studying satellite communication and radar technologies. OpenAI deactivated the implicated accounts upon discovery.

The incident prompted OpenAI to impose a blanket ban on state-sponsored hacking groups using its AI products. Even so, OpenAI acknowledges that completely preventing misuse of its technology remains an ongoing challenge. The surge in AI-generated deepfakes and scams since ChatGPT's release has drawn growing attention from policymakers, who are scrutinizing generative AI more closely.

In response to these concerns, OpenAI launched a $1 million cybersecurity grant program in June 2023 to bolster AI-driven cyber defenses. Nevertheless, hackers continue to find ways around the safeguards, exploiting ChatGPT to generate harmful content.

In a broader push to enhance AI safety, more than 200 entities, including tech giants OpenAI, Microsoft, Anthropic, and Google, have joined the Biden Administration's United States AI Safety Institute Consortium (AISIC), which supports the AI Safety Institute. The initiative, spurred by President Joe Biden's executive order on AI safety issued in late October 2023, aims to foster the safe development of AI technologies, counter the threat of AI-generated deepfakes, and address pressing cybersecurity challenges.
