Artificial intelligence has catapulted to the forefront as the tech trend of 2023, reflected notably in a surge of AI-related stock prices on the Nasdaq exchange. Yet the technology is not immune to abuse by hackers, a key finding detailed in a new report released by Vulcan Cyber Ltd.
The report enhances our understanding of the cybersecurity hazards arising from the swift expansion of generative AI technology. It shines a spotlight particularly on programs such as OpenAI LP’s ChatGPT, emphasizing that while AI’s promise is significant, it also brings potential security challenges.
Vulcan Cyber’s study primarily reveals how hackers could exploit ChatGPT to disseminate harmful packages into developers’ digital environments. Vulcan’s researchers argue that the threat is realistic due to AI’s ubiquity across almost all business applications, the characteristics of software supply chains, and the wide use of open-source code libraries.
The report underscores what its authors term “AI package hallucination.” In certain scenarios, AI systems like ChatGPT generate plausible yet ultimately non-existent coding libraries. When ChatGPT suggests these “hallucinated” packages, a malicious actor could create and publish a harmful package under the same name, leaving an otherwise secure environment exposed to unforeseen cyber threats.
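The attack pattern can be sketched in a few lines: any package name an AI assistant suggests that does not actually exist in the registry is a slot an attacker could register first. The sketch below is illustrative only; the package names, the `known_registry` set, and the `find_hallucinated` helper are hypothetical assumptions, not details from Vulcan Cyber’s report.

```python
# Illustrative sketch of "AI package hallucination" squatting.
# All package names below are hypothetical examples, not real packages.

def find_hallucinated(suggested: list[str], known_registry: set[str]) -> list[str]:
    """Return AI-suggested package names absent from the registry.

    Each such name is a gap an attacker could fill with a malicious
    package before an unsuspecting developer tries to install it.
    """
    return [name for name in suggested if name.lower() not in known_registry]

# A toy stand-in for a real package index such as PyPI.
known_registry = {"requests", "numpy", "flask"}

# Names an AI assistant might plausibly emit, one of them invented.
suggested = ["requests", "flask", "fast-json-utils"]

print(find_hallucinated(suggested, known_registry))  # → ['fast-json-utils']
```

In practice the lookup would query the real index (for Python, the PyPI project pages) rather than a local set, but the squatting logic is the same.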
Increasing dependence on AI tools for professional activities could expose users to cybersecurity threats, the researchers caution. If developers, rather than relying on traditional platforms like Stack Overflow, begin to turn to AI such as ChatGPT for coding solutions, they could inadvertently install these malicious packages, thereby jeopardizing the wider enterprise.
The Vulcan Cyber researchers emphasize the need for greater attention, but they maintain that this possible vulnerability does not necessarily warrant halting AI’s advancement. The report instead encourages heightened awareness and proactivity, especially among developers who increasingly incorporate AI into their daily tasks.
The report advocates for developers to exercise discernment when validating libraries, particularly those recommended by AI. It urges developers to ensure the authenticity of a package before installing it by examining factors like the package’s creation date, download count, comments, and any notes attached.
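Those checks can be folded into a simple pre-install heuristic. The metadata fields and thresholds below are arbitrary assumptions for illustration, not values from the report; real vetting should also include reviewing the package’s source code and maintainers.

```python
from datetime import date

# Hypothetical vetting heuristic; field names and thresholds are
# illustrative assumptions, not an established standard.
def looks_suspicious(meta: dict, today: date = date(2023, 6, 15)) -> list[str]:
    """Return a list of red flags for a package's metadata."""
    flags = []
    age_days = (today - meta["created"]).days
    if age_days < 30:
        flags.append("package is less than a month old")
    if meta["downloads"] < 1000:
        flags.append("very low download count")
    if not meta["notes"]:
        flags.append("no description or release notes")
    return flags

# Example metadata for a hypothetical freshly-registered package.
meta = {"created": date(2023, 6, 1), "downloads": 42, "notes": ""}
for flag in looks_suspicious(meta):
    print(flag)
```

A package tripping several of these flags at once, especially one first suggested by an AI assistant, deserves manual review before installation.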
Recognizing a malicious package may be difficult if the threat actor skillfully hides their tracks or employs additional strategies such as building a functional trojan package, according to the researchers. With threat actors already orchestrating supply chain attacks by publishing harmful libraries to well-known repositories, they stress the necessity for developers to scrutinize the libraries they use and confirm their legitimacy.
Considering AI’s current popularity, the report rightly highlights a substantial cybersecurity threat arising quickly from the extensive adoption of generative AI technologies.
The researchers advocate for enhanced alertness and proactivity, especially from developers. They underscore the necessity of thoroughly validating libraries recommended by AI platforms and stress the importance of striking a balance between harnessing AI’s enormous potential and conscientiously mitigating the associated risks.