
AI in Action: The Unseen Cybersecurity Risks

Rabah Moula


In our rapidly advancing technological landscape, one can't help but marvel at the strides we've made. From smartphones to self-driving cars, we've seen it all, and it only gets more interesting. However, in our rush to explore the fascinating world of Artificial Intelligence (AI), specifically generative AI and Large Language Models (LLMs) such as ChatGPT, we may be exposing ourselves to serious cybersecurity threats, particularly in open source development and the software supply chain.

 

The Raging Fire of AI and Its Unseen Dangers

Recent findings from a study by Rezilion reveal a worrying reality. The open source community, in a hurry to embrace the sweeping tide of generative AI, is largely overlooking the inherent security threats presented by LLM-based technologies. With over 30,000 GPT-related projects currently on GitHub, far too many of these projects are insecure, posing substantial risks for the organizations that adopt them.


Generative AI's exponential growth is anticipated to pose a rising security risk. Yotam Perkal, director of vulnerability research at Rezilion, argues that the mounting popularity of these systems inevitably makes them attractive targets for cyber attackers.


An investigation into 50 popular GPT and LLM-based open source projects found a concerning pattern: the more popular a project, the lower its security rating tended to be. The concern is that these vulnerabilities will compound if developers continue to use these projects as foundations for enterprise-level generative AI technologies.


Unpacking the Risks

Rezilion's research identified four primary areas of generative AI security risk:

  1. Trust boundary risk: Organizations establish trust boundaries to guarantee the security and reliability of an application's components and data. But as LLMs start to access external resources, the unpredictable nature of their outputs can be exploited by cybercriminals.

  2. Data management risk: There's a threat of data leakage and training-data poisoning, where an LLM could accidentally reveal sensitive information or malicious actors could intentionally introduce vulnerabilities into the LLM's training data.

  3. Inherent model risk: Risks here relate to the possibility of AI models returning false data sources or recommendations and the over-reliance on LLM-generated content. OpenAI, the creator of ChatGPT, has already warned users about these risks.

  4. Basic security best practice risk: This area covers the risks associated with improper error handling or insufficient access controls. Attackers can mine an LLM application's error messages for sensitive details or use them to identify known vulnerabilities to exploit (a minimal defensive sketch follows this list).
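
To make the error-handling point concrete, here is a minimal Python sketch of a defensive wrapper an application might place around its LLM calls. The names call_llm and answer_user are hypothetical stand-ins, not part of any particular library; the idea is simply that full tracebacks stay in server-side logs while callers only ever see a generic message tied to an opaque incident ID.

```python
import logging
import uuid

logger = logging.getLogger("llm_gateway")

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client an application uses."""
    raise NotImplementedError("replace with the real client call")

def answer_user(prompt: str) -> str:
    """Return the model's answer without leaking internal details on failure."""
    try:
        return call_llm(prompt)
    except Exception:
        # Keep the full traceback server-side, keyed by an opaque incident ID...
        incident_id = uuid.uuid4().hex[:8]
        logger.exception("LLM call failed (incident %s)", incident_id)
        # ...and return only a generic message, so error text cannot be mined
        # for stack traces, prompts, credentials, or library versions.
        return f"Sorry, something went wrong (incident {incident_id})."
```

The same separation applies to access controls: whatever the model or its tooling raises internally, the caller-facing surface should expose as little of it as possible.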

Cybersecurity Measures

A first step in risk mitigation is understanding that integrating generative AI and LLMs comes with unique challenges and security concerns. Perkal advocates for a "secure-by-design" approach when implementing generative AI-based systems.

In addition, organizations should monitor and log LLM interactions, regularly audit and review the AI system's responses to detect potential security and privacy issues, and update the LLM accordingly; a minimal logging sketch follows below.
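
As a rough illustration of what that monitoring might look like, the sketch below appends one structured audit record per interaction. The helper name log_interaction and the logger name llm_audit are assumptions for this example; a real deployment would also need retention rules, access control on the log itself, and redaction of sensitive fields.

```python
import json
import logging
import time

audit_log = logging.getLogger("llm_audit")

def log_interaction(user_id: str, prompt: str, response: str) -> None:
    """Record one LLM exchange as a structured, reviewable audit entry."""
    record = {
        "ts": time.time(),        # when the exchange happened
        "user": user_id,          # who issued the prompt
        "prompt": prompt,         # what was sent to the model
        "response": response,     # what the model returned
        "response_chars": len(response),
    }
    # One JSON object per line keeps the trail easy to grep and review later.
    audit_log.info(json.dumps(record))
```

Regular review of records like these is what lets a security team spot leaked secrets, poisoned outputs, or policy-violating responses before they spread downstream.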

Glossary


  1. Generative AI: AI systems that can generate data that is similar to the data they were trained on.

  2. Large Language Models (LLMs): AI models that generate human-like text based on the input they receive.

  3. Open source: A type of software that makes its source code available to the public, allowing anyone to view, use, modify, and distribute the project's source code.

  4. Trust boundaries: Areas within a system or network that have been verified and trusted to handle data securely.

  5. Data leakage: Unintentional exposure of sensitive data.

  6. Training-data poisoning: An attack where an adversary manipulates the training data to influence the results of a machine learning model.



Summary

Generative AI and LLMs like ChatGPT pose significant cybersecurity risks to enterprises due to their rapid adoption in the open source community and the software supply chain. Despite the benefits, these technologies often have vulnerabilities that malicious actors can exploit. Organizations must adopt a "secure-by-design" approach and monitor LLM interactions to mitigate these risks.
