Can Machines Dream of Secure Code?
Do machines hallucinate insecure code? In the blink of an eye we jumped on the AI bandwagon and pivoted from AI skepticism to AI adoption, but what did we trade off exactly? Writing secure code is tougher than it seems, and we humans are getting it wrong time and time again. Even highly popular open-source software projects are repeatedly found vulnerable. So how do ChatGPT and GitHub Copilot live up to the standards of secure software? Developers have already embraced AI for augmented software development, but let's challenge the AI tools you've come to rely on day to day and see how capable they are of producing secure software.
- Liran Tal is an award-winning software developer, security researcher, and open source champion in the JavaScript community
- He's an internationally recognized GitHub Star, acknowledged for his open source advocacy, and has received the OpenJS Foundation's Pathfinder Award for Security for his work on Node.js security
- His contributions to developer security education include leading OWASP projects, building supply chain security tools, participating in CNCF and OpenSSF initiatives, and authoring books such as O'Reilly's Serverless Security
- He leads the developer advocacy team at Snyk.io and is on a mission to empower developers with better application security skills
- Twitter, GitHub, Website
Talk transcription
Greetings, everyone. Welcome to my presentation on AI security and code. I am Liran Tal, a developer advocate at Snyk, and I am pleased to be speaking with you today. My focus will primarily be on discussing security in the context of AI, large language models (LLMs), chat systems, such as ChatGPT, and tools like GitHub Copilot. I'm often recognized by my distinctive Yoda hat, which adds a touch of whimsy to my presentations.
As a developer advocate at Snyk, my mission revolves around assisting developers in integrating security into their projects. I achieve this goal through various means, including contributing to the Node.js ecosystem, addressing JavaScript security concerns, and promoting best practices for writing secure code. I actively participate in projects within the Open Web Application Security Project (OWASP) community and others that focus specifically on Node.js development. You can easily find me involved in discussions and collaborations related to these topics.
Now, let's delve into the subject matter. I'd like to pose a question: what is the most perilous line of code you've encountered? Allow me to present an example for consideration. At first glance, it appears to be an ordinary loop. Upon closer inspection, it becomes evident that this code poses a risk when executed on systems with a narrower integer width, such as a 16-bit platform. The loop attempts to count to 100,000, but a signed 16-bit integer tops out at 32,767; the counter overflows and wraps around before the condition can ever become false, turning the code into an infinite loop.
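To make the failure mode concrete, here is a minimal sketch. JavaScript numbers are 64-bit floats, so the 16-bit wraparound is simulated here with an Int16Array; the variable names and the safety guard are mine, not from the talk.

```javascript
// Simulate a signed 16-bit counter: Int16Array wraps 32,767 -> -32,768.
const counter = new Int16Array(1);
let iterations = 0;

for (counter[0] = 0; counter[0] < 100000; counter[0]++) {
  if (++iterations > 40000) {
    // We never got close to 100,000: the counter overflowed and wrapped
    // to a negative value, so the loop condition can never become false.
    console.log(`still looping; counter is ${counter[0]} after ${iterations} iterations`);
    break; // bail out of what would otherwise be an infinite loop
  }
}
```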
This example underscores the critical importance of considering the context in which code operates. Even seemingly innocuous code snippets can have devastating consequences depending on the environment in which they run. It is imperative to exercise caution and understanding when utilizing tools like ChatGPT or GitHub Copilot, as their suggestions may inadvertently introduce vulnerabilities. Moving forward, let's address the concept of security regrets in software development. Your first encounter with a vulnerable line of code often serves as a stark reminder of the importance of prioritizing security in application development. Building robust, secure software requires adherence to high standards and meticulous attention to detail.
Consider the following code snippet from a Node.js and Express middleware function, which appears to handle user authentication. While the code includes basic validation, it lacks thorough sanitization and validation of user input, particularly concerning the username and password fields. This oversight exposes the system to potential security vulnerabilities, as user-supplied data is directly utilized in database queries.
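The talk's exact snippet isn't reproduced in this transcript, so the following is a minimal sketch of the pattern being described; the route shape, collection name, and database wiring are assumptions on my part.

```javascript
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

// Assumption: a local MongoDB instance; the URI and names are illustrative.
const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('app');

app.post('/login', async (req, res) => {
  const { username, password } = req.body;

  // "Basic validation" as described: a presence check only -- no type checks.
  if (!username || !password) {
    return res.status(400).send('Missing credentials');
  }

  // VULNERABLE: user-supplied values flow straight into the query filter.
  const user = await db.collection('users').findOne({ username, password });
  return user ? res.redirect('/admin') : res.status(401).send('Unauthorized');
});

client.connect().then(() => app.listen(3000));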
To illustrate the significance of this vulnerability, allow me to demonstrate a simple login system implemented in an Express application. Despite basic validation measures, the absence of comprehensive input sanitization leaves the system susceptible to exploitation. More broadly, these examples underscore the critical importance of incorporating robust security measures into software development practices: by prioritizing security from the outset and taking a proactive approach to risk mitigation, developers can safeguard their applications against potential threats.
Now let's walk through the demo. We have a JSON object containing a username and a password, which we pass to an HTTP client as a JSON request. Upon execution, we observe the HTTP POST request sent to the "/login" endpoint with the provided credentials. If the credentials are correct, the server responds with a redirect to "/admin", indicating a successful login. Incorrect credentials yield a "401 Unauthorized" response, as expected.
However, the demonstration takes a concerning turn when we manipulate the password field by passing an object instead of a string. The object contains a query-operator key that exploits MongoDB to perform a NoSQL injection attack. The success of this attack reveals a critical gap in the system's input validation and sanitization: without proper sanitization, malicious users can craft payloads that manipulate the underlying database queries, compromising the integrity and security of the application. Unlike traditional SQL injection, where input is manipulated to alter an SQL query string, NoSQL injection exploits how non-relational databases such as MongoDB interpret structured query objects.
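The exact operator key used in the talk isn't shown in the transcript; `$ne` is the canonical example of this attack, so a sketch of the payload might look like this:

```javascript
// Instead of a string password, send a query operator. The server builds
// { username: 'admin', password: { $ne: '' } } -- "password not equal to
// the empty string" -- which matches the admin user regardless of password.
fetch('http://localhost:3000/login', {
  method: 'POST',
  redirect: 'manual', // so we can observe the 302 -> /admin on "success"
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ username: 'admin', password: { $ne: '' } }),
}).then((res) => console.log(res.status, res.headers.get('location')));
```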
The significance of this vulnerability is underscored by its potential impact on the security and functionality of the application. Inadequate input validation not only jeopardizes user data but also exposes the system to various forms of exploitation, including data breaches and unauthorized access. Moreover, this vulnerability is not limited to this specific scenario: analysis of the Rocket.Chat open-source project revealed a similar vulnerability in its user authentication logic, emphasizing how pervasive this issue is across software development.
In conclusion, this demonstration highlights the critical importance of implementing robust input validation and sanitization mechanisms to mitigate the risk of injection attacks. Developers must exercise caution when handling user-supplied data and adhere to secure coding practices to safeguard against potential vulnerabilities. Additionally, reliance on external sources, such as Stack Overflow or open-source libraries, necessitates thorough scrutiny to ensure the integrity and security of the codebase.
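A minimal mitigation consistent with this advice (my sketch, not code shown in the talk) is to reject non-string credentials before they ever reach the query:

```javascript
// A drop-in guard for the /login handler shown earlier: reject any
// credential that is not a plain string before it reaches the query.
function areStringCredentials(username, password) {
  return typeof username === 'string' && typeof password === 'string';
}

// Usage inside the handler, before calling findOne():
//   if (!areStringCredentials(username, password)) {
//     return res.status(400).send('Credentials must be strings');
//   }
```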
The scenario you've outlined underscores the potential risks associated with incorporating code from external sources, such as NPM packages, Stack Overflow, ChatGPT, or GitHub Copilot, into your projects without thorough scrutiny. While these tools offer convenience and efficiency, they also introduce the possibility of security vulnerabilities and other concerns that must be carefully evaluated.
Firstly, relying on third-party code without adequate vetting can expose your application to various security risks, including but not limited to, injection attacks, data breaches, and unauthorized access. The absence of comprehensive security testing and review increases the likelihood of vulnerabilities being introduced into your codebase, potentially compromising the integrity and confidentiality of sensitive data. Moreover, the emergence of AI-powered tools, such as ChatGPT and GitHub Copilot, introduces additional complexities and uncertainties. These tools may exhibit what is commonly referred to as "AI hallucinations," where they confidently generate incorrect or nonsensical output based on flawed input or incomplete understanding of the task at hand. Such hallucinations can lead to the incorporation of insecure or non-functional code into your projects, further exacerbating security concerns.
Furthermore, the opaque nature of machine learning models and their decision-making processes poses challenges for developers seeking to understand and mitigate potential risks. The lack of transparency regarding how these models operate and the factors influencing their outputs makes it difficult to assess their reliability and suitability for specific tasks. In practical terms, while AI-powered tools can expedite the development process and enhance productivity, they should be used judiciously and supplemented with rigorous testing and validation procedures. Developers must exercise caution when incorporating code generated by AI models into their projects, ensuring that it undergoes thorough review and testing to mitigate the risk of introducing vulnerabilities.
In conclusion, while the use of AI-powered tools offers undeniable benefits in terms of efficiency and productivity, it also introduces new challenges and risks that must be carefully managed. By adopting a cautious and vigilant approach to code reuse and to AI-generated content, developers can mitigate the potential security risks and preserve the integrity of their applications. The next few code snippets illustrate common security pitfalls and show how AI security tooling can help identify and mitigate them.
The first example demonstrates a file upload endpoint implemented with Fastify, a Node.js web framework. While the code appears functional and may execute without errors, it harbors a critical security flaw: a path traversal vulnerability. The flaw arises from unsanitized user input, specifically the file name, which allows an attacker to manipulate the file path and write files outside the intended directory. The use of the createWriteStream API exacerbates the risk, as it writes to attacker-influenced paths directly on the filesystem. The fact that the Snyk extension flags this vulnerability inside the IDE shows the value of AI-powered security tools that proactively identify such issues in code.
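The endpoint itself isn't reproduced in the transcript; the following is a plausible reconstruction of the vulnerable pattern, assuming the `@fastify/multipart` plugin (route and directory names are illustrative):

```javascript
const fastify = require('fastify')();
const fs = require('fs');
const path = require('path');

fastify.register(require('@fastify/multipart'));

fastify.post('/upload', async (req, reply) => {
  const data = await req.file();

  // VULNERABLE: data.filename is attacker-controlled. A name such as
  // "../../home/user/.ssh/authorized_keys" escapes the uploads directory.
  const target = path.join(__dirname, 'uploads', data.filename);

  await new Promise((resolve, reject) => {
    data.file
      .pipe(fs.createWriteStream(target))
      .on('finish', resolve)
      .on('error', reject);
  });

  return { uploaded: true };
});

fastify.listen({ port: 3000 });
```

A common fix is to take `path.basename(data.filename)` and then verify the resolved path still lives under the uploads directory before writing.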
The second experiment involves consulting ChatGPT about a seemingly innocuous code snippet for creating a temporary file. Despite the absence of user input and the apparent hardcoding of file names, ChatGPT's response fails to recognize the inherent security risk. This oversight underscores the limitations of AI models in understanding nuanced security concepts and the contextual factors that contribute to vulnerabilities. In this case, relying on the gettempdir function to place a predictably named file in a shared temporary directory introduces a time-of-check-to-time-of-use (TOCTOU) vulnerability: another process can create or swap the file between the moment the path is determined and the moment the file is written. The lack of security context in AI-generated responses highlights the need for human intervention and specialized security knowledge to accurately assess such risks.
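The snippet in the talk was Python (tempfile.gettempdir()); the same insecure pattern in Node.js, together with the safer alternative, looks roughly like this (file names are illustrative):

```javascript
const fs = require('fs');
const os = require('os');
const path = require('path');

// INSECURE: a fixed, guessable name in the shared, world-writable temp
// directory. An attacker can pre-create or replace this file between the
// moment the path is computed and the moment it is written (TOCTOU).
const tmpFile = path.join(os.tmpdir(), 'app-data.tmp');
fs.writeFileSync(tmpFile, 'sensitive data');

// SAFER: ask the OS to atomically create a unique, unpredictable directory.
const safeDir = fs.mkdtempSync(path.join(os.tmpdir(), 'app-'));
fs.writeFileSync(path.join(safeDir, 'data.tmp'), 'sensitive data');
```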
In conclusion, while AI security tools offer valuable assistance in identifying potential vulnerabilities, they are not infallible and may miss nuanced issues without human oversight. Developers must remain vigilant and augment automated security assessments with manual review and expert analysis to ensure the robustness and integrity of their applications. Ongoing research and development are also essential to enhance the ability of AI models to understand and address complex security challenges. The series of experiments that follows highlights significant concerns about the reliability and security implications of AI-generated code recommendations and responses, and underscores the need for caution and human oversight when relying on AI models for code-related tasks.
In the first experiment, ChatGPT provided feedback on a Python code snippet for process execution, specifically pinging a server. While the response acknowledged the potential functionality of the code, it failed to recognize the critical security vulnerability inherent in allowing user-controlled input (the server IP address) without proper sanitization. This oversight illustrates the limitations of AI models in identifying security risks and underscores the importance of human expertise in assessing code security.
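The experiment's snippet was Python; the equivalent flaw in Node.js, with a safer variant, looks like this (the function names are mine, not the talk's):

```javascript
const { exec, execFile } = require('child_process');

// VULNERABLE: the user-controlled address is interpolated into a shell
// command. Input like "8.8.8.8; cat /etc/passwd" runs an extra command.
function pingInsecure(address) {
  exec(`ping -c 1 ${address}`, (err, stdout) => console.log(stdout));
}

// SAFER: execFile passes the address as a single argv entry -- no shell
// ever parses it, so metacharacters like ";" have no effect.
function pingSafer(address) {
  execFile('ping', ['-c', '1', address], (err, stdout) => console.log(stdout));
}
```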
Experiment two delved into the challenge of determining the security implications of seemingly innocuous code snippets. Despite the absence of user input, ChatGPT's response failed to recognize the time-of-check-time-of-use (TOCTOU) vulnerability inherent in the use of environment variables to determine file paths. This highlights the need for developers to possess specialized security knowledge to accurately assess code vulnerabilities, as AI models may lack the contextual understanding necessary to identify such risks.
Furthermore, the experiments revealed potential risks in AI-generated recommendations for libraries and packages. In experiment four, ChatGPT suggested a library for implementing secure session cookies without considering factors such as the library's maintenance status or its suitability for the given use case. Such an oversight could lead developers to unwittingly adopt deprecated or insecure libraries, highlighting the importance of manual verification and due diligence when selecting dependencies. Lastly, experiment five exposed the danger of AI models recommending packages that do not exist: a malicious actor can register and publish a malicious package under a name the model tends to hallucinate, so that developers who trust the suggestion install the attacker's code. This underscores the importance of vigilance and verification in the software development process.
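As one form of that due diligence, a quick registry lookup can confirm whether a suggested package even exists and when it was last touched; the package name below is a hypothetical placeholder:

```javascript
const https = require('https');

// Hypothetical package name -- substitute whatever the AI suggested.
const pkg = 'some-suggested-package';

https.get(`https://registry.npmjs.org/${pkg}`, (res) => {
  if (res.statusCode === 404) {
    console.log(`"${pkg}" does not exist on npm -- possibly a hallucination.`);
    return;
  }
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    const meta = JSON.parse(body);
    // A very old "last modified" date can signal an unmaintained package.
    console.log(`last modified: ${meta.time && meta.time.modified}`);
  });
});
```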
In conclusion, while AI models offer valuable assistance in various aspects of software development, including code generation and recommendation, they are not infallible and may exhibit limitations in identifying security risks. Developers must complement AI-generated insights with human expertise, thorough review processes, and critical thinking to ensure the integrity, reliability, and security of their software applications. Additionally, ongoing research and development efforts are essential to enhance the capabilities of AI models and mitigate the risks associated with their use in software development.
The demonstration vividly illustrates the security risks of React's dangerouslySetInnerHTML API, particularly when coupled with data retrieved from an external source such as a database. The scenario highlights how AI-generated suggestions, like those from GitHub Copilot, can inadvertently introduce vulnerabilities if not carefully evaluated and sanitized. In the React codebase shown, using dangerouslySetInnerHTML to render dynamic content retrieved from a database creates a significant risk of cross-site scripting (XSS) attacks: malicious actors can inject arbitrary HTML and JavaScript into the rendered page, exploiting the trust the dangerouslySetInnerHTML API places in its input.
The demonstration shows how an attacker can manipulate the input data to inject malicious HTML, such as an image tag whose src points at a non-existent resource and whose error handler executes arbitrary JavaScript (e.g., displaying an alert dialog). This illustrates the consequences of failing to sanitize user-generated or external data before rendering it on the client side. Furthermore, the integration of AI-generated suggestions, such as those from GitHub Copilot, adds another layer of complexity to the security analysis process: while AI tools can provide valuable insights and code snippets, developers must thoroughly evaluate the suggested code for potential security vulnerabilities.
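A minimal sketch of the pattern, with component and prop names of my own choosing:

```jsx
// VULNERABLE: `text` arrives from the database unsanitized. A stored value
// such as '<img src="does-not-exist" onerror="alert(1)">' executes script
// the moment the browser fails to load the image (stored XSS).
function Comment({ text }) {
  return <div dangerouslySetInnerHTML={{ __html: text }} />;
}

// SAFER: render the value as text and let React escape it automatically.
function SafeComment({ text }) {
  return <div>{text}</div>;
}
```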
In this scenario, GitHub Copilot's suggestion to concatenate user-provided data directly into HTML elements without proper sanitization exacerbates the XSS vulnerability, demonstrating the need for developers to critically assess and validate AI-generated code recommendations. To mitigate such risks, developers should implement robust input validation and output encoding mechanisms to sanitize user-generated or external data before rendering it in the application. Additionally, continuous security testing, code review processes, and security awareness training for developers are essential to detect and prevent XSS vulnerabilities effectively.
Overall, the demonstration underscores the importance of integrating security best practices into the software development lifecycle and exercising caution when leveraging AI-generated code suggestions to ensure the integrity and security of web applications.
The next demonstration highlights the importance of understanding the context in which security measures are applied, particularly when leveraging AI-generated code suggestions. While implementing an escape function like escapeHTML can mitigate certain cross-site scripting (XSS) risks, it's crucial to recognize its limitations and ensure it's applied correctly within the specific context of the code.
In the demonstrated scenario, escapeHTML successfully encoded dangerous characters within HTML element content, but it failed to protect HTML attributes, such as the alt attribute of an image tag. That gap let the attacker inject malicious JavaScript through the alt attribute, bypassing the escape function entirely. The demonstration thus underscores the need for developers to critically evaluate AI-generated code suggestions and understand their limits: tools like GitHub Copilot can provide valuable assistance in coding tasks, but they should not be relied upon as a sole source of security guidance.
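To see why element-level escaping fails in an attribute context, consider this sketch; the escapeHTML implementation is an assumption, since the talk's version isn't shown:

```javascript
// A typical minimal escape helper: it encodes &, <, and > -- but not quotes.
function escapeHTML(str) {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// The payload needs no angle brackets, so escapeHTML leaves it untouched.
const alt = '" onerror="alert(1)';
const html = `<img src="does-not-exist" alt="${escapeHTML(alt)}">`;

// Result: <img src="does-not-exist" alt="" onerror="alert(1)">
// The quote closes the alt attribute and smuggles in an event handler --
// XSS despite the escaping. Attribute values need quote-aware encoding.
console.log(html);
```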
To effectively mitigate security risks in software development, developers should adhere to established security best practices and undergo continuous security training. Additionally, resources like the OWASP Top 10 for LLM Applications provide valuable insights into the common security challenges associated with AI technologies. Overall, the key takeaway is to approach AI-generated code suggestions with caution, supplementing them with rigorous security assessments and adopting a proactive stance toward security throughout the software development lifecycle.
I don't find gamification particularly enjoyable, but I do appreciate the concept of having fun while learning. Allow me to introduce you to a company named Lakera and their initiative known as Gandalf. If you haven't encountered it before, you can easily find it by searching for "Lakera Gandalf" or "Gandalf security." Gandalf is an AI chatbot, styled after the fictional wizard, that guards a password participants must discover. The challenge involves asking the character questions to persuade it to disclose the password, across levels of increasing difficulty.
Participating in Gandalf's challenges offers valuable insight into how large language models (LLMs) can be manipulated through prompt injection techniques. As participants progress through the levels, they come to appreciate how hard it is to build secure AI systems, because each level shows another way the guardrails can be bypassed. Regarding the responsible use of generative AI tools like GitHub Copilot and ChatGPT, I recommend exercising caution. Tools such as Snyk can enhance security by providing advice on code and dependencies; Snyk's IDE integration streamlines the process, automatically offering suggestions for code improvements and dependency management.
Thank you for attending my talk. If you have any further inquiries or wish to discuss related topics, feel free to reach out to me on Twitter or GitHub. Enjoy the remainder of the conference, and remember to prioritize security in your endeavors.