Executive Summary
Summary | |
---|---|
Title | PandasAI interactive prompt function can be exploited to run arbitrary Python code through prompt injection, which can lead to remote code execution (RCE) |
Information | | | |
---|---|---|---|
Name | VU#148244 | First Vendor Publication | 2025-02-11
Vendor | VU-CERT | Last Vendor Modification | 2025-02-11
Severity (Vendor) | N/A | Revision | M
Security-Database Scoring CVSS v3
CVSS vector: N/A | | | |
---|---|---|---|
Overall CVSS Score | N/A | |
Base Score | N/A | Environmental Score | N/A
Impact Subscore | N/A | Temporal Score | N/A
Exploitability Subscore | N/A | |
Security-Database Scoring CVSS v2
CVSS vector: | | | |
---|---|---|---|
CVSS Base Score | N/A | Attack Range | N/A
CVSS Impact Score | N/A | Attack Complexity | N/A
CVSS Exploit Score | N/A | Authentication | N/A
Detail
Overview

PandasAI, an open source project by Sinaptik AI, has been found vulnerable to prompt injection attacks. An attacker with access to the chat prompt can craft malicious input that is interpreted as code, potentially achieving arbitrary code execution. In response, Sinaptik AI has implemented specific security configurations to address this vulnerability.

Description

PandasAI is a Python library that allows users to interact with their data using natural language queries. The library parses these queries into Python or SQL code, leveraging a large language model (LLM), such as OpenAI's GPT, to generate explanations, insights, or code. As part of its setup, users import the AI model (LLM) that the library will use to interpret queries.

A vulnerability was discovered that enables arbitrary Python code execution through prompt injection. Researchers at NVIDIA demonstrated the ability to bypass PandasAI's restrictions, such as blocks on certain module imports, jailbreak protections, and allow lists. By embedding malicious Python code in various ways via a prompt, attackers can exploit the vulnerability to execute arbitrary code within the context of the process running PandasAI.

This vulnerability arises from the fundamental challenge of maintaining a clear separation between code and data in AI chatbots and agents. In the case of PandasAI, any code generated and executed by the agent is implicitly trusted, allowing attackers with access to the prompt interface to inject malicious Python or SQL code. The security controls of PandasAI (2.4.3 and earlier) fail to distinguish between legitimate and malicious inputs, allowing attackers to manipulate the system into executing untrusted code, which can lead to remote code execution (RCE), system compromise, or pivoting attacks on connected services. The vulnerability is tracked as CVE-2024-12366. Sinaptik AI has introduced new configuration parameters to address this issue and allow users to choose an appropriate security configuration for their installation and setup.

Impact

An attacker with access to the PandasAI interface can perform prompt injection attacks, instructing the connected LLM to translate malicious natural language inputs into executable Python or SQL code. This could result in arbitrary code execution, enabling attackers to compromise the system running PandasAI or maintain persistence within the environment.
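To make the attack surface concrete, the following is a minimal sketch of how a natural-language prompt is turned into code that PandasAI executes. It assumes the PandasAI 2.x `SmartDataframe` and `pandasai.llm.OpenAI` interface; the sample data and placeholder API token are illustrative and not taken from the advisory.

```python
# Minimal sketch of the prompt-to-code flow, assuming the PandasAI 2.x API
# (SmartDataframe, pandasai.llm.OpenAI); verify names against your version.
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI

df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "Japan"],
    "revenue": [5000, 3200, 2900],
})

# The LLM translates natural-language prompts into Python/pandas code,
# which PandasAI then runs inside the local process.
llm = OpenAI(api_token="sk-...")  # placeholder token
sdf = SmartDataframe(df, config={"llm": llm})

# Legitimate use: the generated code is implicitly trusted and executed.
print(sdf.chat("Which country has the highest revenue?"))

# The vulnerability: an attacker who controls the prompt can phrase the
# request so that the generated code performs arbitrary actions (for
# example, importing os or spawning processes). In PandasAI 2.4.3 and
# earlier, the import restrictions and allow lists could be bypassed per
# the NVIDIA research, so such prompts could lead to code execution in
# the hosting process.
```

Because the same chat call handles both benign and hostile prompts, any user with access to that interface effectively has a path to code execution in the process hosting PandasAI.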
Solution

Sinaptik AI has introduced a Security parameter to the configuration file of the PandasAI project. Users can now select one of three security configurations. By choosing the appropriate configuration, users can tailor PandasAI's security to their specific needs; a minimal configuration sketch follows this section. Sinaptik AI has also released a sandbox; more information regarding the sandbox can be found on the appropriate documentation page.

Acknowledgements

Thank you to the reporter, the NVIDIA AI Red Team (Joe Lucas, Becca Lynch, Rich Harang, John Irwin, and Kai Greshake). This document was written by Christopher Cullen.
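The advisory does not enumerate the three configurations, so the following is only a hypothetical sketch of how the new setting might be applied. The `security` config key and the "advanced" value are assumptions drawn from the PandasAI documentation as understood here and should be verified against the project's current docs for your installed version.

```python
# Hypothetical sketch of opting into a stricter security configuration,
# assuming a "security" key in the Agent config; the key name and the
# "advanced" value are assumptions to confirm against the PandasAI docs.
import pandas as pd
from pandasai import Agent
from pandasai.llm import OpenAI

df = pd.DataFrame({"country": ["US", "UK"], "revenue": [5000, 3200]})

agent = Agent(
    df,
    config={
        "llm": OpenAI(api_token="sk-..."),  # placeholder token
        # Stricter setting intended to add extra checks on generated code
        # before execution (one of the three configurations the vendor
        # describes; exact option names may differ by version).
        "security": "advanced",
    },
)

print(agent.chat("Which country has the highest revenue?"))
```

For stronger isolation, the separately released sandbox is intended to run the generated code outside the main process; see the PandasAI sandbox documentation for details.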
Original Source
URL: https://kb.cert.org/vuls/id/148244
Alert History
Date | Information
---|---|
2025-02-11 17:33:12 |
2025-02-11 17:20:18 |