/ data / vuln_en / langchain / CVE-2025-46059.yaml
CVE-2025-46059.yaml
 1  info:
 2    name: langchain
 3    cve: CVE-2025-46059
 4    summary: Indirect prompt injection vulnerability in Langchain's GmailToolkit.
 5    details: |
 6      The langchain-ai v0.3.51 was found to have an indirect prompt injection vulnerability within its GmailToolkit component.
 7      This flaw enables attackers to execute arbitrary code and compromise the application through a specially crafted email message.
 8    cvss: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
 9    severity: CRITICAL
10    security_advise: |
11      1. Upgrade to a patched version of langchain-ai (if available).
12      2. Implement strict input validation and sanitization for all email content processed by GmailToolkit.
13      3. Review and restrict permissions for the Langchain application to minimize the impact of potential compromises.
14      4. Refer to the official Langchain security documentation for best practices.
15  rule: version < "0.3.51"
16  references:
17    - https://nvd.nist.gov/vuln/detail/CVE-2025-46059
18    - https://github.com/langchain-ai/langchain-community/issues/217#issuecomment-3144824471
19    - https://github.com/langchain-ai/langchain/issues/30833
20    - https://github.com/Jr61-star/CVEs/blob/main/CVE-2025-46059.md
21    - https://python.langchain.com/docs/security