# CVE-2023-29374.yaml
info:
  name: langchain
  cve: CVE-2023-29374
  summary: LangChain vulnerable to code injection
  details: |
    In LangChain through 0.0.131, the `LLMMathChain` chain allows prompt injection attacks
    that can execute arbitrary code via the Python `exec()` method.
  cvss: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  severity: CRITICAL
  security_advise: |
    1. Upgrade to langchain>=0.0.132
    2. Review and patch any custom chain implementations to prevent prompt injection
    3. Implement input validation to sanitize user inputs
rule: version < "0.0.132"
references:
  - https://nvd.nist.gov/vuln/detail/CVE-2023-29374
  - https://github.com/hwchase17/langchain/issues/1026
  - https://github.com/hwchase17/langchain/issues/814
  - https://github.com/hwchase17/langchain/pull/1119
  - https://github.com/langchain-ai/langchain
  - https://github.com/pypa/advisory-database/tree/main/vulns/langchain/PYSEC-2023-18.yaml
  - https://twitter.com/rharang/status/1641899743608463365/photo/1
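The `rule` field above (`version < "0.0.132"`) can be evaluated programmatically. A minimal sketch in Python, assuming the widely used `packaging` library; `is_vulnerable` is a hypothetical helper written for illustration, not part of any scanner:

```python
# Hypothetical checker for this advisory's rule: version < "0.0.132".
# Uses packaging.version for PEP 440-compliant version comparison
# (plain string comparison would mis-order e.g. "0.0.9" vs "0.0.10").
from packaging.version import Version

# Boundary taken from the advisory's `rule` field.
AFFECTED_BELOW = Version("0.0.132")

def is_vulnerable(installed: str) -> bool:
    """Return True if the given langchain version matches CVE-2023-29374."""
    return Version(installed) < AFFECTED_BELOW

print(is_vulnerable("0.0.131"))  # last affected release
print(is_vulnerable("0.0.132"))  # first fixed release
```

Comparing `Version` objects rather than raw strings matters here because the affected range crosses a two-to-three-digit patch boundary.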