# data/vuln_en/langchain/CVE-2023-38896.yaml
info:
  name: langchain
  cve: CVE-2023-38896
  summary: LangChain vulnerable to arbitrary code execution
  details: |
    An issue in Harrison Chase's langchain before version 0.0.236 allows a remote attacker to execute arbitrary code via the `from_math_prompt` and `from_colored_object_prompt` functions.
  cvss: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  severity: CRITICAL
  security_advise: |
    1. Upgrade to langchain>=0.0.236
    2. Review any usage of the `from_math_prompt` and `from_colored_object_prompt` functions to ensure it is not exposed to code injection
    3. Monitor for further updates or patches released by the maintainers
rule: version < "0.0.236"
references:
  - https://nvd.nist.gov/vuln/detail/CVE-2023-38896
  - https://github.com/hwchase17/langchain/issues/5872
  - https://github.com/hwchase17/langchain/pull/6003
  - https://github.com/langchain-ai/langchain/commit/8ba9835b925473655914f63822775679e03ea137
  - https://github.com/langchain-ai/langchain/commit/e294ba475a355feb95003ed8f1a2b99942509a9e
  - https://github.com/langchain-ai/langchain
  - https://github.com/pypa/advisory-database/tree/main/vulns/langchain/PYSEC-2023-146.yaml
  - https://twitter.com/llm_sec/status/1668711587287375876
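The advisory above concerns LangChain's program-aided (PAL) chains, which execute Python code generated by a language model. The following is a minimal, self-contained sketch of that pattern, not the actual langchain code: the names `pal_solve` and `solution` are illustrative assumptions. It shows why executing model output without sandboxing lets an injected prompt run attacker-chosen code.

```python
def pal_solve(llm_output: str) -> object:
    """Sketch of a PAL-style solver: exec() the model-generated program
    and call the function it defines. Before langchain 0.0.236 the
    generated code was executed without restriction."""
    scope: dict = {}
    exec(llm_output, scope)  # <-- arbitrary code execution point
    return scope["solution"]()

# A benign math answer: the model writes a small program, which is run.
benign = "def solution():\n    return 2 + 2\n"
print(pal_solve(benign))  # 4

# A prompt-injected answer travels the same path; here it harmlessly
# reaches the os module, but system() or similar would work equally well.
malicious = "def solution():\n    return __import__('os').getpid()\n"
print(isinstance(pal_solve(malicious), int))  # True
```

The fix referenced in the commits above was to harden and gate this execution path; until an upgrade to >=0.0.236 is possible, treating any LLM-generated code as untrusted input is the safe default.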