CVE-2026-34159.yaml
info:
  name: llama-cpp
  cve: CVE-2026-34159
  summary: llama.cpp RPC Backend Unauthenticated Remote Code Execution via deserialize_tensor() Buffer=0 Bypass
  details: >-
    llama.cpp prior to build b8492 contains a critical memory corruption vulnerability in the RPC
    backend's deserialize_tensor() function. When a tensor's buffer field is set to 0, all bounds
    validation is skipped. An unauthenticated attacker with TCP access to the RPC server port
    (default: 50052) can send crafted GRAPH_COMPUTE messages to read and write arbitrary process
    memory. Combined with pointer leaks from the ALLOC_BUFFER/BUFFER_GET_BASE RPC commands (which
    return raw heap addresses), this enables a full ASLR bypass and remote code execution without
    any credentials. The exploit chain is: (1) leak heap/code pointers via ALLOC_BUFFER/BUFFER_GET_BASE,
    (2) achieve arbitrary read/write via GRAPH_COMPUTE with buffer=0 tensors, (3) overwrite a
    buffer's iface.clear function pointer with the address of system(), and (4) trigger BUFFER_CLEAR
    to execute attacker-controlled commands. A working public PoC was disclosed alongside the
    advisory. This vulnerability bypasses the incomplete fixes for CVE-2024-42478 and
    CVE-2024-42479, which patched the GET_TENSOR and SET_TENSOR handlers but left the
    GRAPH_COMPUTE code path unprotected. CWE-119 (Improper Restriction of Operations within the
    Bounds of a Memory Buffer). CVSS 9.8 CRITICAL.
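  # Minimal C++ sketch of the vulnerable pattern described above (a hedged
  # reconstruction, not the actual llama.cpp source): when the client sends
  # buffer=0, the bounds-check branch is skipped entirely, yet the raw
  # attacker-supplied data pointer is still installed on the tensor.
  #
  #   #include <cstdint>
  #
  #   struct rpc_tensor  { uint64_t buffer; uint64_t data; };  // wire format
  #   struct ggml_tensor { void *   buffer; void *   data; };  // in-process
  #
  #   void deserialize_tensor(const rpc_tensor * in, ggml_tensor * out) {
  #       out->buffer = reinterpret_cast<void *>(in->buffer);
  #       if (out->buffer != nullptr) {
  #           // the bounds checks added for CVE-2024-42478/42479 run only
  #           // on this branch: assert data lies within [base, base + size)
  #       }
  #       // with in->buffer == 0 the branch above never executes, and the
  #       // unchecked pointer is written here; GRAPH_COMPUTE subsequently
  #       // reads and writes through it
  #       out->data = reinterpret_cast<void *>(in->data);
  #   }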
  cvss: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
  severity: CRITICAL
  security_advise: >-
    Upgrade llama.cpp to build b8492 or later, which patches the bounds-validation bypass in
    deserialize_tensor() for the GRAPH_COMPUTE code path. If upgrading is not immediately possible,
    apply network-level mitigations: restrict access to the RPC server port (default 50052) via
    firewall rules to trusted hosts only, and do not expose the RPC port to untrusted networks or
    the internet. The RPC backend is only built when llama.cpp is explicitly compiled with
    -DGGML_RPC=ON; if distributed inference is not required, rebuild without RPC support.
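  # A minimal sketch (POSIX sockets; the host address 192.0.2.10 is purely
  # illustrative) for verifying the firewall mitigation above: from an
  # untrusted network position, a successful TCP connect to port 50052
  # means the RPC server is still exposed.
  #
  #   #include <arpa/inet.h>
  #   #include <netinet/in.h>
  #   #include <sys/socket.h>
  #   #include <unistd.h>
  #   #include <cstdio>
  #
  #   int main() {
  #       int fd = socket(AF_INET, SOCK_STREAM, 0);
  #       sockaddr_in addr = {};
  #       addr.sin_family = AF_INET;
  #       addr.sin_port   = htons(50052);                    // default RPC port
  #       inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);  // host under test
  #       bool open = connect(fd, (sockaddr *)&addr, sizeof(addr)) == 0;
  #       std::printf(open ? "RPC port reachable: mitigation NOT in effect\n"
  #                        : "RPC port unreachable\n");
  #       close(fd);
  #       return 0;
  #   }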
  references:
    - https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-j8rj-fmpv-wcxw
    - https://github.com/ggml-org/llama.cpp/commit/39bf0d3c6a95803e0f41aaba069ffbee26721042
    - https://github.com/ggml-org/llama.cpp/pull/20908
    - https://nvd.nist.gov/vuln/detail/CVE-2026-34159
rule: 'version < "b8492"'
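# A minimal sketch of evaluating the rule above (assumption: llama.cpp build
# tags are "b" followed by a decimal release number, e.g. "b8491"). The
# numeric part is compared, since plain string comparison would misorder
# tags with different digit counts.
#
#   #include <cstdlib>
#   #include <string>
#
#   // true when the detected build tag matches: version < "b8492"
#   bool is_vulnerable(const std::string & tag) {
#       if (tag.size() < 2 || tag[0] != 'b') return false;  // unknown format
#       long build = std::strtol(tag.c_str() + 1, nullptr, 10);
#       return build > 0 && build < 8492;
#   }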
references:
  - https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-j8rj-fmpv-wcxw
  - https://github.com/ggml-org/llama.cpp/commit/39bf0d3c6a95803e0f41aaba069ffbee26721042
  - https://github.com/ggml-org/llama.cpp/pull/20908
  - https://nvd.nist.gov/vuln/detail/CVE-2026-34159