llama.cpp
  • Source
    • Source
    • .devops
    • .github
    • ci
    • cmake
    • common
    • docs
    • examples
      • baby-llama
      • batched-bench
        • CMakeLists.txt
        • README.md
        • batched-bench.cpp
      • batched.swift
      • batched
      • benchmark
      • convert-llama2c-to-ggml
      • cvector-generator
      • embedding
      • eval-callback
      • export-lora
      • finetune
      • gbnf-validator
      • gguf-split
      • gguf
      • gritlm
      • imatrix
      • infill
      • jeopardy
      • llama-bench
      • llama.android
      • llama.swiftui
      • llava
      • lookahead
      • lookup
      • main-cmake-pkg
      • main
      • parallel
      • passkey
      • perplexity
      • quantize-stats
      • quantize
      • retrieval
      • rpc
      • save-load-state
      • server
      • simple
      • speculative
      • sycl
      • tokenize
      • train-text-from-scratch
      • CMakeLists.txt
      • Miku.sh
      • base-translate.sh
      • chat-13B.bat
      • chat-13B.sh
      • chat-persistent.sh
      • chat-vicuna.sh
      • chat.sh
      • convert-legacy-llama.py
      • json-schema-pydantic-example.py
      • json_schema_to_grammar.py
      • llama.vim
      • llm.vim
      • pydantic-models-to-grammar-examples.py
      • pydantic_models_to_grammar.py
      • reason-act.sh
      • regex-to-grammar.py
      • server-embd.py
      • server-llama2-13B.sh
      • ts-type-to-grammar.sh
    • ggml-cuda
    • ggml-sycl
    • gguf-py
    • grammars
    • kompute-shaders
    • media
    • models
    • pocs
    • prompts
    • requirements
    • scripts
    • spm-headers
    • tests
    • vulkan-shaders
    • .clang-tidy
    • .dockerignore
    • .ecrc
    • .editorconfig
    • .flake8
    • .gitignore
    • .gitmodules
    • .pre-commit-config.yaml
    • AUTHORS
    • CMakeLists.txt
    • CMakePresets.json
    • CONTRIBUTING.md
    • LICENSE
    • Makefile
    • Package.swift
    • README-sycl.md
    • README.md
    • SECURITY.md
    • codecov.yml
    • convert-hf-to-gguf-update.py
    • convert-hf-to-gguf.py
    • convert-llama-ggml-to-gguf.py
    • flake.lock
    • flake.nix
    • ggml-alloc.c
    • ggml-alloc.h
    • ggml-backend-impl.h
    • ggml-backend.c
    • ggml-backend.h
    • ggml-blas.cpp
    • ggml-blas.h
    • ggml-common.h
    • ggml-cuda.cu
    • ggml-cuda.h
    • ggml-impl.h
    • ggml-kompute.cpp
    • ggml-kompute.h
    • ggml-metal.h
    • ggml-metal.m
    • ggml-metal.metal
    • ggml-quants.c
    • ggml-quants.h
    • ggml-rpc.cpp
    • ggml-rpc.h
    • ggml-sycl.cpp
    • ggml-sycl.h
    • ggml-vulkan-shaders.hpp
    • ggml-vulkan.cpp
    • ggml-vulkan.h
    • ggml.c
    • ggml.h
    • ggml_vk_generate_shaders.py
    • kompute
    • llama.cpp
    • llama.h
    • mypy.ini
    • pyrightconfig.json
    • requirements.txt
    • sgemm.cpp
    • sgemm.h
    • unicode-data.cpp
    • unicode-data.h
    • unicode.cpp
    • unicode.h
/ examples / batched-bench /
  • CMakeLists.txt
  • README.md
  • batched-bench.cpp