LLM inference in C/C++
Run the following vcpkg command to install the port:
vcpkg install llama-cpp
Usage details are not available for this port.
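Since the port publishes no usage details, the following is a minimal sketch of consuming llama-cpp in vcpkg manifest mode. The CMake package name (`llama`) and exported target (`llama`) are assumptions based on llama.cpp's upstream CMake config and may differ in this port; check the installed `share/` directory for the actual config name.

```json
{
  "name": "my-app",
  "version": "0.1.0",
  "dependencies": [
    "llama-cpp"
  ]
}
```

```cmake
# CMakeLists.txt -- assumes the port exports a `llama` CMake package and target
cmake_minimum_required(VERSION 3.21)
project(my-app LANGUAGES CXX)

find_package(llama CONFIG REQUIRED)

add_executable(my-app main.cpp)
target_link_libraries(my-app PRIVATE llama)
```

Configure the project against vcpkg by passing its toolchain file, e.g. `cmake -B build -DCMAKE_TOOLCHAIN_FILE=<vcpkg-root>/scripts/buildsystems/vcpkg.cmake`; in manifest mode the dependency is installed automatically at configure time.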
Version: v4743#0
Updated: Mar 12, 2025
Supports: !android (all platforms except Android)
Repository: ggml-org/llama.cpp (github.com/ggml-org/llama.cpp)
Commit: cf72b50294
License: MIT