llama-cpp: LLM inference in C/C++
Run the following vcpkg command to install the port.
vcpkg install llama-cpp
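In manifest mode, the same dependency can instead be declared in the project's vcpkg.json. A minimal sketch (the project name and version are placeholders):

```json
{
  "name": "my-app",
  "version": "0.1.0",
  "dependencies": [
    "llama-cpp"
  ]
}
```

With this file at the project root, running `vcpkg install` in that directory resolves and builds the dependency for the active triplet.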
Usage details are not available for this port.
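Although the port publishes no usage instructions, upstream llama.cpp installs a CMake package that exports a `llama` target. Assuming the vcpkg port preserves that layout (an assumption, not confirmed by this page), a consuming CMakeLists.txt might look like:

```cmake
# Hedged sketch: package name "llama" and target "llama" are assumed
# from upstream llama.cpp's CMake install, not documented by this port.
find_package(llama CONFIG REQUIRED)
target_link_libraries(main PRIVATE llama)
```

If the names differ, the installed `share/` directory of the package reveals the actual CMake config file to use.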
Version: v4743#1 (Jun 3, 2025)
Source: ggml-org/llama.cpp (github.com/ggml-org/llama.cpp), commit 7e7032a82d
License: MIT