ggml-org/llama.cpp
LLM inference in C/C++ for running and serving models locally on various hardware
