A llama.cpp server hosting a reasoning model on CPU only.
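As a rough illustration of how such a CPU-only server might be started, the sketch below wraps the llama.cpp server binary in a small Python launcher. The binary path, model file, port, context size, and thread count are placeholder assumptions for illustration, not the Space's actual configuration.

```python
# Rough sketch of launching llama.cpp's server for CPU-only inference.
# Binary path, model file, port, and settings below are assumptions;
# adjust them to the actual setup.
import os
import subprocess

cmd = [
    "./llama-server",                     # assumed path to the llama.cpp server binary
    "-m", "models/reasoning-model.gguf",  # placeholder GGUF model file
    "--port", "8080",
    "--threads", str(os.cpu_count()),     # use all available CPU cores
    "--ctx-size", "4096",
]

# Run the server as the long-running foreground process behind the Space.
subprocess.run(cmd, check=True)
```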
Explore LLM performance across different hardware.
Displays a loading spinner while the Space is being prepared.
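To get a feel for performance on a given machine, a client can time a request against the server's OpenAI-compatible chat completions endpoint. The sketch below uses Python's requests library; the URL, model name, and prompt are placeholder assumptions, and the tokens-per-second figure is only a rough end-to-end estimate.

```python
# Minimal sketch of a client: query a llama.cpp server via its
# OpenAI-compatible /v1/chat/completions endpoint and derive a rough
# tokens-per-second figure. Host, port, model name, and prompt are
# placeholders, not values taken from this Space.
import time
import requests

SERVER_URL = "http://localhost:8080/v1/chat/completions"  # assumed default port

payload = {
    "model": "reasoning-model",  # placeholder name
    "messages": [
        {"role": "user", "content": "Explain step by step why the sky is blue."}
    ],
    "temperature": 0.7,
    "max_tokens": 512,
}

start = time.perf_counter()
response = requests.post(SERVER_URL, json=payload, timeout=600)
response.raise_for_status()
elapsed = time.perf_counter() - start

data = response.json()
print(data["choices"][0]["message"]["content"])

# If the server reports token usage, estimate end-to-end generation speed.
usage = data.get("usage")
if usage and usage.get("completion_tokens"):
    print(f"~{usage['completion_tokens'] / elapsed:.1f} tokens/s (end to end)")
```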