flask-llama/llama.cpp/ggml/src/ggml-cuda/template-instances/fattn-vec-f16-instance-hs64-f16-q5_1.cu
// This file has been autogenerated by generate_cu_files.py, do not edit manually.
#include "../fattn-vec-f16.cuh"
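// Editor's note (assumption, not part of the generated file): DECL_FATTN_VEC_F16_CASE(D, type_K, type_V)
// appears to instantiate the FP16 flash-attention vector kernel for head size D with the given
// K/V cache types; this instance covers head size 64 with an F16 K cache and a Q5_1 V cache.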
DECL_FATTN_VEC_F16_CASE(64, GGML_TYPE_F16, GGML_TYPE_Q5_1);