More storage #151
by noNyve - opened
I can't convert models above 14B parameters. I think that's due to the limited storage on the "GGUF My Repo" Space's servers. When I do try, I just get "Error quantizing: " with no further detail. Smaller models work, though.