model running speed
#4 opened about 2 months ago by gangqiang03
Why is the original LaMa model I am using not as good as your ONNX model? This is quite unusual.
#3 opened 7 months ago by MetaInsight
GPU inference
3
#1 opened 9 months ago by Crowlley