Recommended server endpoint

#49
by RonanMcGovern - opened

I note that prefix attention is not yet supported by vLLM — is that correct?

Is there a recommended inference library?
