Readme says 256k context, but config.json has "max_position_embeddings": 32768

#4
by stev236 - opened

Is that an error, or was it done for the convenience of users who run the model with default values on a single GPU?
If it's the latter, the README should mention that users may raise the limit up to 256K as needed.
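If the shipped 32768 is just a conservative default, one way to restore the full window is to patch `max_position_embeddings` in the local copy of `config.json` before loading. This is a minimal sketch, assuming 256K means 256 * 1024 = 262144 tokens and that `config.json` stands in for the path inside your model snapshot:

```python
import json

# Stand-in for the config.json inside your local model snapshot.
path = "config.json"

# For illustration only: write the shipped value so the sketch is self-contained.
with open(path, "w") as f:
    json.dump({"max_position_embeddings": 32768}, f)

# Read the config, raise the context limit, and write it back.
with open(path) as f:
    cfg = json.load(f)

cfg["max_position_embeddings"] = 262144  # 256K tokens, up from 32768

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```

Note that raising this value only lifts the configured cap; actually using a 256K context still requires enough GPU memory for the KV cache at that length.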

Thank you for your great contribution to the open local LLM community.
