cassanof committed on
Commit 9ee64ae · 1 Parent(s): 9a56eb9

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -48,7 +48,7 @@ model-index:
 ---
 # MultiPLCoder-15b
 
-15 billion parameter version of MultiPLCoder, a set of StarCoder-based models finetuned on the MultiPL-T dataset.
+15 billion parameter version of MultiPLCoder, a set of StarCoder-based models finetuned on the [MultiPL-T dataset](https://huggingface.co/datasets/nuprl/MultiPL-T).
 These models are state-of-the-art at low-resource languages, such as: Lua, Racket, and OCaml.
 
 This 15 billion parameter model is the most capable of the MultiPLCoder family. However, it requires a dedicated GPU for inference.
@@ -78,8 +78,8 @@ model = AutoModelForCausalLM.from_pretrained("nuprl/MultiPLCoder-15b", revision=
 
 Note that the model's default configuration does not enable caching, therefore you must specify to use the cache on generation.
 ```py
-toks = tokenizer.encode("-- Fibonacci iterative", return_tensors="pt")
-out = model.generate(toks, use_cache=True, do_sample=True, temperature=0.2, top_p=0.95, max_length=50)
+toks = tokenizer.encode("-- Fibonacci iterative", return_tensors="pt").cuda()
+out = model.generate(toks, use_cache=True, do_sample=True, temperature=0.2, top_p=0.95, max_length=256)
 print(tokenizer.decode(out[0], skip_special_tokens=True))
 ```
 ```
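
For context, the two changed snippet lines only make sense alongside the loading code visible in the second hunk header. Below is a minimal end-to-end sketch of the post-commit snippet; it is an illustration, not part of the commit. The `revision` value is truncated in the hunk header and is therefore omitted here, and both the `torch_dtype="auto"` choice and moving the model itself to CUDA are assumptions (the diff only shows the input tensor being moved).

```py
# Sketch of the updated README snippet; assumptions are noted inline.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nuprl/MultiPLCoder-15b")
# The 15B model requires a dedicated GPU. The dtype and device placement
# below are assumptions, since the diff truncates the from_pretrained call.
model = AutoModelForCausalLM.from_pretrained(
    "nuprl/MultiPLCoder-15b", torch_dtype="auto"
).cuda()

# Lua prompt; the default config disables caching, so pass use_cache=True.
toks = tokenizer.encode("-- Fibonacci iterative", return_tensors="pt").cuda()
out = model.generate(
    toks, use_cache=True, do_sample=True, temperature=0.2, top_p=0.95,
    max_length=256,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Read together, the two edits are complementary: `.cuda()` keeps the input tensor on the same device as the GPU-resident model, and raising `max_length` from 50 to 256 gives sampling enough room to finish a whole function rather than truncating it mid-definition.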