Hiroaki Hayashi committed · Commit db240a5 · Parent(s): 40d3f28

Update README.md

README.md CHANGED
@@ -17,7 +17,7 @@ This checkpoint (CodeGen-NL 350M) was pre-trained on [the Pile](https://github.c
 
 CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
 The family of models are trained using 4 TPU-v4 chips by Google, leveraging data and model parallelism.
-See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474)for more details.
+See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
 
 ## Evaluation results
 
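The objective named in the context lines above (cross-entropy over sequential inputs) is the standard causal language-modeling likelihood. As a minimal sketch, not taken from the CodeGen training code, the same loss can be reproduced with the released checkpoint by passing `labels` to the model; `transformers` shifts the labels internally and returns the mean next-token cross-entropy:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative sketch only (not the original TPU training loop): with labels
# set to the input ids, the causal-LM head returns the cross-entropy of each
# token given its preceding tokens, i.e. the negative log-likelihood that the
# training objective maximizes.
tokenizer = AutoTokenizer.from_pretrained('Salesforce/codegen-350M-nl')
model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-350M-nl')

input_ids = tokenizer("def hello_world():", return_tensors="pt").input_ids
with torch.no_grad():
    outputs = model(input_ids, labels=input_ids)
print(outputs.loss)  # mean cross-entropy; exp(loss) is the per-token perplexity
```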
@@ -37,12 +37,11 @@ This model can be easily loaded using the `AutoModelForCausalLM` functionality:
 from transformers import AutoTokenizer, AutoModelForCausalLM
 tokenizer = AutoTokenizer.from_pretrained('Salesforce/codegen-350M-nl')
 model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-350M-nl')
+
 text = "def hello_world():"
 input_ids = tokenizer(text, return_tensors="pt").input_ids
-# simply generate a single sequence
 generated_ids = model.generate(input_ids, max_length=128)
 print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
-# this prints "{user.username}"
 ```
 
 ## BibTeX entry and citation info
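The usage snippet in this hunk decodes greedily by default. As a hedged aside, not part of the model card itself, `model.generate` also accepts the standard `transformers` sampling arguments; a minimal sketch, with parameter values chosen purely for illustration:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('Salesforce/codegen-350M-nl')
model = AutoModelForCausalLM.from_pretrained('Salesforce/codegen-350M-nl')

input_ids = tokenizer("def hello_world():", return_tensors="pt").input_ids

# Sampling instead of the default greedy decoding; temperature/top_p values
# here are assumptions for illustration, not settings from the model card.
generated_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    max_length=128,
)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```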