Update README.md
README.md CHANGED

@@ -9,7 +9,7 @@ The model is trained on around ~11B tokens (64 size batch, 512 tokens, 350k step
 ## load tokenizer
 
 ```
->>> tokenizer = transformers.AutoTokenizer.from_pretrained("flax-community/bengali-t5-
+>>> tokenizer = transformers.AutoTokenizer.from_pretrained("flax-community/bengali-t5-base")
 >>> tokenizer.encode("আমি বাংলার গান গাই")
 >>> tokenizer.decode([93, 1912, 814, 5995, 3, 1])
 ```
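The corrected README snippet can be written as a plain script rather than a REPL transcript. This is a minimal sketch, assuming the `transformers` package is installed and the `flax-community/bengali-t5-base` checkpoint is reachable on the Hugging Face Hub; the decoded output of the sample id list is not asserted here.

```python
# Sketch of the README usage: load the tokenizer, encode Bengali text,
# and decode a list of token ids. Requires `transformers` and Hub access.
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained(
    "flax-community/bengali-t5-base"
)

# Encode a Bengali sentence ("I sing the song of Bengal") into token ids.
ids = tokenizer.encode("আমি বাংলার গান গাই")
print(ids)

# Decode a list of token ids (the sample ids from the README) back to text.
text = tokenizer.decode([93, 1912, 814, 5995, 3, 1])
print(text)
```

`AutoTokenizer.from_pretrained` downloads and caches the tokenizer files on first use, so later calls work offline from the local cache.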