This is a [t5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) model fine-tuned on the [wikisql dataset](https://huggingface.co/datasets/wikisql) for the **SQL**-to-**English** **translation** text2text task.

The model can be loaded like so (necessary packages: `pip install transformers sentencepiece`):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("dbernsohn/t5_wikisql_SQL2en")
model = AutoModelWithLMHead.from_pretrained("dbernsohn/t5_wikisql_SQL2en").cuda()

query = "SELECT name FROM table WHERE id = 1"  # illustrative query; substitute your own SQL
features = tokenizer([query], return_tensors="pt")

output = model.generate(input_ids=features["input_ids"].cuda())
tokenizer.decode(output[0])
```

The whole training process and hyperparameters are in my [GitHub repo](https://github.com/DorBernsohn/SQLM)

> Created by [Dor Bernsohn](https://www.linkedin.com/in/dor-bernsohn-70b2b1146/)
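For repeated translations, the snippet above can be wrapped in a small helper. This is only a sketch: the function name `sql_to_en`, the device handling, and `skip_special_tokens=True` are my additions, not part of the original card, and it assumes `tokenizer` and `model` are already loaded as shown above.

```python
def sql_to_en(query: str, tokenizer, model) -> str:
    """Translate one SQL query into an English description.

    `tokenizer` and `model` are assumed to be loaded as in the snippet above.
    """
    # Run on whatever device the model currently lives on (CPU or GPU).
    device = next(model.parameters()).device
    features = tokenizer([query], return_tensors="pt").to(device)
    output = model.generate(input_ids=features["input_ids"],
                            attention_mask=features["attention_mask"])
    # output[0] is the generated token-id sequence for the single input query.
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Called as `sql_to_en("SELECT name FROM table WHERE id = 1", tokenizer, model)`, it returns the decoded English string directly instead of leaving the token ids to the caller.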