Update README.md
README.md CHANGED
@@ -2,6 +2,7 @@ TAPEX model fine-tuned on WTQ.
 
 To load it and run inference, you can do the following:
 
+```
 from transformers import BartTokenizer, BartForConditionalGeneration
 import pandas as pd
 
@@ -31,4 +32,5 @@ encoding = tokenizer(joint_input, return_tensors="pt")
 outputs = model.generate(**encoding)
 
 # decode
-tokenizer.batch_decode(outputs, skip_special_tokens=True)
+tokenizer.batch_decode(outputs, skip_special_tokens=True)
+```
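For reference, a complete version of the snippet being fenced above might look roughly like the sketch below. Only fragments of the README's example are visible in this diff, so the checkpoint id, the example table, and the flattening format used to build `joint_input` are assumptions rather than the README's exact code.

```
# Minimal inference sketch reconstructed from the fragments in the diff above.
# The checkpoint id is a placeholder, and the table-flattening format is an
# assumption based on the `joint_input` variable in the second hunk header.
from transformers import BartTokenizer, BartForConditionalGeneration
import pandas as pd

model_name = "microsoft/tapex-base-finetuned-wtq"  # placeholder checkpoint id
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# A small example table and a question about it.
data = {
    "Actors": ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
    "Number of movies": ["87", "53", "69"],
}
table = pd.DataFrame.from_dict(data)
question = "how many movies does Leonardo Di Caprio have?"

def flatten_table(df: pd.DataFrame) -> str:
    """Serialize the table row by row so a seq2seq model can consume it."""
    parts = ["col : " + " | ".join(df.columns)]
    for i, (_, row) in enumerate(df.iterrows(), start=1):
        parts.append(f"row {i} : " + " | ".join(str(v) for v in row))
    return " ".join(parts)

# Concatenate the question with the flattened table into a single input string.
joint_input = question + " " + flatten_table(table)
encoding = tokenizer(joint_input, return_tensors="pt")

outputs = model.generate(**encoding)

# decode
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```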