Anshoo Mehra · commit a82dc16 (parent: aeeba8e) · Update README.md
results: []
---

# t5-v1_1-base-squadV2AutoQgen

## Model description

This model was fine-tuned from the base t5 v1.1 checkpoint on SQuAD 2.0 for automatic question generation (i.e. generating questions without answer hints).

## Intended uses & limitations

The model is expected to produce one or possibly more than one question from the provided context. If you are looking for a model that accepts answer hints as input (alone or combined with the context), those variants will be added soon and linked here.

This model can be used as follows:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_checkpoint = "anshoomehra/t5-v1_1-base-squadV2AutoQgen"
device = "cuda"  # or "cpu"

model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

# Input with the task prompt; replace <context> with your passage.
context = "question_context: <context>"
encodings = tokenizer.encode(context, return_tensors="pt", truncation=True, padding="max_length").to(device)

# You can tune many generation hyperparameters to condition the output.
output = model.generate(
    encodings,
    # max_length=300,
    # min_length=20,
    # length_penalty=2.0,
    num_beams=4,
    # early_stopping=True,
    # do_sample=True,
    # temperature=1.1,
)

# Multiple questions are expected to be delimited by </s>.
questions = [
    tokenizer.decode(ids, clean_up_tokenization_spaces=False, skip_special_tokens=False)
    for ids in output
]
```
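Because all questions for a context come back in a single decoded string, a small post-processing step is needed to split them apart. A minimal sketch using plain string handling (the sample string below is illustrative, not actual model output):

```python
def split_questions(decoded: str) -> list[str]:
    """Split decoded seq2seq output into individual questions.

    The model delimits questions with </s>, and <pad> tokens may remain
    in the text because decoding uses skip_special_tokens=False.
    """
    cleaned = decoded.replace("<pad>", "")
    parts = cleaned.split("</s>")
    # Trim whitespace and drop empty fragments left by the final delimiter.
    return [p.strip() for p in parts if p.strip()]

# Illustrative decoded string (not real model output):
decoded = "<pad> What is T5?</s> When was T5 released?</s>"
print(split_questions(decoded))  # → ['What is T5?', 'When was T5 released?']
```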

## Training and evaluation data

SQuAD 2.0 splits.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

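The card does not state the training stack; assuming the standard Hugging Face `Seq2SeqTrainer` was used, the list above maps onto a `Seq2SeqTrainingArguments` sketch like the following (the output directory name is illustrative, not from the card):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only — assumes the standard Hugging Face Trainer;
# "t5-qgen-out" is an illustrative output directory.
args = Seq2SeqTrainingArguments(
    output_dir="t5-qgen-out",
    learning_rate=3e-4,              # 0.0003
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # The Trainer's default optimizer already uses betas=(0.9, 0.999)
    # and epsilon=1e-08, matching the card.
)
```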
### Training results

The ROUGE metrics are heavily penalized because each target sample contains multiple questions.

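To see why, here is a toy unigram-overlap F1 (a hand-rolled stand-in for ROUGE-1; the question strings are invented examples, not dataset samples): even a perfectly generated single question overlaps only a fraction of a reference that holds several gold questions, capping its score well below 1.

```python
def unigram_f1(candidate: str, reference: str) -> float:
    """ROUGE-1-style F1 over whitespace unigrams (toy stand-in for ROUGE)."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    ref_counts: dict[str, int] = {}
    for w in ref:
        ref_counts[w] = ref_counts.get(w, 0) + 1
    # Clipped unigram overlap: each reference token can match at most once.
    overlap = 0
    for w in cand:
        if ref_counts.get(w, 0) > 0:
            ref_counts[w] -= 1
            overlap += 1
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# One correct generated question vs. a reference holding three gold questions:
generated = "who created t5 ?"
reference = "who created t5 ? when was t5 released ? what is t5 used for ?"
print(round(unigram_f1(generated, reference), 3))  # → 0.421
```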

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.0146 | 1.0 | 4758 | 1.6980 | 0.1430 | 0.0705 | 0.1257 | 0.1384 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 1.1733 | 9.0 | 23790 | 1.6319 | 0.1404 | 0.0718 | 0.1239 | 0.1351 |
| 1.1225 | 10.0 | 28548 | 1.6476 | 0.1407 | 0.0716 | 0.1245 | 0.1356 |

### Framework versions