bhargavis committed · Commit bb6bddd (verified) · 1 parent: 3ea03e2

Update README.md

Files changed (1): README.md (+2, −3)
README.md CHANGED
@@ -42,6 +42,8 @@ The small dataset size is intentional, as the focus is on few-shot learning rath
  - Max Input Length: 512 tokens
  - Max Output Length: 64 tokens
 
+ ##### Full-Shot learning model- For a more general-purpose summarization model, check out the full model trained on the entire XSUM dataset: [fulltrain-xsum-bart](https://huggingface.co/bhargavis/fulltrain-xsum-bart).
+
  ### Performance
  Due to the few-shot nature of this model, its performance is not directly comparable to models trained on the full XSUM dataset. However, it demonstrates the potential of few-shot learning for summarization tasks. Key metrics on the validation set (50 samples) include:
 
@@ -87,9 +89,6 @@ print(summary[0]["summary_text"])
  - The model is fine-tuned on BBC articles from the XSUM dataset. Its performance may vary on text from other domains.
  - The model may overfit to the training data due to the small dataset size.
 
- ##### Full-Shot learning model- For a more general-purpose summarization model, check out the full model trained on the entire XSUM dataset: [fulltrain-xsum-bart](https://huggingface.co/bhargavis/fulltrain-xsum-bart).
-
-
  ### Citation
  If you use this model in your research please cite it as follows:
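
For reference, the `print(summary[0]["summary_text"])` line in the second hunk's context comes from the model card's usage example, which relies on the `transformers` summarization pipeline. Below is a minimal sketch of that usage, not the card's exact snippet: it assumes the `transformers` library, uses the `bhargavis/fulltrain-xsum-bart` checkpoint linked in the added line (the few-shot model's own repo ID does not appear in this diff), and the `article` placeholder plus the 512/64 token limits follow the model card values quoted above.

```python
from transformers import pipeline

# Summarization pipeline; the full-dataset checkpoint linked in the diff is used
# as a stand-in. Substitute the few-shot model's own repo ID if desired.
summarizer = pipeline("summarization", model="bhargavis/fulltrain-xsum-bart")

# Placeholder input (assumption): any BBC-style news article as a plain string.
article = "Replace this with the article text you want to summarize."

# max_length mirrors the 64-token output limit from the model card;
# truncation=True enforces the 512-token input limit on the encoder side.
summary = summarizer(article, max_length=64, truncation=True)
print(summary[0]["summary_text"])
```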