houssamDev committed · Commit 7ec3d53 · verified · 1 Parent(s): b8dc257

Update README.md


🧠 Text Summarization Model Evaluation
This project evaluates a sequence-to-sequence Transformer model on the Wikitext-103 dataset using ROUGE metrics. The model was trained to perform abstractive text summarization.

🏋️ Training Performance
Training Loss: 3.4396

This is the average loss over the training run and suggests reasonable convergence.
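Assuming the reported value is a mean token-level cross-entropy (the usual training objective for language models, though this is not stated explicitly above), it can be converted to perplexity as a quick sanity check:

```python
import math

# Assumption: the reported training loss is mean token-level cross-entropy.
training_loss = 3.4396  # value from the section above

# Perplexity is the exponential of the cross-entropy loss.
perplexity = math.exp(training_loss)
print(f"perplexity ~ {perplexity:.2f}")  # ~ 31.2
```

A perplexity around 31 on Wikitext-scale data is plausible for a small distilled model, but this conversion only holds under the cross-entropy assumption.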

🧪 Validation Results
| Metric | Score |
|------------|--------|
| ROUGE-1 | 0.8325 |
| ROUGE-2 | 0.7163 |
| ROUGE-L | 0.8326 |
| ROUGE-Lsum | 0.8326 |

The high ROUGE scores on the validation set demonstrate that the model captures both unigram and bigram overlap effectively, while maintaining structural similarity with the target summaries.
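The unigram/bigram overlap behind ROUGE-1 and ROUGE-2 can be illustrated with a small pure-Python sketch. This is a simplified ROUGE-N F1 (whitespace tokenization, no stemming), not the official implementation used to produce the tables above:

```python
from collections import Counter

def rouge_n_f1(candidate: str, reference: str, n: int = 1) -> float:
    """Simplified ROUGE-N: F1 over clipped n-gram overlap."""
    def ngrams(text: str) -> Counter:
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # n-gram matches, clipped per n-gram
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_n_f1("a cat sat on the mat", "the cat sat on the mat", n=1))  # ~ 0.833
```

Setting `n=2` gives the ROUGE-2 analogue; bigram scores are typically lower than unigram scores, as in the table above.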

🧾 Test Results
| Metric | Score |
|------------|--------|
| ROUGE-1 | 0.7806 |
| ROUGE-2 | 0.6820 |
| ROUGE-L | 0.7805 |
| ROUGE-Lsum | 0.7805 |

The model generalizes well to unseen data; the slight drop relative to validation performance is expected.

📌 Notes
Model: replace this placeholder with the specific model name used (e.g., t5-base or bart-large).

Dataset: wikitext-103-raw-v1 from Hugging Face Datasets.

Evaluation Metric: ROUGE – commonly used in summarization tasks to measure the overlap between generated and reference texts.
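The ROUGE-L variant reported above scores the longest common subsequence between the generated and reference texts, so it rewards preserved word order rather than raw n-gram overlap. A minimal sketch (whitespace tokenization only, unlike the official implementation):

```python
def rouge_l_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-L: F1 over longest-common-subsequence length."""
    c, r = candidate.split(), reference.split()
    # Classic dynamic-programming LCS table.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, tc in enumerate(c, 1):
        for j, tr in enumerate(r, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if tc == tr else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# Same words, different order: the score drops below 1.0.
print(rouge_l_f1("the cat sat on the mat", "the cat on the mat sat"))
```

Identical candidate and reference texts score 1.0; reordering words lowers the LCS and hence the score, which is the "structural similarity" ROUGE-L captures.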

Files changed (1)
  1. README.md +17 -3
README.md CHANGED
@@ -1,3 +1,17 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ datasets:
+ - Salesforce/wikitext
+ language:
+ - en
+ metrics:
+ - rouge
+ base_model:
+ - distilbert/distilgpt2
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - wikipedia
+ - text-generation-inference
+ - gbt
+ ---