Lucas-Hyun-Lee committed on
Commit 8712d21 · verified · 1 Parent(s): a67a6f2

Update README.md

Files changed (1): README.md +1 -2
README.md CHANGED
@@ -36,8 +36,7 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 - **Input Format:** The model expects input in a text-to-text format. Specifically, you provide a prompt (e.g., the lecture content) and specify the desired task (e.g., “summarize”). The model then generates a summary as the output.
 - **Fine-Tuning:** The Lucas-Hyun-Lee/T5_small_lecture_summarization model has likely undergone fine-tuning on lecture-specific data. During fine-tuning, it learns to optimize its parameters for summarization by minimizing a loss function.
 - **Model Size:** As the name suggests, this is a small-sized variant of T5. Smaller models are computationally efficient and suitable for scenarios where memory or processing power is limited.
-- **Performance:** The model’s performance depends on the quality and diversity of the training data, as well as the specific lecture content it encounters during fine-tuning.
-- It should be evaluated based on metrics such as ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores.
+- **Performance:** The model’s performance depends on the quality and diversity of the training data, as well as the specific lecture content it encounters during fine-tuning. It should be evaluated based on metrics such as ROUGE (Recall-Oriented Understudy for Gisting Evaluation) scores.
 
 ### Model Sources [optional]
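The bullets in the diff describe the text-to-text input format (a "summarize" task prefix on the prompt) and ROUGE-based evaluation. A minimal sketch of both is below; the `pipeline` call is commented out because it assumes the Lucas-Hyun-Lee/T5_small_lecture_summarization checkpoint is reachable on the Hugging Face Hub, and the ROUGE-N function is a simplified illustration (whitespace tokenization, no stemming), not a replacement for a real ROUGE package.

```python
from collections import Counter


def build_prompt(lecture_text: str) -> str:
    # T5 checkpoints condition generation on a task prefix;
    # "summarize: " is the conventional prefix for summarization.
    return "summarize: " + lecture_text.strip()


def rouge_n(candidate: str, reference: str, n: int = 1) -> dict:
    # Minimal ROUGE-N sketch: n-gram overlap recall/precision/F1.
    def grams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = grams(candidate), grams(reference)
    overlap = sum((cand & ref).values())          # clipped n-gram matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return {"recall": recall, "precision": precision, "f1": f1}


prompt = build_prompt("Today's lecture covered gradient descent and learning rates.")

# Generating the summary itself requires downloading the checkpoint:
# from transformers import pipeline
# summarizer = pipeline("summarization",
#                       model="Lucas-Hyun-Lee/T5_small_lecture_summarization")
# summary = summarizer(prompt, max_length=60)[0]["summary_text"]

# Score a hypothetical model summary against a reference summary.
scores = rouge_n("the lecture covered gradient descent",
                 "the lecture covered gradient descent and learning rates")
```

In practice, reported ROUGE numbers for summarization models come from a standard implementation such as the `rouge_score` package, which adds stemming and ROUGE-L; the function above only shows what the n-gram overlap computation looks like.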