---
library_name: transformers
tags: [summarization, transformers, t5, fine-tuning, custom-dataset, text-generation]
---

# Model Card for Saravanankumaran/summarisation_model

A t5-base model fine-tuned for dialogue summarization on the SAMSum dataset.


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is a T5 model fine-tuned for text summarization on the SAMSum dataset. It was trained with 🤗 Transformers and the Hugging Face Trainer using mixed precision (fp16) for memory efficiency; a representative training setup is sketched below.

- **Developed by:** Saravanan K
- **Finetuned from model:** t5-base
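
The exact hyperparameters used for this model are not recorded in this card, so the sketch below is only a representative `Seq2SeqTrainer` setup for fine-tuning t5-base on SAMSum with fp16; every value in it is an assumption.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Illustrative setup only; the actual hyperparameters are not documented here.
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

dataset = load_dataset("samsum")  # may require `pip install py7zr`

def preprocess(batch):
    # Dialogues are the inputs; human-written summaries are the labels
    model_inputs = tokenizer(batch["dialogue"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="t5-samsum",
    per_device_train_batch_size=8,  # assumed batch size
    num_train_epochs=3,             # assumed epoch count
    fp16=True,                      # mixed precision, as described above
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```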

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/SARAVANANVIJAY123/DL-Assessment/blob/main/DL-L%26D%20CODE.ipynb


## Use Cases

### Direct Use

This model can be used for text summarization tasks, particularly for summarizing dialogues and conversations.
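
For quick experiments, the high-level `pipeline` API wraps the same checkpoint; a minimal sketch (the example dialogue is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the summarization pipeline
summarizer = pipeline("summarization", model="Saravanankumaran/summarisation_model")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)

print(summarizer(dialogue, max_new_tokens=60)[0]["summary_text"])
```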

### Downstream Use

The model can be fine-tuned further on other summarization datasets (a representative training setup is sketched above) or used in larger NLP applications that require summarization.

### Out-of-Scope Use

The model may not perform well on non-dialogue-based text or non-English languages.



## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- **Biases:** Trained on the SAMSum dataset, so it may reflect biases present in that conversational English data.
- **Limitations:** Performance may degrade on texts that differ significantly from the training data.



## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Model ID on the Hugging Face Hub
model_name = "Saravanankumaran/summarisation_model"

# Load the fine-tuned model and tokenizer from the Hub
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

print("Model loaded successfully! ✅")

# Example dialogue to summarize
text = """
Laxmi Kant: what work you planning to give Tom?
Juli: i was hoping to send him on a business trip first.
Laxmi Kant: cool. is there any suitable work for him?
Juli: he did excellent in last quarter. i will assign new project, once he is back.
"""

inputs = tokenizer(text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)  # cap the summary length
summary = tokenizer.decode(output[0], skip_special_tokens=True)

print("Generated summary:", summary)
```