# Delete README (1).md (#2)

Opened by aishu1505.

**README (1).md**: DELETED (+0 −65)
---
license: mit
datasets: Hemanth-thunder/en_ta
language:
- ta
- en
widget:
- text: A room without books is like a body without a soul
- text: hardwork never fail
- text: Be the change that you wish to see in the world.
- text: i love seeing moon
pipeline_tag: text2text-generation
---

# English to Tamil Translation Model

This model translates English sentences into Tamil using a fine-tuned version of the [Mr-Vicky](https://huggingface.co/Mr-Vicky-01/Fine_tune_english_to_tamil) model available on the Hugging Face model hub.

## About the Authors

This model was developed by [suriya7](https://huggingface.co/suriya7) in collaboration with [Mr-Vicky](https://huggingface.co/Mr-Vicky-01).

## Usage

To use this model, you can either call it directly through the Hugging Face `transformers` library or query it via the Hugging Face Inference API.
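The Inference API route mentioned above can be sketched with nothing more than the Python standard library. This is a minimal illustration, assuming the hosted Inference API endpoint pattern for the `suriya7/English-to-Tamil` checkpoint; `build_request` is a hypothetical helper introduced here, not part of the model's code:

```python
import json
import urllib.request

# Hosted Inference API endpoint for this checkpoint (assumed URL pattern)
API_URL = "https://api-inference.huggingface.co/models/suriya7/English-to-Tamil"

def build_request(text, token):
    """Build an HTTP request for the hosted Inference API (illustrative helper)."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(API_URL, data=payload, headers=headers)

# To actually call the API (requires a valid access token and network access):
# req = build_request("hardwork never fail", "<your-hf-token>")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```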
### Model Information

Training details:

- **Task:** fine-tuned for English-to-Tamil translation
- **Training duration:** over 10 hours
- **Final training loss:** 0.6
- **Architecture:** based on the Transformer architecture, optimized for sequence-to-sequence tasks

### Installation

To use this model, you'll need the `transformers` library installed. You can install it via pip:

```bash
pip install transformers
```

### Via Transformers Library

You can use this model in your Python code like this:

## Inference

1. **How to use the model in a notebook:**
```python
# Load the tokenizer and model directly from the Hugging Face Hub
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "suriya7/English-to-Tamil"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    """Translate a single English sentence into Tamil."""
    tokenized = tokenizer([text], return_tensors='pt')
    with torch.no_grad():  # inference only, no gradients needed
        out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "hardwork never fail"
output = language_translator(text_to_translate)
print(output)
```