---
license: apache-2.0
language:
- ta
- en
pipeline_tag: text2text-generation
datasets:
- aishu15/aryaumeshl
---
## Usage

To use this model, you can either load it locally with the Hugging Face `transformers` library or call it remotely via the Hugging Face Inference API.
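For the remote route, a minimal sketch using `requests` is shown below. The endpoint URL pattern and the `[{"translation_text": ...}]` response shape follow the standard Inference API convention for translation models; the token placeholder is hypothetical and you must supply your own Hugging Face access token.

```python
# Hypothetical sketch: call the hosted Inference API instead of loading
# the model locally. Requires a Hugging Face access token.
import requests

API_URL = "https://api-inference.huggingface.co/models/aishu15/English-to-Tamil"

def translate_remote(text, token):
    """Send English text to the hosted endpoint and return the Tamil output."""
    headers = {"Authorization": f"Bearer {token}"}
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    response.raise_for_status()
    # Translation endpoints return a list of {"translation_text": ...} dicts.
    return response.json()[0]["translation_text"]

if __name__ == "__main__":
    print(translate_remote("Hard work never fails", token="hf_your_token"))
```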
### Model Information

Training details:

- Fine-tuned for English-to-Tamil translation.
- Training duration: over 10 hours.
- Final training loss: 0.6.
- Architecture: based on the Transformer architecture, optimized for sequence-to-sequence tasks.
### Installation

To use this model, you'll need the `transformers` library installed, along with PyTorch (the example below returns PyTorch tensors). You can install both via pip:

```bash
pip install transformers torch
```
### Via the Transformers Library

You can use the model in your Python code like this:

```python
# Load the model and tokenizer directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "aishu15/English-to-Tamil"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    """Translate an English sentence into Tamil."""
    tokenized = tokenizer([text], return_tensors="pt")
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "Hard work never fails"
output = language_translator(text_to_translate)
print(output)
```
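Alternatively, the high-level `pipeline` API wraps the same tokenize/generate/decode steps in one call. Treating this checkpoint as a `translation` pipeline is an assumption of this sketch, not something stated elsewhere on this card:

```python
# Sketch: the same translation via the transformers pipeline API.
from transformers import pipeline

def build_translator(checkpoint="aishu15/English-to-Tamil"):
    # Downloads the checkpoint on first use.
    return pipeline("translation", model=checkpoint)

if __name__ == "__main__":
    translator = build_translator()
    # The pipeline returns a list of {"translation_text": ...} dicts.
    print(translator("Hard work never fails", max_length=128)[0]["translation_text"])
```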