---
library_name: transformers
license: apache-2.0
language:
- mn
---

# Model Card for onlysainaa/cyrillic_to_script-t5-model

<!-- Provide a quick summary of what the model is/does. -->

A 🤗 Transformers sequence-to-sequence model, finetuned from google-t5-small, that converts Mongolian Cyrillic text into the traditional Mongolian script.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Sainbayar B. (Б. Сайнбаяр), https://www.instagram.com/only_sainaa/
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Mongolian Cyrillic to traditional Mongolian script conversion (sequence-to-sequence)
- **Language(s) (NLP):** Mongolian (mn)
- **License:** apache-2.0
- **Finetuned from model [optional]:** google-t5-small


## How to Get Started with the Model

Use the code below to get started with the model.
```python
# Load the model directly
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("onlysainaa/cyrillic_to_script-t5-model")
model = AutoModelForSeq2SeqLM.from_pretrained("onlysainaa/cyrillic_to_script-t5-model")

# Use the GPU when CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model to the chosen device
model.to(device)

# Prepare the text input
input_text = "сайн уу"  # Mongolian greeting ("hello")

# Tokenize the input text
inputs = tokenizer(input_text, return_tensors="pt")

# Move the input tensors to the same device as the model
inputs = {k: v.to(device) for k, v in inputs.items()}

# Generate the conversion
outputs = model.generate(**inputs)

# Decode the output to human-readable text
translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Print the converted text
print(f"Translated Text: {translated_text}")
```
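To convert many sentences at once, the per-string calls above can be wrapped in a small batched helper. This is a minimal sketch, assuming the `tokenizer` and `model` objects loaded in the snippet above; the `cyrillic_to_script` name and the `max_new_tokens` default are illustrative, not part of the released model.

```python
def cyrillic_to_script(texts, tokenizer, model, max_new_tokens=128):
    """Convert a batch of Mongolian Cyrillic strings to traditional script.

    Sketch helper: works with any transformers-style seq2seq
    tokenizer/model pair, such as the checkpoint loaded above.
    """
    # Tokenize the whole batch at once; padding aligns sequence lengths.
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    # transformers models expose the device they live on via `model.device`.
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```

For example, `cyrillic_to_script(["сайн уу", "баяртай"], tokenizer, model)` returns one converted string per input, and a single padded `generate` call is typically much faster than looping over the sentences one at a time.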