Commit f24e8a5 by Iker · 1 Parent(s): 0dccfd5
Files changed (1)
README.md +5 -5
README.md CHANGED

@@ -44,8 +44,8 @@ See the [Supported languages table](supported_languages.md) for a table of the s
 
 ## Supported Models
 
-💥 EasyTranslate now supports any Seq2SeqLM (m2m100, nllb200, small100, mbart, MarianMT, T5, FlanT5, etc.) and any CausalLM (GPT2, LLaMA, Vicuna, Falcon) model from HuggingFace's Hub!!
-We still recommend you to use M2M100 or NLLB200 for the best results, but you can experiment with other LLMs and prompting to generate translations. See [Prompting Section](#prompting) for more information.
+💥 EasyTranslate now supports any Seq2SeqLM (m2m100, nllb200, small100, mbart, MarianMT, T5, FlanT5, etc.) and any CausalLM (GPT2, LLaMA, Vicuna, Falcon) model from 🤗 Hugging Face's Hub!!
+We still recommend you to use M2M100 or NLLB200 for the best results, but you can experiment with any other MT model, as well as prompting LLMs to generate translations (See [Prompting Section](#prompting) for more details).
 You can also see [the examples folder](examples) for examples of how to use EasyTranslate with different models.
 
 ### M2M100
@@ -74,7 +74,7 @@ You can also see [the examples folder](examples) for examples of how to use Easy
 - **facebook/nllb-200-distilled-600M**: <https://huggingface.co/facebook/nllb-200-distilled-600M>
 
 ### Other MT Models supported
-We support every MT model in the 🤗 Hugging Face's Hub. If you find one that doesn't work, please open an issue for us to fix it or a PR with the fix. This includes, among many others:
+We support every MT model in the 🤗 Hugging Face's Hub. If you find a model that doesn't work, please open an issue for us to fix it or a PR with the fix. This includes, among many others:
 - **Small100**: <https://huggingface.co/alirezamsh/small100>
 - **Mbart many-to-many / many-to-one**: <https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt>
 - **Opus MT**: <https://huggingface.co/Helsinki-NLP/opus-mt-es-en>
@@ -112,10 +112,10 @@ HuggingFace Transformers
 If you plan to use NLLB200, please use >= 4.28.0, as an important bug was fixed in this version.
 pip install --upgrade transformers
 
-BitsAndBytes (Optional, for 8-bits / 4bits quantization)
+BitsAndBytes (Optional, required for 8-bit / 4-bit quantization)
 pip install bitsandbytes
 
-PEFT (Optional, for LoRA models)
+PEFT (Optional, required for loading LoRA models)
 pip install peft
 ```
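The updated README text mentions prompting CausalLMs (GPT2, LLaMA, Vicuna, Falcon) to generate translations. As a purely illustrative sketch of what such prompting involves, the snippet below assembles a few-shot translation prompt; the function name and prompt layout are hypothetical and are not EasyTranslate's actual API (see the repository's Prompting section for the real interface).

```python
# Illustrative only: build a few-shot prompt that asks a causal LM to
# translate a sentence. The function name and format are assumptions,
# not EasyTranslate's API.

def build_translation_prompt(sentence, src="Spanish", tgt="English", examples=None):
    """Assemble a few-shot prompt asking an LLM to translate `sentence`."""
    # Default to a single in-context example pair.
    examples = examples or [("Hola mundo", "Hello world")]
    lines = [f"Translate {src} to {tgt}."]
    for s, t in examples:
        lines.append(f"{src}: {s}\n{tgt}: {t}")
    # End with the target-language tag so the LM continues with the translation.
    lines.append(f"{src}: {sentence}\n{tgt}:")
    return "\n".join(lines)

prompt = build_translation_prompt("Buenos días")
print(prompt)
```

The resulting string would be passed to any CausalLM's `generate` method; the model's continuation after the final `English:` tag is taken as the translation.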