Abhaykoul committed
Commit 364fab2 · verified · 1 Parent(s): aeec47a

Update README.md

Files changed (1): README.md (+1, -0)
README.md CHANGED
@@ -13,6 +13,7 @@ We're proud to be part of the [Hugging Face](https://huggingface.co/) community.
  2. **OEvortex/HelpingAI-Lite**: A lightweight version of our Text Generation model, optimized for speed and efficiency. It's perfect for tasks where computational resources are limited (see the usage sketch after the diff).
  3. **OEvortex/HelpingAI-Lite-chat**: A conversational model with 1 billion parameters, fine-tuned from HelpingAI and Falcon.
  4. **OEvortex/HelpingAI-Lite-2x1B**: A Mixture of Experts (MoE) model that surpasses HelpingAI-Lite in accuracy while running marginally slower, making it the better choice when accuracy matters more than processing time.
+ 5. **OEvortex/HelpingAI-Vision**: Uses the LLaVA adapter and the full SigLIP encoder to generate one token embedding per N parts of an image, enhancing scene understanding by capturing more detailed information (a conceptual sketch of this tiling follows the diff).

  ### Datasets
  1. **OEvortex/Bhagavad_Gita**: This dataset provides a comprehensive collection of verses from the Bhagavad Gita, a 700-verse Hindu scripture.
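
The added HelpingAI-Vision entry describes one token embedding per N parts of an image. As a rough illustration of that idea, the sketch below tiles an image into a grid and crops one region per cell; the commit does not specify the actual pipeline details (LLaVA adapter over a SigLIP encoder) or the grid size, so the helper and its parameters are assumptions.

```python
from PIL import Image

def split_into_parts(image: Image.Image, rows: int = 2, cols: int = 2) -> list[Image.Image]:
    """Crop the image into rows*cols tiles, one per grid cell (illustrative only)."""
    width, height = image.size
    tiles = []
    for r in range(rows):
        for c in range(cols):
            box = (
                c * width // cols,         # left
                r * height // rows,        # top
                (c + 1) * width // cols,   # right
                (r + 1) * height // rows,  # bottom
            )
            tiles.append(image.crop(box))
    return tiles

# Each tile would then go through the vision encoder, yielding one embedding
# per tile (N region-level tokens) instead of a single whole-image token.
tiles = split_into_parts(Image.new("RGB", (448, 448)))
print(len(tiles))  # 4 tiles -> 4 embeddings in this sketch
```

Encoding regions separately is what lets the language model attend to finer-grained detail than a single global image embedding provides.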
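For the text models listed above, a minimal loading sketch, assuming they expose the standard Hugging Face transformers causal-LM interface (the prompt and generation settings are illustrative, not from the README):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite"  # any text model ID from the list above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What can you help me with?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```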