Abhaykoul committed
Commit 9e950f7 · verified · 1 Parent(s): 05c31c9

Update README.md

Files changed (1)
  1. README.md +31 -23
README.md CHANGED
@@ -32,37 +32,39 @@ widget:
 Explain me working of AI .</s>
 <|assistant|>
 ---
-
-# HelpingAI-Lite-1T
-# Subscribe to my YouTube channel
-[Subscribe](https://youtube.com/@OEvortex)
-
-
-HelpingAI-Lite is a lite version of the HelpingAI model that can assist with coding tasks. It's trained on a diverse range of datasets and fine-tuned to provide accurate and helpful responses.
-
-## License
-
-This model is licensed under MIT.
-
-## Datasets
-
-The model was trained on the following datasets:
+🌟 **HelpingAI-Lite-1.5T Model Card** 🌟
+
+πŸ“Š **Datasets used:**
 - cerebras/SlimPajama-627B
-- bigcode/starcoderdata
 - HuggingFaceH4/ultrachat_200k
+- bigcode/starcoderdata
 - HuggingFaceH4/ultrafeedback_binarized
 - OEvortex/vortex-mini
 - Open-Orca/OpenOrca
 
-
-## Language
-
-The model supports English language.
-
-## Usage
-
-# CPU and GPU code
-
+πŸ—£οΈ **Language:**
+- English (en)
+
+πŸ“ˆ **Metrics:**
+- Accuracy
+- Speed
+
+πŸ“š **Library Name:**
+- transformers
+
+🏷️ **Tags:**
+- Coder
+- Text-Generation
+- Transformers
+- HelpingAI
+
+πŸ”’ **License:**
+- MIT
+
+🧠 **Model Overview:**
+HelpingAI-Lite-1.5T is an advanced version of the HelpingAI-Lite model, trained on a vast corpus of 1.5 trillion tokens. This extensive training data enables the model to provide precise and insightful responses, particularly for coding tasks.
+
+πŸ”§ **Usage Example:**
 ```python
 from transformers import pipeline
 from accelerate import Accelerator
@@ -71,17 +73,17 @@ from accelerate import Accelerator
 accelerator = Accelerator()
 
 # Initialize the pipeline
-pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite", device=accelerator.device)
+pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite-1.5T", device=accelerator.device)
 
 # Define the messages
 messages = [
     {
         "role": "system",
-        "content": "You are a chatbot who can help code!",
+        "content": "You are interacting with a sophisticated chatbot model optimized for coding tasks!",
     },
     {
         "role": "user",
-        "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.",
+        "content": "Please generate a Python function that calculates the factorial of a given number.",
     },
 ]
 
@@ -93,4 +95,10 @@ outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_
 
 # Print the generated text
 print(outputs[0]["generated_text"])
-```
+```
+
+πŸ’‘ **Features:**
+- Exceptional accuracy and speed
+- Trained on a massive 1.5 trillion token dataset
+- Specialized for text-generation tasks in coding
+- Fine-tuned for optimal performance in English language task
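
The last hunk header shows that both versions generate with `pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_…)`, but the line that builds `prompt` from `messages` falls outside the visible diff context. The widget text at the top of the first hunk uses `</s>` and `<|assistant|>` markers, which suggests a Zephyr-style chat template. As a dependency-free illustration only — `build_prompt` is a hypothetical helper inferred from those markers, not code from the model card — the formatting step might look like:

```python
def build_prompt(messages):
    # Hypothetical helper: formats chat messages in the Zephyr-style
    # template implied by the `</s>` / `<|assistant|>` markers in the
    # widget text above. Real code would normally use the tokenizer's
    # apply_chat_template instead of hand-rolling the format.
    parts = [f"<|{m['role']}|>\n{m['content']}</s>" for m in messages]
    parts.append("<|assistant|>")  # generation prompt for the model's turn
    return "\n".join(parts)

messages = [
    {
        "role": "system",
        "content": "You are interacting with a sophisticated chatbot model optimized for coding tasks!",
    },
    {
        "role": "user",
        "content": "Please generate a Python function that calculates the factorial of a given number.",
    },
]

prompt = build_prompt(messages)
print(prompt)
```

With an actual pipeline loaded, the idiomatic equivalent would be `prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which applies whatever template the model's tokenizer actually defines.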