Abhaykoul committed (verified) · Commit 05c31c9 · 1 parent: 3763e43

Update README.md

Files changed (1): README.md (+84 −17)

README.md CHANGED
---
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- bigcode/starcoderdata
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca
language:
- en
metrics:
- accuracy
- speed
library_name: transformers
tags:
- coder
- Text-Generation
- Transformers
- HelpingAI
license: mit
widget:
- text: |
    <|system|>
    You are a chatbot who can code!</s>
    <|user|>
    Write me a function to search for OEvortex on YouTube using the webbrowser module.</s>
    <|assistant|>
- text: |
    <|system|>
    You are a chatbot who can be a teacher!</s>
    <|user|>
    Explain to me how AI works.</s>
    <|assistant|>
---

# HelpingAI-Lite-1T

## Subscribe to my YouTube channel

[Subscribe](https://youtube.com/@OEvortex)

HelpingAI-Lite is a lite version of the HelpingAI model that can assist with coding tasks. It's trained on a diverse range of datasets and fine-tuned to provide accurate and helpful responses.
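
The widget examples in the metadata above use a Zephyr/TinyLlama-style chat format with `<|system|>`, `<|user|>`, and `<|assistant|>` role tags, each turn closed by the `</s>` end-of-sequence token. As a minimal sketch of that format (the `build_prompt` helper below is illustrative, not part of the model's API), a prompt can be assembled by hand:

```python
def build_prompt(system: str, user: str) -> str:
    # Each turn opens with a role tag and closes with the </s> token;
    # the trailing <|assistant|> tag cues the model to generate its reply.
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt(
    "You are a chatbot who can code!",
    "Write me a function to search for OEvortex on YouTube using the webbrowser module.",
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` (as shown in the Usage section) is the safer way to build prompts, since it reads the template shipped with the model.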

## License

This model is licensed under the MIT License.

## Datasets

The model was trained on the following datasets:

- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- OEvortex/vortex-mini
- Open-Orca/OpenOrca

## Language

The model supports English.

## Usage

### CPU and GPU code

```python
from transformers import pipeline
from accelerate import Accelerator

# Initialize the accelerator (picks CPU or GPU automatically)
accelerator = Accelerator()

# Initialize the text-generation pipeline on the selected device
pipe = pipeline("text-generation", model="OEvortex/HelpingAI-Lite", device=accelerator.device)

# Define the messages
messages = [
    {
        "role": "system",
        "content": "You are a chatbot who can help code!",
    },
    {
        "role": "user",
        "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.",
    },
]

# Prepare the prompt using the model's chat template
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate a completion with sampling
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# Print the generated text (prompt plus completion)
print(outputs[0]["generated_text"])
```
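
Note that `generated_text` echoes the full prompt before the model's completion. Assuming the Zephyr-style role tags shown in this card's widget examples, the assistant's reply can be recovered by splitting on the final `<|assistant|>` tag (an illustrative helper, not part of the transformers API):

```python
def extract_reply(generated_text: str) -> str:
    # The pipeline output repeats the prompt, so everything after the
    # last <|assistant|> tag is the model's own reply.
    reply = generated_text.rsplit("<|assistant|>", 1)[-1]
    return reply.strip()

sample = "<|system|>\nYou are helpful.</s>\n<|user|>\nHi!</s>\n<|assistant|>\nHello there!"
print(extract_reply(sample))  # → Hello there!
```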