alinemati committed
Commit f358dc2 · verified · 1 Parent(s): ffb6506

Upload README.md with huggingface_hub
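The commit message indicates the model card was pushed with the huggingface_hub Python client. As a rough illustration only (the repo id below is a placeholder, not taken from this commit), such an upload typically looks like:

```python
# Minimal sketch of uploading a README.md with huggingface_hub.
# The repo_id is a placeholder assumption, not the repository behind this commit.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`
api.upload_file(
    path_or_fileobj="README.md",          # local file to push
    path_in_repo="README.md",             # destination path inside the repo
    repo_id="your-username/your-model",   # placeholder repo id
    repo_type="model",
    commit_message="Upload README.md with huggingface_hub",
)
```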

Files changed (1)
  1. README.md +13 -19
README.md CHANGED
@@ -13,28 +13,22 @@ tags:
  - bnb
 ---
 
- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
-
- We have a Google Colab Tesla T4 notebook for Mistral 7b here: https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing
-
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
-
- ## ✨ Finetune for Free
-
- All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
-
- | Unsloth supports | Free Notebooks | Performance | Memory use |
- |------------------|----------------|-------------|------------|
- | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
- | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
- | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
- | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
- | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
- | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
- | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
-
- - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
+ **osllm.ai Models Highlights Program**
+
+ **We believe there's no need to pay per token if you have a GPU on your computer.**
+
+ Highlighting new and noteworthy models from the community. Join the conversation on Discord.
+
+ <p align="center">
+    <a href="https://osllm.ai">Official Website</a> &bull; <a href="https://docs.osllm.ai/index.html">Documentation</a> &bull; <a href="https://discord.gg/2fftQauwDD">Discord</a>
+ </p>
+
+ <p align="center">
+    <b>NEW:</b> <a href="https://docs.google.com/forms/d/1CQXJvxLUqLBSXnjqQmRpOyZqD6nrKubLz2WTcIJ37fU/prefill">Subscribe to our mailing list</a> for updates and news!
+ </p>
+
+ **Disclaimers**
+
+ osllm.ai is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. osllm.ai does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that is offensive, harmful, inaccurate, deceptive, or otherwise inappropriate. Each Community Model is the sole responsibility of the person or entity who originated it. osllm.ai may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. osllm.ai disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. osllm.ai further disclaims any warranty that a Community Model will meet your requirements, be secure, uninterrupted, available at any time or location, error-free, or virus-free, or that any errors will be corrected. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or your use of any other Community Model provided by or through osllm.ai.
+ osllm.ai is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. osllm.ai does not endorse, support, represent, or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate, or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. osllm.ai may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. osllm.ai disclaims all warranties or guarantees about the accuracy, reliability, or benefits of the Community Models. osllm.ai further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted, or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through osllm.ai.