---
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tags:
- Qwen2
- DeepSeek
---
# DeepSeek-R1-Distill-Qwen-7B Quantized Models

This repository contains Q4_KM and Q5_KM quantized versions of the [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) model, optimized for efficient deployment while maintaining strong performance.

Discover our full range of quantized language models on the [SandLogic Lexicon HuggingFace](https://huggingface.co/SandLogicTechnologies) page. To learn more about our company and services, visit [SandLogic](https://www.sandlogic.com/).

## Model Description

These models are quantized versions of DeepSeek-R1-Distill-Qwen-7B, a distilled 7B-parameter model based on the Qwen architecture. The model demonstrates that reasoning patterns from larger models can be distilled effectively into smaller architectures, yielding strong performance on a range of benchmarks.

### Key Features
- Fine-tuned using DeepSeek-R1-generated reasoning data
- Modified configurations and tokenizer optimized for performance
- Maintains strong reasoning capabilities while reducing model size
- Suitable for research and production deployment
### Available Quantized Versions

1. **Q4_KM Version**
   - 4-bit quantization using the K-means method
   - Approximately 4 GB model size
   - Optimal balance between model size and performance
   - Recommended for resource-constrained environments

2. **Q5_KM Version**
   - 5-bit quantization using the K-means method
   - Approximately 4.5 GB model size
   - Higher precision than Q4 while maintaining significant size reduction
   - Recommended when higher accuracy is needed
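To fetch one of these files programmatically, you can use the `huggingface_hub` client. This is a minimal sketch: the `repo_id` and `filename` values below are illustrative placeholders, so substitute the actual repository ID and GGUF file name from this repository's file listing.

```python
from huggingface_hub import hf_hub_download

# Download a quantized GGUF file from the Hugging Face Hub.
# NOTE: repo_id and filename are placeholders; replace them with the
# actual values shown in this repository's "Files" tab.
model_path = hf_hub_download(
    repo_id="SandLogicTechnologies/DeepSeek-R1-Distill-Qwen-7B",  # placeholder
    filename="DeepSeek-R1-Distill-Qwen-7B-Q4_KM.gguf",            # placeholder
)

print(model_path)  # local path to pass to llama-cpp-python
```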
## Usage

Install the llama-cpp-python package:

```bash
pip install llama-cpp-python
```

Please refer to the llama-cpp-python [documentation](https://llama-cpp-python.readthedocs.io/en/latest/) to install with GPU support.
### Basic Text Completion

Here's an example demonstrating how to use the high-level API for basic text completion:

```python
from llama_cpp import Llama

# Load the quantized GGUF model; point model_path at the downloaded file.
llm = Llama(
    model_path="model/path/",
    verbose=False,
    # n_gpu_layers=-1,  # Uncomment to use GPU acceleration
    # n_ctx=2048,       # Uncomment to increase the context window
)

# Example of a reasoning task
output = llm(
    "Q: Explain the concept of natural selection in simple terms. A: ",
    max_tokens=256,
    stop=["Q:", "\n\n"],
    echo=False,
)

print(output["choices"][0]["text"])
```
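### Chat Completion

llama-cpp-python also exposes a chat-style API. The sketch below assumes the chat template embedded in the GGUF file is used; the system prompt and messages are illustrative only.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="model/path/",
    verbose=False,
)

# Multi-turn chat using the high-level chat completion API.
# The chat format is read from the GGUF metadata when available.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful reasoning assistant."},
        {"role": "user", "content": "Explain the concept of natural selection in simple terms."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```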
## Model Configuration Changes

Please note that DeepSeek has made slight modifications to the original Qwen-7B configuration and tokenizer to optimize performance. When using these models, make sure you use the settings provided with this model rather than the original Qwen-7B configurations.
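If you tokenize text outside llama.cpp (for example, to count prompt tokens), load the tokenizer from the DeepSeek repository rather than the original Qwen-7B one. A minimal sketch, assuming the `transformers` library is installed:

```python
from transformers import AutoTokenizer

# Load DeepSeek's modified tokenizer, not the original Qwen-7B tokenizer.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")

prompt = "Explain the concept of natural selection in simple terms."
print(len(tokenizer.encode(prompt)))  # number of tokens in the prompt
```
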
## License

This model inherits the license of the original DeepSeek-R1-Distill-Qwen-7B model. Please refer to the original model's license for usage terms and conditions.

## Acknowledgments

We thank the DeepSeek AI team for open-sourcing their distilled models and demonstrating that smaller models can achieve impressive performance through effective distillation techniques. Special thanks also to the Qwen team for providing the base model architecture.