Update README.md

README.md
# litert-community/DeepSeek-R1-Distill-Qwen-1.5B

This model provides a few variants of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) that are ready for deployment on Android using the [LiteRT (fka TFLite) stack](https://ai.google.dev/edge/litert) and [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference).
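The LLM Inference API ships as the MediaPipe `tasks-genai` Maven artifact. As a minimal sketch of wiring it into an app, assuming a Gradle Kotlin DSL build (the version number is an assumption; pin whatever the current release is):

```kotlin
// build.gradle.kts (app module)
dependencies {
    // MediaPipe LLM Inference API. The version below is an assumption —
    // check the latest tasks-genai release before depending on it.
    implementation("com.google.mediapipe:tasks-genai:0.10.14")
}
```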
## Use the models

### Colab

*Disclaimer: The target deployment surface for the LiteRT models is Android/iOS/Web and the stack has been optimized for performance on these targets. Trying out the system in Colab is an easier way to familiarize yourself with the LiteRT stack, with the caveat that the performance (memory and latency) on Colab could be much worse than on a local device.*
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/DeepSeek-R1-Distill-Qwen-1.5B/blob/main/deepseek%20tflite.ipynb)
### Android

To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) from the GitHub repository.
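Beyond the prebuilt demo, here is a minimal Kotlin sketch of invoking one of these model variants through the LLM Inference API. The on-device model path is an assumption (use wherever you pushed the downloaded file), not a file name shipped in this repo:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: load a model variant that was pushed to the device
// and run a single prompt to completion.
fun runDeepSeek(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        // Assumed location — point this at the downloaded model variant.
        .setModelPath("/data/local/tmp/llm/deepseek.task")
        // Matches the 1280 KV cache size used in the benchmarks below.
        .setMaxTokens(1280)
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    val response = llm.generateResponse(prompt)
    llm.close()
    return response
}
```

`generateResponse` blocks until decoding finishes; for streaming UIs there is also `generateResponseAsync`, which delivers partial results through a result listener.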
## Performance

### Android

Note that all benchmark stats are from a Samsung S24 Ultra with a 1280-token KV cache, 65 prefill tokens, and 128 decode tokens.
<table border="1">
  <tr>
    <th></th>
    <th>Backend</th>
    <th>Prefill (tokens/sec)</th>
    <th>Decode (tokens/sec)</th>
    <th>Time-to-first-token (sec)</th>
    <th>Memory (RSS in MB)</th>
    <th>Model size (MB)</th>
  </tr>
  <tr>
    <td>fp32 (baseline)</td>
    <td rowspan="2">CPU</td>
    <td><p style="text-align: right">45</p></td>
    <td><p style="text-align: right">6</p></td>
    <td><p style="text-align: right">1.58</p></td>
    <td><p style="text-align: right">6,144</p></td>
    <td><p style="text-align: right">7,124</p></td>
  </tr>
  <tr>
    <td>dynamic_int8</td>
    <td><p style="text-align: right">271</p></td>
    <td><p style="text-align: right">23</p></td>
    <td><p style="text-align: right">0.54</p></td>
    <td><p style="text-align: right">1,869</p></td>
    <td><p style="text-align: right">1,861</p></td>
  </tr>
</table>