alanzhuly committed (verified)
Commit 0dd2af4 · Parent(s): 0d89662

Update README.md

Files changed (1): README.md (+1, −1)
README.md CHANGED
@@ -23,7 +23,7 @@ DeepSeek-R1 has been making headlines for rivaling OpenAI’s O1 reasoning model
 We’ve solved the trade-off by quantizing the DeepSeek R1 Distilled model to 1/4 its original size—without losing any accuracy. Tests on an **HP Omnibook AIPC** with an **AMD Ryzen™ AI 9 HX 370 processor** showed a decoding speed of **66.40 tokens per second** and a peak RAM usage of just **1228 MB** in NexaQuant version—compared to only **25.28 tokens** per second and **3788 MB RAM** in the unquantized version—while NexaQuant **maintaining full precision model accuracy.**
 
 
-## NexaQunat Use Case Demo
+## NexaQuant Use Case Demo
 
 Here’s a comparison of how a standard Q4_K_M and NexaQuant-4Bit handle a common investment banking brain teaser question. NexaQuant excels in accuracy while shrinking the model file size by 4 times.
 
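For context, the benchmark figures quoted in the changed paragraph imply roughly a 2.6× decoding speedup and about 3× lower peak RAM for the NexaQuant build. A minimal sketch of that arithmetic (variable names are illustrative, not from the repo):

```python
# Figures quoted in the README paragraph above (HP Omnibook AIPC, Ryzen AI 9 HX 370).
nexaquant_tps = 66.40      # decoding speed, NexaQuant build (tokens/s)
baseline_tps = 25.28       # decoding speed, unquantized build (tokens/s)
nexaquant_ram_mb = 1228    # peak RAM, NexaQuant build (MB)
baseline_ram_mb = 3788     # peak RAM, unquantized build (MB)

speedup = nexaquant_tps / baseline_tps           # ~2.63x faster decoding
ram_saving = baseline_ram_mb / nexaquant_ram_mb  # ~3.08x less peak RAM
print(f"decode speedup: {speedup:.2f}x, peak-RAM reduction: {ram_saving:.2f}x")
```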