Update README.md
README.md CHANGED
@@ -98,7 +98,7 @@ We evaluate DMind-1 and DMind-1-mini using the [DMind Benchmark](https://hugging
 
 To complement accuracy metrics, we conducted a **cost-performance analysis** by comparing benchmark scores against publicly available input token prices across 24 leading LLMs. In this evaluation:
 
-- **DMind-1** achieved the highest Web3 score while maintaining one of the lowest token input costs among top-tier models such as Grok 3 and Claude 3.
+- **DMind-1** achieved the highest Web3 score while maintaining one of the lowest token input costs among top-tier models such as Grok 3 and Claude 3.7 Sonnet.
 
 - **DMind-1-mini** ranked second, retaining over 95% of DMind-1’s performance with greater efficiency in latency and compute.
 
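For readers who want to see the shape of the cost-performance comparison the README describes (benchmark score weighed against input-token price), here is a minimal sketch. All model names, scores, and prices below are hypothetical placeholders, not figures published by DMind:

```python
# Hedged sketch: rank models by benchmark points per dollar of input tokens.
# Every name and number here is a placeholder for illustration only.
models = {
    "model-a": {"web3_score": 80.0, "usd_per_1m_input_tokens": 10.0},
    "model-b": {"web3_score": 76.0, "usd_per_1m_input_tokens": 2.0},
}

def score_per_dollar(stats):
    """Benchmark points per USD of one million input tokens."""
    return stats["web3_score"] / stats["usd_per_1m_input_tokens"]

# Sort models from most to least cost-efficient under this metric.
ranked = sorted(models, key=lambda m: score_per_dollar(models[m]), reverse=True)
print(ranked)  # → ['model-b', 'model-a']
```

With these placeholder numbers the cheaper model wins on cost-efficiency even with a lower raw score, which is the kind of trade-off the analysis above is weighing.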