|
--- |
|
license: mit |
|
base_model: perplexity-ai/r1-1776-distill-llama-70b |
|
--- |
|
|
|
# R1 1776 Distill Llama 70B |
|
|
|
Blog link: [https://perplexity.ai/hub/blog/open-sourcing-r1-1776](https://perplexity.ai/hub/blog/open-sourcing-r1-1776 ) |
|
|
|
This is a Llama 70B distilled version of [R1 1776](https://huggingface.co/perplexity-ai/r1-1776). |
|
|
|
R1 1776 is a DeepSeek-R1 reasoning model that has been post-trained by Perplexity AI to remove Chinese Communist Party censorship. |
|
The model provides unbiased, accurate, and factual information while maintaining high reasoning capabilities. |
|
|
|
## Evals |
|
|
|
To ensure our model remains fully “uncensored” and capable of engaging with a broad spectrum of sensitive topics, |
|
we curated a diverse, multilingual evaluation set of over a 1000 of examples that comprehensively cover such subjects. |
|
We then use human annotators as well as carefully designed LLM judges to measure the likelihood a model will evade or |
|
provide overly sanitized responses to the queries. |
|
|
|
We also ensured that the model’s math and reasoning abilities remained intact after the decensoring process. |
|
Evaluations on multiple benchmarks showed that our post-trained model performed on par with the base R1 model, |
|
indicating that the decensoring had no impact on its core reasoning capabilities. |
|
|
|
| Benchmark | R1-Distill-Llama-70B | R1-1776-Distill-Llama-70B | |
|
| --- | --- | --- | |
|
| China Censorship | 80.53 | 0.2 | |
|
| Internal Benchmarks (avg) | 47.64 | 48.4 | |
|
| AIME 2024 | 70 | 70 | |
|
| MATH-500 | 94.5 | 94.8 | |
|
| MMLU | 88.52 * | 88.40 | |
|
| DROP | 84.55 * | 84.83 | |
|
| GPQA | 65.2 | 65.05 | |
|
|
|
\* Evaluated by Perplexity AI since they were not reported in the [paper](https://arxiv.org/abs/2501.12948). |