use any connector for gguf files, e.g., [gguf-connector](https://pypi.org/project/gguf-connector/)
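
purely as an illustration (not part of this repo's instructions), the sketch below loads a phi-4 gguf with `llama-cpp-python`, one common connector; the file name, quant, and generation settings are assumptions, so substitute whichever gguf you actually downloaded.

```python
# illustrative sketch: run a phi-4 gguf through llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="phi-4-Q4_K_M.gguf",  # hypothetical file name; point this at your local gguf
    n_ctx=16384,                     # phi-4 supports a 16K-token context
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a GGUF file is in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```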

### reference
- base model: microsoft/[phi-4](https://huggingface.co/microsoft/phi-4)

### model summary (by microsoft)

| | |
|---|---|
| **Developers** | Microsoft Research |
| **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures |
| **Architecture** | 14B parameters, dense decoder-only Transformer model |
| **Inputs** | Text, best suited for prompts in the chat format |
| **Context length** | 16K tokens |
| **GPUs** | 1920 H100-80G |
| **Training time** | 21 days |
| **Training data** | 9.8T tokens |
| **Outputs** | Generated text in response to input |
| **Dates** | October 2024 – November 2024 |
| **Status** | Static model trained on an offline dataset with cutoff dates of June 2024 and earlier for publicly available data |
| **Release date** | December 12, 2024 |
| **License** | MIT |
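
since the **Inputs** row above notes that `phi-4` works best with chat-format prompts, here is a small illustrative sketch (not part of the upstream model card) of rendering that format with the Hugging Face `transformers` tokenizer; the message contents are placeholders.

```python
# illustrative sketch: render a chat-format prompt for the base model
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize phi-4 in one sentence."},
]

# emit the model's chat markup as a plain string instead of token ids
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```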

## Intended Use

| | |
|---|---|
| **Primary Use Cases** | Our model is designed to accelerate research on language models, for use as a building block for generative AI powered features. It provides uses for general purpose AI systems and applications (primarily in English) which require:<br><br>1. Memory/compute constrained environments.<br>2. Latency bound scenarios.<br>3. Reasoning and logic. |
| **Out-of-Scope Use Cases** | Our model is not specifically designed or evaluated for all downstream purposes, thus:<br><br>1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.<br>2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English.<br>3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. |

## model quality (by microsoft)

to understand the capabilities, we compare `phi-4` with a set of models over OpenAI’s SimpleEval benchmark.

below is a high-level overview of the model quality on representative benchmarks; higher numbers indicate better performance:

| **Category** | **Benchmark** | **phi-4** (14B) | **phi-3** (14B) | **Qwen 2.5** (14B instruct) | **GPT-4o-mini** | **Llama-3.3** (70B instruct) | **Qwen 2.5** (72B instruct) | **GPT-4o** |
|---|---|---|---|---|---|---|---|---|
| Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | **88.1** |
| Science | GPQA | **56.1** | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 |
| Math | MGSM<br>MATH | 80.6<br>**80.4** | 53.5<br>44.6 | 79.6<br>75.6 | 86.5<br>73.0 | 89.1<br>66.3* | 87.3<br>80.0 | **90.4**<br>74.6 |
| Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9* | 80.4 | **90.6** |
| Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | **39.4** |
| Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | **90.2** | 76.7 | 80.9 |

\* these scores are lower than those reported by Meta, perhaps because simple-evals has a strict formatting requirement that Llama models have particular trouble following. We use the simple-evals framework because it is reproducible, but Meta reports 77 for MATH and 88 for HumanEval on Llama-3.3-70B.