We have created a new version of the benchmark with instances that are harder than the previous one. There has been a lot of progress in models over the last year, and as a result the previous version of the benchmark was saturated. The methodology is the same; we have also released the dataset generation script, which scans the top 100 Python projects to generate the instances. You can see it [here](_script_for_gen.py).
The same [eval script](_script_for_eval.py) works as before. You no longer need to log in to Semgrep, as we only use their OSS rules for this version of the benchmark.

The highest score a model can get on this benchmark is 100%; you can see the oracle run logs [here](oracle-0-shot_semgrep_1.85.0_20240820_174931.log).

# New Evaluation

| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o-mini | 52.21 | [link](gpt-4o-mini-0-shot_semgrep_1.85.0_20240820_201236.log) |
| + 3-shot prompt | 53.10 | [link](gpt-4o-mini-3-shot_semgrep_1.85.0_20240820_213814.log) |
| + rag (embedding & reranking) | 58.41 | [link](gpt-4o-mini-3-shot-sim_semgrep_1.85.0_20240821_023541.log) |
| + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | | [link]() |
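The scores in these tables are percentages of benchmark instances whose generated fix passes the static-analysis check. As a sketch of the arithmetic (the instance counts below are illustrative, not taken from this README):

```python
def score(passed: int, total: int) -> float:
    """Benchmark score as a percentage of passing instances,
    rounded to two decimal places (illustrative helper)."""
    return round(100 * passed / total, 2)

# e.g. 59 passing fixes out of a hypothetical 113 instances
print(score(59, 113))  # 52.21
```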
```

After the run, the script will also create a log file which captures the stats for the run and the files that were fixed. You can see an example [here](gpt-4o-mini_semgrep_1.85.0_20240818_215254.log).
Because recent versions of Semgrep no longer detect a few of the samples in the dataset as vulnerable, the maximum score possible on the benchmark is 77.63%. You can see the oracle run log [here](oracle-0-shot_semgrep_1.85.0_20240819_022711.log).
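The pass/fail decision behind these scores can be sketched as follows. This is a minimal illustration, not the actual `_script_for_eval.py`: a file counts as fixed when Semgrep's JSON report (its real output has a top-level `results` list whose entries carry `path` and `check_id` fields) contains no findings for that file. The helper name and the sample report are assumptions:

```python
import json

def is_fixed(semgrep_json: str, target_file: str) -> bool:
    """Hypothetical check: the fix passes when Semgrep's JSON
    report lists no findings for the target file."""
    report = json.loads(semgrep_json)
    findings = [r for r in report.get("results", [])
                if r.get("path") == target_file]
    return len(findings) == 0

# Example: a report with one remaining finding in app.py
report = json.dumps({"results": [
    {"path": "app.py", "check_id": "python.lang.security.audit.eval-detected"}
]})
print(is_fixed(report, "app.py"))    # False: a finding remains
print(is_fixed(report, "other.py"))  # True: no findings for this file
```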

## Evaluation
We did some detailed evaluations recently (19/08/2024):

| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o-mini | 67.11 | [link](gpt-4o-mini_semgrep_1.85.0_20240818_215254.log) |
| gpt-4o-mini + 3-shot prompt | 71.05 | [link](gpt-4o-mini-3-shot_semgrep_1.85.0_20240818_234709.log) |
| gpt-4o-mini + rag (embedding & reranking) | 72.37 | [link](gpt-4o-mini-1-shot-sim_semgrep_1.85.0_20240819_013810.log) |
| gpt-4o-mini + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 77.63 | [link](ft_gpt-4o-mini-2024-07-18_patched_patched_9uUpKXcm_semgrep_1.85.0_20240818_220158.log) |

| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o | 68.42 | [link](gpt-4o-0-shot_semgrep_1.85.0_20240819_015355.log) |
| gpt-4o + 3-shot prompt | 77.63 | [link](gpt-4o-3-shot_semgrep_1.85.0_20240819_020525.log) |
| gpt-4o + rag (embedding & reranking) | 77.63 | [link](gpt-4o-1-shot-sim_semgrep_1.85.0_20240819_023323.log) |
| gpt-4o + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 77.63 | [link](ft_gpt-4o-2024-05-13_patched_patched-4o_9xp8XOM9-0-shot_semgrep_1.85.0_20240819_075205.log) |

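The "+ 3-shot prompt" rows add worked fix examples to the prompt. A minimal sketch of how such a chat prompt might be assembled — the message layout, system instruction, and example pairs here are assumptions for illustration, not the benchmark's actual prompt:

```python
def build_messages(shots, vulnerable_code):
    """Assemble a chat prompt with k worked (vulnerable -> fixed)
    example pairs, followed by the code to fix (hypothetical layout)."""
    messages = [{"role": "system",
                 "content": "Fix the security vulnerability in the given code."}]
    for vuln, fixed in shots:
        messages.append({"role": "user", "content": vuln})
        messages.append({"role": "assistant", "content": fixed})
    messages.append({"role": "user", "content": vulnerable_code})
    return messages

# Three illustrative example pairs, then the target snippet
shots = [("eval(user_input)", "ast.literal_eval(user_input)")] * 3
msgs = build_messages(shots, "os.system(cmd)")
print(len(msgs))  # 8: one system message, three pairs, one final user turn
```

The rag rows instead retrieve the most similar known fixes at query time (embedding search plus reranking) and splice those into the prompt in place of fixed examples.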
# Leaderboard

The top models on the leaderboard are all fine-tuned using the same dataset that we released, called [synth vuln fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes).
You can read about our experience with fine-tuning them on our [blog](https://www.patched.codes/blog/a-comparative-study-of-fine-tuning-gpt-4o-mini-gemini-flash-1-5-and-llama-3-1-8b).
You can also explore the leaderboard with this [interactive visualization](https://claude.site/artifacts/5656c16d-9751-407c-9631-a3526c259354).

| Model | StaticAnalysisEval (%) | Time (mins) | Price (USD) |
|:-------------------------:|:----------------------:|:-------------:|:-----------:|