Update README.md

README.md CHANGED
@@ -33,6 +33,16 @@ pip install -r requirements.txt
python _script_for_eval.py
```

We need to use the logged-in version of Semgrep to get access to more rules for vulnerability detection, so make sure you log in before running the eval script.

```
% semgrep login
API token already exists in /Users/user/.semgrep/settings.yml. To login with a different token logout use `semgrep logout`
```

After the run, the script will also create a log file that captures the stats for the run and the files that were fixed.
You can see an example [here](https://huggingface.co/datasets/patched-codes/static-analysis-eval/blob/main/gpt-4o-mini_semgrep_1.85.0_20240818_215254.log).

# Leaderboard
The top models on the leaderboard are all fine-tuned using the same dataset that we released, called [synth vuln fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes).
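If you want to take a quick look at the dataset, a minimal sketch using the Hugging Face `datasets` library is shown below (the `train` split name is an assumption, not something stated here):

```python
# Minimal sketch: load the synth-vuln-fixes dataset from the Hugging Face Hub.
# Assumes the `datasets` package is installed and that the dataset exposes a
# "train" split; inspect the printed example to see the actual field names.
from datasets import load_dataset

ds = load_dataset("patched-codes/synth-vuln-fixes", split="train")

print(ds)     # number of rows and column names
print(ds[0])  # one example, to see the actual fields
```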
@@ -75,3 +85,5 @@ You can also explore the leaderboard with this [interactive visualization](https
The price is calculated by assuming 1000 input and output tokens per call, as all examples in the dataset are < 512 tokens (OpenAI cl100k_base tokenizer).
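As a rough illustration of that estimate (every figure below is a placeholder rather than an actual price or dataset size):

```python
# Back-of-the-envelope cost estimate under the stated assumption of 1000 input
# and 1000 output tokens per call. Every number below is a placeholder.
num_examples = 100          # placeholder: examples in the eval run (one call each)
input_price_per_1k = 0.15   # placeholder: USD per 1K input tokens
output_price_per_1k = 0.60  # placeholder: USD per 1K output tokens

cost_per_call = input_price_per_1k + output_price_per_1k  # 1K in + 1K out per call
print(f"Estimated cost: ${num_examples * cost_per_call:.2f}")  # -> Estimated cost: $75.00
```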
Some models timed out during the run or had intermittent API errors; in such cases we try each example up to 3 times. This is why some runs are reported to be longer than 1 hr (60:00+ mins).
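A sketch of what such retry logic can look like (the `fix_example` name is hypothetical, not the script's actual API):

```python
import time

def call_with_retries(fn, *args, max_attempts=3, delay_seconds=5):
    """Retry a flaky call (timeouts, intermittent API errors) up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args)
        except Exception:              # in practice, catch the client's timeout/API error types
            if attempt == max_attempts:
                raise                  # give up after the final attempt
            time.sleep(delay_seconds)  # brief pause before retrying

# Hypothetical usage; `fix_example` is not the script's actual function name:
# result = call_with_retries(fix_example, example)
```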
If you want to add your model to the leaderboard, you can send in a PR to this repo with the log file from the evaluation run.