Update README.md
# Leaderboard

| Model                     | StaticAnalysisEval (%) | Time (mins) | Price (USD) |
|:-------------------------:|:----------------------:|:-----------:|:-----------:|
| gpt-4o                    | 69.74                  | 23:0        | 1.53        |
| gemini-1.5-flash-latest   | 68.42                  | 18:2        | 0.07        |
| Llama-3-70B-instruct      | 65.78                  | 35:2        |             |
| Llama-3-8B-instruct       | 65.78                  | 31:34       |             |
| gemini-1.5-pro-latest     | 64.47                  | 34:40       |             |
| gpt-4-1106-preview        | 64.47                  | 27:56       | 3.04        |
| gpt-4                     | 63.16                  | 26:31       | 6.84        |
| gpt-4-0125-preview        | 53.94                  | 34:40       |             |
| patched-coder-7b          | 51.31                  | 45:20       |             |
| patched-coder-34b         | 46.05                  | 33:58       | 0.87        |
| Mistral-Large             | 40.80                  | 60:00+      |             |
| Gemini-pro                | 39.47                  | 16:09       | 0.23        |
| Mistral-Medium            | 39.47                  | 60:00+      | 0.80        |
| Mixtral-Small             | 30.26                  | 30:09       |             |
| gpt-3.5-turbo-0125        | 28.95                  | 21:50       |             |
| claude-3-opus-20240229    | 25.00                  | 60:00+      |             |
| Gemma-7b-it               | 19.73                  | 36:40       |             |
| gpt-3.5-turbo-1106        | 17.11                  | 13:00       | 0.23        |
| Codellama-70b-Instruct    | 10.53                  | 30:32       |             |
| CodeLlama-34b-Instruct    | 7.89                   | 23:16       |             |
The price is calculated by assuming 1000 input and 1000 output tokens per call, as all examples in the dataset are under 512 tokens (OpenAI cl100k_base tokenizer).
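The price column can be reproduced from that flat 1000-token assumption. A minimal sketch (the per-1K-token rates and example count passed in below are illustrative placeholders, not values from this README):

```python
# Sketch: estimate a model's benchmark run cost under the README's
# assumption of 1000 input and 1000 output tokens per call.
# The rates and example count used here are hypothetical examples.

def estimate_price_usd(num_examples: int,
                       input_rate_per_1k: float,
                       output_rate_per_1k: float) -> float:
    """One call per example, each billed as 1K input + 1K output tokens."""
    per_call = input_rate_per_1k + output_rate_per_1k
    return num_examples * per_call

# Example with made-up rates of $0.005/1K input and $0.015/1K output tokens:
print(round(estimate_price_usd(100, 0.005, 0.015), 2))  # -> 2.0
```

Since every example fits within the 512-token budget, this flat per-call estimate is an upper bound on the actual token spend.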