---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: source
    dtype: string
  - name: file_name
    dtype: string
  - name: cwe
    dtype: string
  splits:
  - name: train
    num_bytes: 87854
    num_examples: 76
  download_size: 53832
  dataset_size: 87854
---
# Dataset Card for "static-analysis-eval"

A dataset of 76 Python programs taken from real open source Python projects (top 1000 on GitHub), where each program is a file containing exactly one vulnerability as detected by a particular static analyzer (Semgrep).
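A minimal sketch of loading the dataset with the Hugging Face `datasets` library. The repo id `patched-codes/static-analysis-eval` is an assumption; the `source`, `file_name`, and `cwe` fields come from the schema above.

```python
from datasets import load_dataset

# Load the single train split (repo id assumed; adjust if the dataset lives elsewhere)
ds = load_dataset("patched-codes/static-analysis-eval", split="train")

example = ds[0]
print(example["file_name"])      # original file name from the source project
print(example["cwe"])            # CWE label reported by Semgrep
print(example["source"][:200])   # first 200 characters of the vulnerable Python file
```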
## Leaderboard
| Model | StaticAnalysisEval (%) | Time (mins) | Price (USD) |
|---|---|---|---|
| gpt-4o | 69.74 | 23:05 | 1.53 |
| gemini-1.5-flash-latest | 68.42 | 18:23 | 0.07 |
| Llama-3-70B-instruct | 65.78 | 35:26 | |
| Llama-3-8B-instruct | 65.78 | 31:34 | |
| gemini-1.5-pro-latest | 64.47 | 34:40 | |
| gpt-4-1106-preview | 64.47 | 27:56 | 3.04 |
| gpt-4 | 63.16 | 26:31 | 6.84 |
| gpt-4-0125-preview | 53.94 | 34:40 | |
| patched-coder-7b | 51.31 | 45:20 | |
| patched-coder-34b | 46.05 | 33:58 | 0.87 |
| Mistral-Large | 40.80 | 60:00+ | |
| Gemini-pro | 39.47 | 16:09 | 0.23 |
| Mistral-Medium | 39.47 | 60:00+ | 0.80 |
| Mixtral-Small | 30.26 | 30:09 | |
| gpt-3.5-turbo-0125 | 28.95 | 21:50 | |
| claude-3-opus-20240229 | 25.00 | 60:00+ | |
| Gemma-7b-it | 19.73 | 36:40 | |
| gpt-3.5-turbo-1106 | 17.11 | 13:00 | 0.23 |
| Codellama-70b-Instruct | 10.53 | 30:32 | |
| CodeLlama-34b-Instruct | 7.89 | 23:16 | |
The price is calculated by assuming 1000 input and 1000 output tokens per call, since all examples in the dataset are under 512 tokens (OpenAI cl100k_base tokenizer).
Some models timed out during the run or had intermittent API errors. We retry each example up to 3 times in such cases, which is why some runs are reported as longer than 1 hour (60:00+ mins).
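As a back-of-the-envelope illustration of the pricing assumption above, the sketch below multiplies the per-call token budget by per-million-token rates. The rates shown are placeholders, not values taken from this card; substitute the current prices for the model you are evaluating.

```python
# 1000 input + 1000 output tokens per call, one call per example (76 examples)
NUM_EXAMPLES = 76
INPUT_TOKENS_PER_CALL = 1000
OUTPUT_TOKENS_PER_CALL = 1000

# Hypothetical prices in USD per 1M tokens (placeholders, not from this card)
PRICE_PER_M_INPUT = 5.00
PRICE_PER_M_OUTPUT = 15.00

cost = NUM_EXAMPLES * (
    INPUT_TOKENS_PER_CALL * PRICE_PER_M_INPUT
    + OUTPUT_TOKENS_PER_CALL * PRICE_PER_M_OUTPUT
) / 1_000_000
print(f"Estimated cost for a full run: ${cost:.2f}")
```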