Add link to paper
#3 by nielsr (HF Staff) - opened
Files changed (1)
1. README.md (+3 -3)
README.md CHANGED
```diff
@@ -21,7 +21,8 @@ dataset_info:
   dataset_size: 1015823
 ---
 
-# SOTA fine-tuning by OpenAI
+A dataset of 76 Python programs taken from real Python open source projects (top 100 on GitHub),
+where each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep), used in the paper [Patched MOA: optimizing inference for diverse software development tasks](https://huggingface.co/papers/2407.18521).
 
 OpenAI used the [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) dataset to fine-tune
 a new version of gpt-4o, which is now the SOTA on this benchmark. More details and code are available in their [repo](https://github.com/openai/build-hours/tree/main/5-4o_fine_tuning).
@@ -72,8 +73,7 @@ technique like MOA can improve performance without fine-tuning.
 
 # Static Analysis Eval Benchmark
 
-A dataset of 76 Python programs taken from real Python open source projects (top 100 on GitHub),
-where each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep).
+
 
 You can run the `_script_for_eval.py` script to check the results.
```
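For reviewers who want to sanity-check the dataset this README describes, here is a minimal sketch of loading it with the `datasets` library. The dataset id `patched-codes/static-analysis-eval` and the `train` split are assumptions inferred from this card (they are not stated in the PR itself), and the column names are not asserted, so the sketch inspects the schema rather than hard-coding any fields.

```python
# Minimal sketch, assuming the dataset id is "patched-codes/static-analysis-eval"
# and that it exposes a "train" split (both are assumptions, not confirmed by this PR).
from datasets import load_dataset

ds = load_dataset("patched-codes/static-analysis-eval", split="train")

# The README says the benchmark contains 76 Python programs, each with exactly
# one Semgrep-detected vulnerability.
print(len(ds))            # expected: 76
print(ds.column_names)    # inspect the schema before relying on field names
```

Scoring against the benchmark itself is done with the repository's `_script_for_eval.py`, as noted in the README.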