|
--- |
|
datasets: |
|
- davanstrien/aart-ai-safety-dataset |
|
- obalcells/advbench |
|
- databricks/databricks-dolly-15k |
|
--- |
|
# Malicious & Jailbreaking Prompt Classifer |
|
|
|
# Datasets Used |
|
|
|
[MaliciousInstruct](https://github.com/Princeton-SysML/Jailbreak_LLM/blob/main/data/MaliciousInstruct.txt) |
|
|
|
[AART](https://github.com/google-research-datasets/aart-ai-safety-dataset/blob/main/aart-v1-20231117.csv) |
|
|
|
[StrongREJECT](https://github.com/alexandrasouly/strongreject/blob/main/strongreject_dataset/strongreject_dataset.csv) |
|
|
|
[DAN](https://github.com/verazuo/jailbreak_llms/tree/main/data) |
|
|
|
[AdvBench](https://github.com/llm-attacks/llm-attacks/tree/main/data/advbench) |
|
|
|
[Databricks-Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k) |