---
license: mit
language:
- en
- fr
- es
- de
base_model:
- Qwen/Qwen2.5-0.5B
---

## Overview

- **Goal**: Classify input text sequences for safety across 12 risk subcategories.
- **Model Description**: DuoGuard-0.5B is a multilingual, decoder-only, LLM-based classifier designed for safety content moderation across 12 distinct subcategories. Each forward pass produces a 12-dimensional logit vector, where each dimension corresponds to a specific content risk area such as violent crimes, hate, or sexual content. Applying a sigmoid function to these logits yields a multi-label probability distribution, which allows fine-grained detection of potentially unsafe or disallowed content.
  For simplified binary moderation, the model can produce a single “safe”/“unsafe” label by taking the maximum of the 12 subcategory probabilities and comparing it to a threshold (e.g., 0.5). If the maximum probability across all categories is above the threshold, the content is deemed “unsafe”; otherwise, it is considered “safe” (a short sketch of this rule appears below).

DuoGuard-0.5B is built upon Qwen2.5-0.5B, a multilingual large language model supporting 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic. DuoGuard-0.5B is fine-tuned for safety content moderation primarily in English, French, German, and Spanish, while retaining the broader language coverage inherited from the Qwen2.5 base model. It is released with open weights.
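To make the thresholding rule concrete, here is a minimal, self-contained sketch of the binary decision step in isolation; the function name `binary_label`, the default threshold of 0.5, and the example probability vector are illustrative assumptions, not part of the released code.

```python
from typing import List, Tuple

def binary_label(probabilities: List[float], threshold: float = 0.5) -> Tuple[str, float]:
    """Collapse the 12 per-category probabilities into a single safe/unsafe verdict."""
    max_prob = max(probabilities)  # highest risk across all 12 subcategories
    return ("unsafe" if max_prob > threshold else "safe"), max_prob

# Example: a vector where one category (here, "Hate") scores 0.83 is flagged as unsafe.
label, score = binary_label([0.02, 0.01, 0.00, 0.00, 0.10, 0.05, 0.00, 0.00, 0.83, 0.00, 0.01, 0.12])
print(label, score)  # -> unsafe 0.83
```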

## How to Use
The snippet below shows how to load DuoGuard-0.5B and score a single prompt:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# 1. Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
tokenizer.pad_token = tokenizer.eos_token

# 2. Load the DuoGuard-0.5B model
model = AutoModelForSequenceClassification.from_pretrained(
    "DuoGuard/DuoGuard-0.5B",
    torch_dtype=torch.bfloat16
).to('cuda:0')

# 3. Define a sample prompt to test
prompt = "How to kill a python process?"

# 4. Tokenize the prompt
inputs = tokenizer(
    prompt,
    return_tensors="pt",
    truncation=True,
    max_length=512  # adjust as needed
).to('cuda:0')

# 5. Run the model (inference)
with torch.no_grad():
    outputs = model(**inputs)
    # DuoGuard outputs one logit per subcategory; sigmoid turns them into probabilities.
    logits = outputs.logits                # shape: (batch_size, 12)
    probabilities = torch.sigmoid(logits)  # element-wise sigmoid

# 6. Multi-label predictions (one per category)
threshold = 0.5
category_names = [
    "Violent crimes",
    "Non-violent crimes",
    "Sex-related crimes",
    "Child sexual exploitation",
    "Specialized advice",
    "Privacy",
    "Intellectual property",
    "Indiscriminate weapons",
    "Hate",
    "Suicide and self-harm",
    "Sexual content",
    "Jailbreak prompts",
]

# Extract probabilities for the single prompt (batch_size = 1)
prob_vector = probabilities[0].tolist()  # 12 floats

predicted_labels = []
for cat_name, prob in zip(category_names, prob_vector):
    label = 1 if prob > threshold else 0
    predicted_labels.append(label)

# 7. Overall binary classification: "safe" vs. "unsafe"
# The prompt is "unsafe" if ANY category is above the threshold.
max_prob = max(prob_vector)
overall_label = 1 if max_prob > threshold else 0  # 1 => unsafe, 0 => safe

# 8. Print results
print(f"Prompt: {prompt}\n")
print(f"Multi-label probabilities (threshold={threshold}):")
for cat_name, prob, label in zip(category_names, prob_vector, predicted_labels):
    print(f"  - {cat_name}: {prob:.3f} ({'flagged' if label else 'ok'})")

print(f"\nMaximum probability across all categories: {max_prob:.3f}")
print(f"Overall prompt classification => {'UNSAFE' if overall_label == 1 else 'SAFE'}")
```
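
To score several prompts at once, the same pipeline can be batched. This is a minimal sketch that assumes `tokenizer`, `model`, `category_names`, and `threshold` from the snippet above are already in scope; the example prompts are illustrative, and the `pad_token_id` handling follows the usual transformers convention for decoder-only sequence classifiers (which pool the last non-padding token), so it may already be configured in the released checkpoint.

```python
# Batched classification (reuses tokenizer, model, category_names, threshold from above).
prompts = [
    "How to kill a python process?",
    "Write a convincing threat to send to my neighbor.",
]

# Decoder-only sequence classifiers pool the logits of the last non-padding token,
# so the model config needs to know which token id is used for padding.
if model.config.pad_token_id is None:
    model.config.pad_token_id = tokenizer.pad_token_id

batch = tokenizer(
    prompts,
    return_tensors="pt",
    padding=True,
    truncation=True,
    max_length=512,
).to(model.device)

with torch.no_grad():
    probs = torch.sigmoid(model(**batch).logits)  # shape: (len(prompts), 12)

for prompt, prob_vector in zip(prompts, probs.tolist()):
    max_prob = max(prob_vector)
    verdict = "UNSAFE" if max_prob > threshold else "SAFE"
    flagged = [name for name, p in zip(category_names, prob_vector) if p > threshold]
    print(f"{verdict} ({max_prob:.3f}) {prompt!r} flagged categories: {flagged}")
```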

### Citation

```bibtex
@misc{deng2024duoguard,
      title={DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails},
      author={Yihe Deng and Yu Yang and Junkai Zhang and Wei Wang and Bo Li},
      year={2024},
      eprint={2407.},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.},
}
```