cedricbonhomme committed
Commit cc72314 · verified · Parent(s): 25373ed

Update README.md

Files changed (1): README.md (+28 −6)
README.md CHANGED
@@ -16,22 +16,44 @@ should probably proofread and complete it, then remove this comment. -->
 
  # vulnerability-severity-classification-roberta-base
 
- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
+ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [CIRCL/vulnerability-scores](https://huggingface.co/datasets/CIRCL/vulnerability-scores) dataset.
+
  It achieves the following results on the evaluation set:
  - Loss: 0.5068
  - Accuracy: 0.8288
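+
+ The fine-tuning data referenced above can be inspected directly with the `datasets` library. This is a minimal sketch rather than part of the training code; the split and column names are whatever the dataset repository defines, so check the printed structure first:
+
+ ```python
+ from datasets import load_dataset
+
+ # Download the vulnerability descriptions and their severity labels.
+ ds = load_dataset("CIRCL/vulnerability-scores")
+ print(ds)  # shows the available splits, their columns, and row counts
+ ```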
 
  ## Model description
 
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ It is a classification model, intended to assist in classifying vulnerabilities by severity based on their descriptions.
+
+ ## How to get started with the model
+
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+ import torch
+
+ # Severity classes; these should match the order of the model's id2label config.
+ labels = ["low", "medium", "high", "critical"]
 
+ model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForSequenceClassification.from_pretrained(model_name)
+ model.eval()
 
+ test_description = "SAP NetWeaver Visual Composer Metadata Uploader is not protected with a proper authorization, allowing unauthenticated agent to upload potentially malicious executable binaries \
+ that could severely harm the host system. This could significantly affect the confidentiality, integrity, and availability of the targeted system."
+ inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)
 
+ # Run inference
+ with torch.no_grad():
+     outputs = model(**inputs)
+     predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
 
+ # Print results
+ print("Predictions:", predictions)
+ predicted_class = torch.argmax(predictions, dim=-1).item()
+ print("Predicted severity:", labels[predicted_class])
+ ```
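+
+ Alternatively, the same prediction can be obtained with the high-level `pipeline` API. This is a sketch rather than part of the original card: it assumes the checkpoint's `config.json` ships an `id2label` mapping (as Transformers sequence-classification models normally do), and the input below is a made-up vulnerability description:
+
+ ```python
+ from transformers import pipeline
+
+ # The pipeline wraps tokenization, the forward pass, and label mapping in one call.
+ classifier = pipeline("text-classification", model="CIRCL/vulnerability-severity-classification-roberta-base")
+
+ result = classifier("A crafted request allows unauthenticated remote attackers to execute arbitrary code on the target host.")
+ print(result)  # e.g. [{'label': '...', 'score': ...}]
+ ```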
 
  ## Training procedure