cedricbonhomme committed
Commit 424edeb · verified · 1 Parent(s): 819ffc8

Update README.md

Files changed (1)
  1. README.md +26 -23
README.md CHANGED
@@ -32,29 +32,32 @@ It is a classification model and is aimed to assist in classifying vulnerabilities
 ## How to get started with the model
 
 ```python
-from transformers import AutoModelForSequenceClassification, AutoTokenizer
-import torch
-
-labels = ["low", "medium", "high", "critical"]
-
-model_name = "CIRCL/vulnerability-scores"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForSequenceClassification.from_pretrained(model_name)
-model.eval()
-
-test_description = "langchain_experimental 0.0.14 allows an attacker to bypass the CVE-2023-36258 fix and execute arbitrary code via the PALChain in the python exec method."
-inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)
-
-# Run inference
-with torch.no_grad():
-    outputs = model(**inputs)
-    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
-
-
-# Print results
-print("Predictions:", predictions)
-predicted_class = torch.argmax(predictions, dim=-1).item()
-print("Predicted severity:", labels[predicted_class])
+>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer
+... import torch
+...
+... labels = ["low", "medium", "high", "critical"]
+...
+... model_name = "CIRCL/vulnerability-severity-classification-distilbert-base-uncased"
+... tokenizer = AutoTokenizer.from_pretrained(model_name)
+... model = AutoModelForSequenceClassification.from_pretrained(model_name)
+... model.eval()
+...
+... test_description = "SAP NetWeaver Visual Composer Metadata Uploader is not protected with a proper authorization, allowing unauthenticated agent to upload potentially malicious executable binaries \
+that could severely harm the host system. This could significantly affect the confidentiality, integrity, and availability of the targeted system."
+... inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)
+...
+... # Run inference
+... with torch.no_grad():
+...     outputs = model(**inputs)
+...     predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
+...
+... # Print results
+... print("Predictions:", predictions)
+... predicted_class = torch.argmax(predictions, dim=-1).item()
+... print("Predicted severity:", labels[predicted_class])
+...
+Predictions: tensor([[4.9335e-04, 3.4782e-02, 2.6257e-01, 7.0215e-01]])
+Predicted severity: critical
 ```
 
 ## Training procedure
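For reuse outside an interactive session, here is a minimal sketch (not part of this commit) that wraps the same steps from the updated snippet into a helper function. The model id and label order are taken from the README above; the function name `predict_severity` is illustrative.

```python
# Minimal sketch, assuming the model id and label order from the README above.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

LABELS = ["low", "medium", "high", "critical"]
MODEL_NAME = "CIRCL/vulnerability-severity-classification-distilbert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def predict_severity(description: str):
    """Return the predicted severity label and the per-class probabilities."""
    inputs = tokenizer(description, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.nn.functional.softmax(logits, dim=-1)
    return LABELS[int(torch.argmax(probs, dim=-1))], probs
```

The probabilities follow the order of `LABELS`, so in the README's example output the largest value (≈0.70) sits at the last index and maps to "critical".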