---
library_name: transformers
license: mit
base_model: roberta-base
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: vulnerability-severity-classification-roberta-base
    results: []
datasets:
  - CIRCL/vulnerability-scores
---

# vulnerability-severity-classification-roberta-base

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the dataset [CIRCL/vulnerability-scores](https://huggingface.co/datasets/CIRCL/vulnerability-scores).

You can read this page for more information.

It achieves the following results on the evaluation set:

- Loss: 0.4963
- Accuracy: 0.8298

## Model description

It is a text classification model intended to assist in classifying vulnerabilities by severity based on their descriptions.
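
In practice, the model maps a free-text vulnerability description to one of four severity labels. For a quick check, the generic `transformers` text-classification pipeline can be used (a minimal sketch; it assumes the model config maps class indices to the severity names, otherwise labels appear as `LABEL_0`…`LABEL_3`):

```python
from transformers import pipeline

# Build a text-classification pipeline around the fine-tuned model.
classifier = pipeline(
    "text-classification",
    model="CIRCL/vulnerability-severity-classification-roberta-base",
)

# Hypothetical input; the output format below is illustrative, not a recorded result.
description = "A crafted request allows an unauthenticated attacker to execute arbitrary code."
print(classifier(description))  # e.g. [{'label': 'critical', 'score': ...}]
```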

## How to get started with the model

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

labels = ["low", "medium", "high", "critical"]

model_name = "CIRCL/vulnerability-severity-classification-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

test_description = "SAP NetWeaver Visual Composer Metadata Uploader is not protected with a proper authorization, allowing unauthenticated agent to upload potentially malicious executable binaries \
that could severely harm the host system. This could significantly affect the confidentiality, integrity, and availability of the targeted system."
inputs = tokenizer(test_description, return_tensors="pt", truncation=True, padding=True)

# Run inference
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)

# Print results
print("Predictions:", predictions)
predicted_class = torch.argmax(predictions, dim=-1).item()
print("Predicted severity:", labels[predicted_class])
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5

A sketch of how these settings map onto `transformers` `TrainingArguments` is shown below.
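
For reproduction, the hyperparameters above translate roughly as follows (a sketch under assumptions: the output directory, per-epoch evaluation, and the commented `Trainer` wiring are illustrative, not taken from the original training script):

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="vulnerability-severity-classification-roberta-base",  # assumed name
    learning_rate=3e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",        # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",      # assumed: the results table reports one eval per epoch
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...)  # datasets not shown here
# trainer.train()
```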

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.5857        | 1.0   | 27531  | 0.6245          | 0.7464   |
| 0.6164        | 2.0   | 55062  | 0.5566          | 0.7777   |
| 0.467         | 3.0   | 82593  | 0.5368          | 0.8013   |
| 0.4208        | 4.0   | 110124 | 0.4849          | 0.8209   |
| 0.2856        | 5.0   | 137655 | 0.4963          | 0.8298   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1