---
base_model: hfl/chinese-macbert-base
datasets:
  - CIRCL/Vulnerability-CNVD
library_name: transformers
license: apache-2.0
metrics:
  - accuracy
tags:
  - generated_from_trainer
  - text-classification
  - classification
  - nlp
  - chinese
  - vulnerability
pipeline_tag: text-classification
language: zh
model-index:
  - name: vulnerability-severity-classification-chinese-macbert-base
    results: []
---

# VLAI: A MacBERT-Based Model for Automated Vulnerability Severity Classification (Chinese Text)

This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on the [CIRCL/Vulnerability-CNVD](https://huggingface.co/datasets/CIRCL/Vulnerability-CNVD) dataset. It predicts the severity level of a vulnerability from its Chinese-language description.

For more information, visit the Vulnerability-Lookup project page or the ML-Gateway GitHub repository, which demonstrates how to serve this model from a FastAPI server.
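As a rough illustration of that kind of deployment, a minimal FastAPI endpoint wrapping the classifier could look like the sketch below. This is not ML-Gateway's actual code, and the `/severity` route name is made up for the example:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Load the classifier once at startup so requests reuse the same model.
classifier = pipeline(
    "text-classification",
    model="CIRCL/vulnerability-severity-classification-chinese-macbert-base",
)

class Description(BaseModel):
    text: str

@app.post("/severity")
def classify(payload: Description):
    # Returns e.g. [{"label": "高", "score": 0.98}]
    return classifier(payload.text)
```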

It achieves the following results on the evaluation set:

- Loss: 0.6172
- Accuracy: 0.7817

## How to use

You can use this model directly with the Hugging Face `transformers` library for text classification:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CIRCL/vulnerability-severity-classification-chinese-macbert-base"
)

# Example usage for a Chinese vulnerability description
# (it describes a buffer overflow in the TOTOLINK A3600R router, reachable via
# the File parameter of the UploadCustomModule function in /cgi-bin/cstecgi.cgi,
# allowing arbitrary code execution or denial of service)
description_chinese = "TOTOLINK A3600R是中国吉翁电子(TOTOLINK)公司的一款6天线1200M无线路由器。TOTOLINK A3600R存在缓冲区溢出漏洞,该漏洞源于/cgi-bin/cstecgi.cgi文件的UploadCustomModule函数中的File参数未能正确验证输入数据的长度大小,攻击者可利用该漏洞在系统上执行任意代码或者导致拒绝服务。"
result_chinese = classifier(description_chinese)
print(result_chinese)

# Expected output example ('高' means "high" severity):
# [{'label': '高', 'score': 0.9802}]
```
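The pipeline returns only the top label. If you want the full probability distribution over all severity labels, a minimal sketch using the lower-level `transformers` API (reusing `description_chinese` from above) looks like this:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "CIRCL/vulnerability-severity-classification-chinese-macbert-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer(
    description_chinese,
    return_tensors="pt",
    truncation=True,
    max_length=512,
)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# Map each probability to its label name (stored in the model config).
for label_id, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[label_id]}: {p:.4f}")
```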

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
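For reference, the sketch below shows what a `Trainer` setup matching these hyperparameters could look like. It is an illustrative reconstruction, not the original training script: the split names (`train`/`test`), the column names (`description`, `severity`), and the `max_length` are assumptions to check against the dataset card. The optimizer settings above are the `transformers` AdamW defaults, so they need no explicit configuration.

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("CIRCL/Vulnerability-CNVD")
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-macbert-base")

# Assumed schema: a "description" text column and a "severity" label column.
labels = sorted(set(dataset["train"]["severity"]))
label2id = {label: i for i, label in enumerate(labels)}

def preprocess(batch):
    enc = tokenizer(batch["description"], truncation=True, max_length=512)
    enc["labels"] = [label2id[s] for s in batch["severity"]]
    return enc

tokenized = dataset.map(preprocess, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "hfl/chinese-macbert-base",
    num_labels=len(labels),
    id2label={i: label for label, i in label2id.items()},
    label2id=label2id,
)

def compute_metrics(eval_pred):
    logits, gold = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == gold).mean()}

args = TrainingArguments(
    output_dir="vulnerability-severity-classification-chinese-macbert-base",
    learning_rate=3e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",  # evaluate once per epoch, as in the table below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],  # assumed split name
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```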

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6329        | 1.0   | 3412  | 0.5832          | 0.7546   |
| 0.5215        | 2.0   | 6824  | 0.5531          | 0.7750   |
| 0.4827        | 3.0   | 10236 | 0.5521          | 0.7768   |
| 0.3448        | 4.0   | 13648 | 0.5822          | 0.7814   |
| 0.3865        | 5.0   | 17060 | 0.6172          | 0.7817   |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1