---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
tags:
  - code
pretty_name: vgen_cpp
size_categories:
  - 1K<n<10K
---

# Dataset Card for Opencores

For continual pre-training, we used the publicly available VGen dataset. VGen aggregates Verilog repositories from GitHub, systematically filters out duplicates and excessively large files, and retains only those files containing `module` and `endmodule` statements.
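
The VGen-style filtering described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; the size cap and the exact deduplication strategy are assumptions.

```python
MAX_BYTES = 100_000  # hypothetical size cap; the real threshold is not stated


def filter_verilog_files(files):
    """Keep unique, reasonably sized files that contain a Verilog module.

    files: iterable of (path, source_text) pairs.
    Returns the list of retained source texts.
    """
    seen = set()
    kept = []
    for path, text in files:
        if text in seen:  # drop exact duplicates
            continue
        seen.add(text)
        if len(text.encode("utf-8")) > MAX_BYTES:  # drop excessively large files
            continue
        if "module" not in text or "endmodule" not in text:  # must define a module
            continue
        kept.append(text)
    return kept
```

In practice, deduplication for pre-training corpora is often done on hashes or near-duplicate signatures rather than exact text matches, but the exact-match version above conveys the idea.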

We also incorporated the CodeSearchNet dataset, which contains approximately 40 MB of function code and accompanying documentation.

## Dataset Features

- **text** (string): The pre-training corpus: natural language and Verilog/C code.

## Loading the dataset

```python
from datasets import load_dataset

ds = load_dataset("WANGNingroci/vgen_cpp", split="train")
print(ds[0])
```

## Citation

```bibtex
@article{wang2024large,
  title={Large Language Model for Verilog Generation with Golden Code Feedback},
  author={Wang, Ning and Yao, Bingkun and Zhou, Jie and Wang, Xi and Jiang, Zhe and Guan, Nan},
  journal={arXiv preprint arXiv:2407.18271},
  year={2024}
}
```