```python
prompt = f"Generate unit tests in Dart for the following class:\n{input_code}"

# Generate tests
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

### Training Data

The fine-tuning dataset consists of **16,252 Dart code-test pairs** extracted from open-source GitHub repositories using Google BigQuery. The data was quality-filtered and deduplicated to keep the pairs relevant and consistent.
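
The extraction pipeline itself is not documented in this card. The following is a minimal sketch of one way such pairs could be collected, assuming the public `bigquery-public-data.github_repos` dataset and the Dart convention that tests for `foo.dart` live in `foo_test.dart`; the actual query, filters, and deduplication are not part of this card.

```python
# Illustrative sketch only: pull Dart files from the public GitHub dataset on
# BigQuery and pair sources with their *_test.dart counterparts. The real
# extraction and filtering pipeline is an assumption, not the card's.
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT f.repo_name, f.path, c.content
FROM `bigquery-public-data.github_repos.files` AS f
JOIN `bigquery-public-data.github_repos.contents` AS c ON f.id = c.id
WHERE f.path LIKE '%.dart'
"""

sources, tests = {}, {}
for row in client.query(query).result():
    name = row.path.rsplit("/", 1)[-1]
    if name.endswith("_test.dart"):
        # Key tests by the source file they presumably exercise.
        tests[(row.repo_name, name.replace("_test.dart", ".dart"))] = row.content
    else:
        sources[(row.repo_name, name)] = row.content

# Keep only matched code-test pairs; deduplication would follow here.
pairs = [(sources[k], tests[k]) for k in tests if k in sources]
```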
103
+ ### Training Procedure
104
+
105
+ - **Fine-tuning Approach:** Supervised Fine-Tuning (SFT) with QLoRA for memory efficiency.
106
+ - **Hardware:** Training was conducted on a single NVIDIA A100 GPU.
107
+ - **Optimization:** Flash Attention 2 was utilized for enhanced performance.
108
+ - **Duration:** The training process ran for up to 32 hours.
109
+
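
A minimal sketch of this QLoRA setup with the Hugging Face `transformers`/`peft`/`bitsandbytes` stack is shown below. The base checkpoint name, LoRA rank, and target modules are illustrative assumptions, not the actual training script.

```python
# Sketch of a QLoRA SFT setup: 4-bit base weights plus trainable LoRA
# adapters, with Flash Attention 2 enabled. Values marked "assumed" are not
# stated in this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # QLoRA: quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # placeholder; base checkpoint assumed
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                 # assumed adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the adapters are trainable
```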

### Training Hyperparameters

- **Mixed Precision:** FP16
- **Optimizer:** AdamW
- **Learning Rate:** 5e-5
- **Epochs:** 3
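
Mapped onto `transformers.TrainingArguments`, these settings look roughly as follows; the output path and batch size are assumptions, since the card does not state them.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./llama-dart-testgen",   # hypothetical output path
    fp16=True,                           # Mixed Precision: FP16
    optim="adamw_torch",                 # Optimizer: AdamW
    learning_rate=5e-5,
    num_train_epochs=3,
    per_device_train_batch_size=4,       # assumption: not stated above
)
```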

### Environmental Impact

- **Hardware Type:** NVIDIA A100 GPU
- **Hours Used:** 32 hours
- **Carbon Emitted:** 13.099 kgCO2eq (see the estimation sketch after this list)
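
Estimates like this typically follow the ML CO2 Impact approach: GPU power draw times runtime times grid carbon intensity. The inputs behind the 13.099 kgCO2eq figure are not stated in this card, so the values below are placeholder assumptions for illustration only.

```python
# Illustrative carbon estimate: energy (kWh) x grid intensity (kgCO2eq/kWh).
# Both inputs are placeholder assumptions, not the card's actual values.
gpu_power_kw = 0.4            # assumed A100 SXM board power (~400 W)
hours = 32                    # from the card
grid_intensity = 0.43         # assumed kgCO2eq per kWh

energy_kwh = gpu_power_kw * hours          # 12.8 kWh
emissions = energy_kwh * grid_intensity    # ~5.5 kgCO2eq with these inputs
```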

---

## Evaluation

### Testing Data, Factors & Metrics

- **Testing Data:** A subset of **42 Dart files** from the training dataset, evaluated in a zero-shot setting.
- **Factors:** Syntax correctness, functional correctness.
- **Metrics:** pass@1, syntax error rate, functional correctness rate (a sketch of the syntax check follows this list).
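
The evaluation harness is not part of this card. A minimal sketch of how the syntax check could work, assuming the `dart` CLI is installed, is:

```python
# Sketch: check whether a generated test parses by running the Dart analyzer
# on it. `dart analyze` exits non-zero when it reports errors.
import pathlib
import subprocess
import tempfile

def is_syntactically_valid(dart_code: str) -> bool:
    with tempfile.TemporaryDirectory() as tmp:
        path = pathlib.Path(tmp) / "generated_test.dart"
        path.write_text(dart_code)
        result = subprocess.run(
            ["dart", "analyze", str(path)], capture_output=True
        )
        return result.returncode == 0
```

With a single sample per input, pass@1 reduces to the fraction of generated tests that pass.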

### Results

- **Syntax Correctness:** +76% improvement compared to the base model.
- **Functional Correctness:** +16.67% improvement compared to the base model.

---

## Citation

If you use this model in your research, please cite:

**BibTeX:**

```bibtex
@inproceedings{hoffmann2024testgen,
  title={Test Case Generation with Fine-Tuned LLaMA Models},
  author={Hoffmann, Jacob and Frister, Demian},
  booktitle={Proceedings of the 29th ACM/SIGSOFT International Workshop on Automated Software Testing (AST)},
  year={2024},
  doi={10.1145/3644032.3644454}
}
```

## Model Card Contact

- **Jacob Hoffmann**: [[email protected]](mailto:[email protected])
- **Demian Frister**: [[email protected]](mailto:[email protected])