---
license: mit
task_categories:
- question-answering
- text2text-generation
- text-generation
language:
- en
tags:
- Java
- Code
- UnitTest
- testing
pretty_name: Java Method to Test ChatML
size_categories:
- 100K<n<1M
---
# Java Method to Test ChatML

This dataset is based on the [methods2test](https://github.com/microsoft/methods2test) dataset from Microsoft. Each example follows the ChatML message format: a list of turns of the form `[{'role': ..., 'content': ...}, {...}]`.
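To make the format concrete, here is a hypothetical record in ChatML form: a user turn asking for a test, followed by the assistant's reply. The Java content is illustrative, not an actual row from the dataset:

```python
# A hypothetical ChatML-style record (illustrative content, not a real row):
# the user asks for a test, the assistant answers with one.
example = [
    {
        "role": "user",
        "content": "Write a JUnit test for this method:\n"
                   "public int add(int a, int b) { return a + b; }",
    },
    {
        "role": "assistant",
        "content": "@Test\npublic void testAdd() {\n"
                   "    assertEquals(5, calculator.add(2, 3));\n"
                   "}",
    },
]

# Every turn carries exactly the two ChatML keys: 'role' and 'content'.
assert all(set(turn) == {"role", "content"} for turn in example)
```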

The original methods2test corpus pairs Java methods, at several levels of context granularity, with their corresponding test cases. The different focal method segmentations are illustrated here:

![Focal methods segmentation](./assets/fc.png)

To simulate a conversation between a Java developer and an AI assistant, I introduce two key parameters:

- The prompt template
- The input code choice

## Prompt Template

The prompts simulate a user asking an AI assistant to generate a test case. For this, I used six different prompt templates, replicating real-life scenarios in which a user might ask an assistant to generate test cases:

1. Direct request (the default)
2. Educational question
3. Code review
4. Bug fixing
5. Debugging (similar to bug fixing, but framed differently)
6. TDD Approach (Test-Driven Development)

You can find these prompts in the [prompts](./assets/prompts.py) file.
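Prompt construction can be sketched as follows. The template texts below are placeholders I made up to illustrate the six scenarios; the actual wording lives in `assets/prompts.py`:

```python
import random

# Hypothetical stand-ins for the six templates in assets/prompts.py;
# the real wording is in that file.
PROMPT_TEMPLATES = {
    "direct": "Write a unit test for the following Java method:\n{code}",
    "educational": "I'm learning Java testing. How would you test this method?\n{code}",
    "code_review": "In code review we noticed this method lacks tests. Please add one:\n{code}",
    "bug_fixing": "This method may contain a bug. Write a test that exercises it:\n{code}",
    "debugging": "I'm debugging this method. Can you write a test to reproduce the issue?\n{code}",
    "tdd": "Following TDD, write a failing test for this method first:\n{code}",
}

def build_prompt(code: str, rng: random.Random) -> str:
    """Pick one of the six scenarios at random and fill in the focal code."""
    template = PROMPT_TEMPLATES[rng.choice(list(PROMPT_TEMPLATES))]
    return template.format(code=code)
```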

## Input Code

The input code is selected from one of the five granularities shown in the "Focal methods segmentation" figure. To simulate real-life scenarios, I randomly select among them using the following distribution:

| Split | fm  | +fc | +c  | +m  | +f  |
|-------|---- |---- |---- |---- |---- |
| Train | 10% | 50% | 0%  | 10% | 30% |
| Test  | 0%  | 50% | 0%  | 0%  | 50% |

The training split is intentionally more diverse, so a fine-tuned LLM sees a wider range of input formats. If you prefer, you can merge the splits and shuffle the data.
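The per-split sampling above amounts to a weighted random draw. A minimal sketch using the table's probabilities (the helper name is mine, not from the dataset's build code):

```python
import random

# Sampling weights over the input-code granularities, mirroring the table above.
SPLIT_WEIGHTS = {
    "train": {"fm": 0.10, "+fc": 0.50, "+c": 0.00, "+m": 0.10, "+f": 0.30},
    "test":  {"fm": 0.00, "+fc": 0.50, "+c": 0.00, "+m": 0.00, "+f": 0.50},
}

def pick_granularity(split: str, rng: random.Random) -> str:
    """Draw one granularity according to the split's distribution."""
    weights = SPLIT_WEIGHTS[split]
    return rng.choices(list(weights), weights=list(weights.values()), k=1)[0]
```

Granularities with a 0% weight (e.g., `+c` in both splits) are never drawn.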

## Summary

This dataset is a great starting point for building a powerful Java coding assistant, or for complementing a larger knowledge base used to train more advanced foundation models.

If you have any suggestions or spot any bugs, feel free to open a discussion! 😉 If you find it useful, don’t hesitate to give it a like. Peace! ✌️

## Usage and Citation

To use it with [🤗datasets](https://huggingface.co/docs/datasets/):

```python
from datasets import load_dataset

ds = load_dataset("random-long-int/Java_method2test_chatml", split="train")
```

To cite it:

```bibtex
@misc{letoquart2025java,
  author = {Pierre-Louis Létoquart},
  title = {Dataset: Java Method to Test ChatML},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/datasets/random-long-int/Java_method2test_chatml}},
}
```