# Model fine-tuning

This directory contains scripts for:
- **Model fine-tuning**: Generate datasets and fine-tune an LLM on GitHub PRs and commits.
- **RAG indexing**: Generate vector indexes (embeddings) based on the repository.
- **GitHub crawler**: Retrieve PR metadata, comments, reviews, and commit diffs from a public GitHub repository.

## Directory structure

- `model/`: Python scripts for dataset generation, fine-tuning, and RAG vector indexing.
- `github/`: Node.js CLI tool for crawling GitHub repositories.
- `../data/`: Output directory for crawled data, generated datasets, and vector indexes.

---

## Dataset generation & RAG indexing

### Overview

- **generate_dataset.py**: Processes raw PR metadata and commit diffs (from `../data/`) into training examples, written as train/test splits under `../data/processed/` (see Expected Output below).
- **rag.py**: Generates vector indexes (embeddings) from processed data for retrieval-augmented generation.
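
For orientation, the indexing step amounts to embedding text chunks with the `embed_model` from `settings.json` and writing a FAISS index to `../data/faiss/index`. The sketch below is a minimal illustration under those assumptions; the placeholder chunks and the flat inner-product index are illustrative, and the actual chunking and index type are whatever `rag.py` implements:

```python
import faiss
from sentence_transformers import SentenceTransformer

# Embed text chunks with the model configured as "embed_model".
model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = ["placeholder PR description", "placeholder commit diff"]  # illustrative only
embeddings = model.encode(chunks, normalize_embeddings=True)

# With normalized embeddings, inner product equals cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)
faiss.write_index(index, "../data/faiss/index")
```

At query time, the same model embeds the query and `index.search` returns the nearest chunks.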

### Quick Start

1. **Install dependencies**:
    ```bash
    pip3 install -r requirements.txt
    ```
2. **Prepare a `settings.json` file**:
    ```json
    {
      "system_instruction": "...",
      "base_model": "microsoft/Phi-4-reasoning",
      "max_context_size": 32768,
      "embed_model": "all-MiniLM-L6-v2",
      "repository": "https://github.com/dotnet/runtime"
    }
    ```
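
    Here `base_model` is the Hugging Face model ID to fine-tune, `embed_model` the sentence-transformers model used for RAG indexing, `max_context_size` the maximum context length in tokens, and `repository` the GitHub repository the crawled data comes from.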
3. **Data preparation & indexing**:
    - Run the dataset generator and RAG indexer:
      ```bash
      python3 generate_dataset.py
      python3 rag.py
      ```
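
Note: `generate_dataset.py` expects crawled data under `../data/` (produced by the GitHub crawler below), and `rag.py` builds its index from the processed output, so run the crawler first and the two scripts in that order.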

## GitHub Crawler

A CLI tool to retrieve PR metadata, comments, reviews, and commit diffs from a public GitHub repo.

### Quick Start

1. **Install dependencies**:
    ```bash
    npm install
    ```
2. **Set your GitHub token**:
    ```bash
    export GITHUB_TOKEN=YOUR_TOKEN
    ```
3. **Run the crawler**:
    ```bash
    node main.js
    ```
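
The token authenticates the crawler's GitHub API requests; authenticated requests get a much higher rate limit (5,000 per hour versus 60 unauthenticated), which matters when crawling a large repository such as dotnet/runtime.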

## Expected Output

After running the crawler, dataset generator, and RAG indexer, you'll find:
```
../data/
β”œβ”€β”€ raw_sample/
β”‚   β”œβ”€β”€ prs/
β”‚   β”‚   β”œβ”€β”€ pr-1.json
β”‚   β”‚   β”œβ”€β”€ pr-2.json
β”‚   β”‚   └── ...
β”‚   └── diffs/
β”‚       β”œβ”€β”€ <sha1>.diff
β”‚       β”œβ”€β”€ <sha2>.diff
β”‚       └── ...
β”œβ”€β”€ processed/
β”‚   β”œβ”€β”€ train.parquet
β”‚   └── test.parquet
└── faiss/
    └── index
```