# Model fine-tuning
This directory contains scripts for:
- **Model fine-tuning**: Generate datasets and fine-tune an LLM on GitHub PRs and commits.
- **RAG indexing**: Generate vector indexes (embeddings) based on the repository.
- **GitHub crawler**: Retrieve PR metadata, comments, reviews, and commit diffs from a public GitHub repository.
## Directory structure
- `model/`: Python scripts for dataset generation, fine-tuning, and RAG vector indexing.
- `github/`: Node.js CLI tool for crawling GitHub repositories.
- `../data/`: Output directory for crawled data, generated datasets, and vector indexes.
---
## Dataset generation & RAG indexing
### Overview
- **generate_dataset.py**: Processes raw PR metadata and commit diffs (from `../data/`) to generate training examples in JSONL format.
- **rag.py**: Generates vector indexes (embeddings) from processed data for retrieval-augmented generation.
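For orientation, a training example in the generated JSONL pairs PR context with its associated change. The sketch below is illustrative only; the actual field names and content are defined by `generate_dataset.py`:

```python
import json

# Hypothetical example shape -- the real schema is whatever
# generate_dataset.py emits, not this sketch.
example = {
    "system": "<system_instruction from settings.json>",
    "prompt": "<PR title, description, and review comments>",
    "completion": "<the associated commit diff>",
}

# JSONL: one JSON object per line.
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```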
### Quick Start
1. **Install dependencies**:

   ```bash
   pip3 install -r requirements.txt
   ```

2. **Prepare a `settings.json` file**:

   ```json
   {
     "system_instruction": "...",
     "base_model": "microsoft/Phi-4-reasoning",
     "max_context_size": 32768,
     "embed_model": "all-MiniLM-L6-v2",
     "repository": "https://github.com/dotnet/runtime"
   }
   ```

3. **Data preparation & indexing**: run the dataset generator, then the RAG indexer:

   ```bash
   python3 generate_dataset.py
   python3 rag.py
   ```
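Internally, the indexing step presumably embeds text chunks with the configured `embed_model` and stores them in a FAISS index. Here is a minimal sketch of that flow, assuming `sentence-transformers` and `faiss` are installed; the actual logic lives in `rag.py` and may differ:

```python
import json

import faiss
from sentence_transformers import SentenceTransformer

settings = json.load(open("settings.json"))

# Hypothetical corpus: in practice rag.py derives these chunks
# from the processed PR/commit data under ../data/.
chunks = ["<PR text chunk>", "<commit diff chunk>"]

# Encode with the model named in settings.json (all-MiniLM-L6-v2).
model = SentenceTransformer(settings["embed_model"])
embeddings = model.encode(chunks, normalize_embeddings=True)

# Inner product on normalized vectors == cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)
faiss.write_index(index, "../data/faiss/index")
```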
## GitHub Crawler
A CLI tool to retrieve PR metadata, comments, reviews, and commit diffs from a public GitHub repo.
### Quick Start
1. **Install dependencies**:
```bash
npm install
```
2. **Set your GitHub token**:
```bash
export GITHUB_TOKEN=YOUR_TOKEN
```
3. **Run the crawler**:
```bash
node main.js
```
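Under the hood, the crawler talks to GitHub's REST API. For reference, the equivalent calls look roughly like this in Python (the Node.js tool's actual implementation may differ):

```python
import os

import requests

OWNER, REPO = "dotnet", "runtime"  # parsed from "repository" in settings.json
headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# List pull requests (first page shown; follow the Link header for more).
prs = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
    headers=headers,
    params={"state": "all", "per_page": 100},
).json()

# Fetch a commit as a raw diff by overriding the Accept header.
sha = prs[0]["merge_commit_sha"]  # may be None for unmerged PRs
diff = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{sha}",
    headers={**headers, "Accept": "application/vnd.github.v3.diff"},
).text
```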
## Expected Output
After running the crawler and the dataset/indexing scripts, you'll find:
```
../data/raw_sample/
β”œβ”€β”€ prs/
β”‚   β”œβ”€β”€ pr-1.json
β”‚   β”œβ”€β”€ pr-2.json
β”‚   └── ...
└── diffs/
    β”œβ”€β”€ <sha1>.diff
    β”œβ”€β”€ <sha2>.diff
    └── ...
../data/processed/
β”œβ”€β”€ train.parquet
└── test.parquet
../data/faiss/
└── index
```
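To sanity-check the processed splits, you can load the Parquet files with pandas (assuming `pandas` and a Parquet engine such as `pyarrow` are installed):

```python
import pandas as pd

train = pd.read_parquet("../data/processed/train.parquet")
test = pd.read_parquet("../data/processed/test.parquet")
print(f"{len(train)} train / {len(test)} test examples")
print(train.head())
```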