# Model fine-tuning
This directory contains scripts for:
- Model fine-tuning: Generate datasets and fine-tune an LLM on GitHub PRs and commits.
- RAG indexing: Generate vector indexes (embeddings) based on the repository.
- GitHub crawler: Retrieve PR metadata, comments, reviews, and commit diffs from a public GitHub repository.
## Directory structure

- `model/`: Python scripts for dataset generation, fine-tuning, and RAG vector indexing.
- `github/`: Node.js CLI tool for crawling GitHub repositories.
- `../data/`: Output directory for crawled data, generated datasets, and vector indexes.
## Dataset generation & RAG indexing

### Overview

- `generate_dataset.py`: Processes raw PR metadata and commit diffs (from `../data/`) to generate training examples in JSONL format.
- `rag.py`: Generates vector indexes (embeddings) from processed data for retrieval-augmented generation.
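The exact JSONL schema produced by `generate_dataset.py` is not documented here. Purely as an illustration (the `pr_to_example` helper, the prompt layout, and the `prompt`/`completion` field names are assumptions, not the script's actual output), a PR record and its diff could be flattened into one training example per line like this:

```python
import json

# Hypothetical sketch only: the field names and prompt layout below are
# assumptions, not the actual output format of generate_dataset.py.
def pr_to_example(pr, diff, system_instruction):
    prompt = (
        f"{system_instruction}\n\n"
        f"PR title: {pr['title']}\n"
        f"Diff:\n{diff}"
    )
    return {"prompt": prompt, "completion": pr.get("body", "")}

example = pr_to_example(
    {"title": "Fix null check", "body": "Adds a guard clause."},
    "- if (x) {\n+ if (x != null) {",
    "You are reviewing pull requests.",
)
print(json.dumps(example))  # one JSONL line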
### Quick Start

- Install dependencies:

  ```shell
  pip3 install -r requirements.txt
  ```

- Prepare a `settings.json` file:

  ```json
  {
    "system_instruction": "...",
    "base_model": "microsoft/Phi-4-reasoning",
    "max_context_size": 32768,
    "embed_model": "all-MiniLM-L6-v2",
    "repository": "https://github.com/dotnet/runtime"
  }
  ```

- Data preparation & indexing: run the dataset generator and RAG indexer:

  ```shell
  python3 generate_dataset.py
  python3 rag.py
  ```
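Both scripts presumably read `settings.json` at startup. A minimal sketch of loading and validating it with the standard library (the `load_settings` helper is illustrative, not part of the actual scripts; the required keys match the example file above):

```python
import json

# Illustrative helper, not part of generate_dataset.py or rag.py.
# Required keys mirror the example settings.json shown above.
REQUIRED_KEYS = {
    "system_instruction", "base_model", "max_context_size",
    "embed_model", "repository",
}

def load_settings(path="settings.json"):
    with open(path, "r", encoding="utf-8") as f:
        settings = json.load(f)
    missing = REQUIRED_KEYS - settings.keys()
    if missing:  # fail fast on an incomplete config
        raise KeyError(f"settings.json is missing keys: {sorted(missing)}")
    return settings
```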
## GitHub Crawler
A CLI tool to retrieve PR metadata, comments, reviews, and commit diffs from a public GitHub repo.
### Quick Start

- Install dependencies:

  ```shell
  npm install
  ```

- Set your GitHub token:

  ```shell
  export GITHUB_TOKEN=YOUR_TOKEN
  ```

- Run the crawler:

  ```shell
  node main.js
  ```
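The crawler's implementation lives in `main.js` and is not shown here. Purely as an illustration of the kind of request it makes, this Python sketch builds a call to GitHub's "list pull requests" REST endpoint (the `build_pr_request` helper and its parameters are assumptions, not the crawler's actual code):

```python
import os
import urllib.request

# Hypothetical sketch; main.js is the real implementation. This only
# constructs the request for GitHub's "list pull requests" endpoint.
def build_pr_request(repo, per_page=100):
    url = (f"https://api.github.com/repos/{repo}/pulls"
           f"?state=all&per_page={per_page}")
    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")
    if token:  # authenticated requests get a much higher rate limit
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, headers=headers)

req = build_pr_request("dotnet/runtime")
print(req.full_url)
```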
### Expected Output
After running, you'll find:
```
../data/raw_sample/
├── prs/
│   ├── pr-1.json
│   ├── pr-2.json
│   └── ...
└── diffs/
    ├── <sha1>.diff
    ├── <sha2>.diff
    └── ...
../data/processed/
├── train.parquet
└── test.parquet
../data/faiss/
└── index
```
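The index stored at `../data/faiss/index` supports nearest-neighbour search over the embedding vectors. As a dependency-free illustration of the underlying idea (random unit vectors stand in for real MiniLM embeddings, and plain NumPy dot products stand in for FAISS):

```python
import numpy as np

# Conceptual sketch: cosine-similarity retrieval over unit vectors.
# 384 dimensions matches all-MiniLM-L6-v2; the vectors here are random.
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)

query = docs[42] + 0.01 * rng.normal(size=384)  # slightly noisy copy of doc 42
query /= np.linalg.norm(query)

scores = docs @ query               # cosine similarity against every doc
top = np.argsort(scores)[::-1][:5]  # indexes of the 5 nearest documents
print(top[0])                       # doc 42 ranks first
```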