---
pretty_name: .NET Runtime
tags:
- raw-json
- parquet
- faiss-index
- text
- large-scale
- offline-processing
- github
- code
- datasets
license: mit
language:
- en
size_categories:
- 100K<n<1M
task_categories:
- text-classification
- text-retrieval
source_datasets: []
annotations_creators:
- machine-generated
- human-verified
---
# .NET Runtime Fine-Tuning Data and Index
This directory contains data for fine-tuning models and building retrieval-augmented generation (RAG) pipelines for the `dotnet/runtime` repository.
## Overview
- `data/`: Contains all datasets and indexes.
  - `raw/sample/`: Sample PRs and diffs collected from GitHub.
  - `raw/raw_data.tar`: Archive of collected PRs and diffs from GitHub.
  - `samples/`: JSON files with processed samples suitable for dataset generation.
  - `processed/`: Parquet files for fine-tuning (e.g., `train.parquet`, `test.parquet`).
  - `faiss/`: Vector indexes for RAG workflows.
- `scripts/`: Python and Node.js scripts for crawling, processing, and indexing.
## Data Structure
```
data/
├── raw/
│   ├── sample/
│   │   ├── prs/
│   │   └── diffs/
│   └── raw_data.tar
├── processed/
│   ├── train.parquet
│   └── test.parquet
└── faiss/
    ├── index.faiss
    └── index.pkl
```
## Generated dataset
Each PR is modeled as a timeline of events. The input for a sample is the PR metadata (title, description, labels) plus commit n-1 and all events between commits n-1 and n; the completion is commit n. Samples can be filtered by time, label, author, and similar criteria, as sketched below.
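As a rough illustration of such filtering, the snippet below selects samples by label and author with the `datasets` API. The column names `labels` and `author`, and the label value shown, are assumptions for illustration and may differ from the actual Parquet schema.

```python
from datasets import load_dataset

# Load the processed training split from Parquet.
train = load_dataset("parquet", data_files="data/processed/train.parquet", split="train")

# Hypothetical filter: keep PRs carrying a specific area label and not authored by bots.
# The column names "labels" and "author" are assumptions; inspect train.column_names
# to see the real schema.
filtered = train.filter(
    lambda row: "area-System.Threading" in row.get("labels", [])
    and not str(row.get("author", "")).endswith("[bot]")
)
print(f"{len(filtered)} of {len(train)} samples match the filter")
```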
## Scripts
See `scripts/README.md` for details on running the crawler, dataset generation, and RAG indexing.
## PyTorch Dataset Example
```python
from datasets import load_dataset

# Load the Parquet train/test splits.
# Each file is loaded as its own dataset, so both use split="train".
train = load_dataset("parquet", data_files="data/processed/train.parquet", split="train")
test = load_dataset("parquet", data_files="data/processed/test.parquet", split="train")
```
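Since the heading mentions PyTorch, here is a minimal sketch of feeding the loaded split into a `torch.utils.data.DataLoader`. The call to `with_format("torch")` converts numeric columns to tensors; string columns pass through as Python strings, and any column names you rely on downstream are assumptions about the Parquet schema.

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader

# Load the training split and expose numeric columns as PyTorch tensors.
train = load_dataset("parquet", data_files="data/processed/train.parquet", split="train")
train = train.with_format("torch")

# Batch the samples; string columns are returned as lists of strings per batch.
loader = DataLoader(train, batch_size=8, shuffle=True)
for batch in loader:
    # `batch` is a dict keyed by column name; inspect it to see the schema.
    print({k: type(v) for k, v in batch.items()})
    break
```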
## RAG Vector Search Example
```python
import faiss
import numpy as np

# Load the FAISS index built by the indexing scripts.
index = faiss.read_index("data/faiss/index.faiss")

# Example query embedding (replace with your embedding); FAISS expects float32 vectors.
query_embedding = ...

# Search for the 5 nearest neighbours.
D, I = index.search(np.asarray(query_embedding, dtype="float32").reshape(1, -1), 5)
print("Top 5 similar PR indices:", I[0])
```