---
pretty_name: ".NET Runtime"
tags:
  - raw-json
  - parquet
  - faiss-index
  - text
  - large-scale
  - offline-processing
  - github
  - code
  - datasets
license: mit
language:
  - en
size_categories:
  - 100K<n<1M
task_categories:
  - text-classification
  - text-retrieval
source_datasets: []
annotations_creators:
  - machine-generated
  - human-verified
---

# .NET Runtime Fine-Tuning Data and Index

This directory contains the data used to fine-tune models and to build retrieval-augmented generation (RAG) pipelines for the dotnet/runtime repository.

## Overview

- **data/**: Contains all datasets and indexes.
    - **raw/sample/**: Sample PRs and diffs collected from GitHub.
    - **raw_data.tar**: Archive of collected PRs and diffs from GitHub (see the extraction sketch after this list).
    - **samples/**: JSON files with processed samples suitable for dataset generation.
    - **processed/**: Parquet files for fine-tuning (e.g., `train.parquet`, `test.parquet`).
    - **faiss/**: Vector indexes for RAG workflows.
- **scripts/**: Python and Node.js scripts for crawling, processing, and indexing.
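
If you want the full raw archive rather than the sample directory, it can be unpacked with Python's standard library. This is a minimal sketch; it only assumes the archive sits at the path shown in the layout below.

```python
import tarfile

# Unpack the raw PR/diff archive next to the sample data.
with tarfile.open("data/raw/raw_data.tar") as archive:
    archive.extractall("data/raw/")
```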

## Data Structure

```
data/
β”œβ”€β”€ raw/
β”‚   β”œβ”€β”€ sample/
β”‚   β”‚   β”œβ”€β”€ prs/
β”‚   β”‚   └── diffs/
β”‚   └── raw_data.tar
β”œβ”€β”€ processed/
β”‚   β”œβ”€β”€ train.parquet
β”‚   └── test.parquet
└── faiss/
    β”œβ”€β”€ index.faiss
    └── index.pkl
```

## Generated Dataset

Each PR is treated as a timeline of events. The input of a sample is the PR metadata (title, description, labels) plus commit n-1 together with all events between commits n-1 and n; the completion is commit n. Samples can be filtered by time, label, author, and other criteria.
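
As a sketch of what such filtering could look like with the `datasets` library. The column names `labels` and `created_at` and the label value are assumptions for illustration, not guaranteed fields of the actual schema.

```python
from datasets import load_dataset

train = load_dataset("parquet", data_files="data/processed/train.parquet", split="train")

# Keep only samples whose PR carries a given label and was created after a
# cutoff date (column names and label value are assumptions).
filtered = train.filter(
    lambda row: "area-System.Net" in row["labels"]
    and row["created_at"] >= "2023-01-01"
)
print(len(filtered))
```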

## Scripts

See [scripts/README.md](scripts/README.md) for details on running the crawler, dataset generation, and RAG indexing.

## PyTorch Dataset Example

```python
from datasets import load_dataset

# Load Parquet train/test splits.
# Note: load_dataset assigns the default split name "train" to any Parquet
# file loaded directly, hence split="train" for both files.
train = load_dataset("parquet", data_files="data/processed/train.parquet", split="train")
test = load_dataset("parquet", data_files="data/processed/test.parquet", split="train")
```
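
To iterate over the splits from PyTorch, the `datasets` objects loaded above can be wrapped directly in a `DataLoader`. A minimal sketch; the batch size and column handling are illustrative, not prescribed by this dataset.

```python
from torch.utils.data import DataLoader

# Ask the datasets library to return torch tensors where possible;
# string columns are passed through unchanged.
train_torch = train.with_format("torch")
loader = DataLoader(train_torch, batch_size=8, shuffle=True)

for batch in loader:
    # Each batch is a dict keyed by column name.
    print(batch.keys())
    break
```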

## RAG Vector Search Example

```python
import faiss
import numpy as np

# Load FAISS index
index = faiss.read_index("data/faiss/index.faiss")

# Example query embedding -- replace with a real embedding. A random vector
# with the index dimension is used here only as a runnable placeholder.
query_embedding = np.random.rand(index.d).astype("float32")

# Search
D, I = index.search(query_embedding.reshape(1, -1), k=5)
print("Top 5 similar PR indices:", I[0])
```
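
The `index.pkl` file next to `index.faiss` presumably holds the mapping from FAISS row ids back to the source documents. Assuming it is a plain pickle (something the indexing scripts would need to confirm), it can be inspected like this:

```python
import pickle

# Assumption: index.pkl is a plain pickled object with the id-to-document
# mapping for the FAISS index; check its type before relying on its shape.
with open("data/faiss/index.pkl", "rb") as f:
    docstore = pickle.load(f)

print(type(docstore))
```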