---
dataset_info:
  features:
  - name: repo_name
    dtype: string
  - name: repo_commit
    dtype: string
  - name: repo_content
    dtype: string
  - name: repo_readme
    dtype: string
  splits:
  - name: train
    num_bytes: 29227644
    num_examples: 158
  - name: test
    num_bytes: 8765331
    num_examples: 40
  download_size: 12307532
  dataset_size: 37992975
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- summarization
tags:
- code
size_categories:
- n<1K
---

# Generate README Eval

generate-readme-eval is a dataset (train split) and benchmark (test split) for evaluating how effectively LLMs
summarize an entire GitHub repo in the form of a README.md file. The dataset is curated from the top 400 real Python
repositories on GitHub with at least 1000 stars and 100 forks. The script used to generate the dataset can be found
[here](_script_for_gen.py). We restrict ourselves to GitHub repositories that are less than 100k tokens in size, so
that an entire repo fits into the context of an LLM in a single call. The `train` split of the dataset can be used to
fine-tune your own model; the results reported here are for the `test` split.
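
As a quick sketch, the splits can be loaded with the Hugging Face `datasets` library (the dataset ID below is an
assumption; substitute the actual path of this dataset on the Hub):

```
from datasets import load_dataset

# Dataset ID is an assumption; replace it with this dataset's actual path on the Hub.
ds = load_dataset("generate-readme-eval")

train_split = ds["train"]  # 158 repos, intended for fine-tuning
test_split = ds["test"]    # 40 repos, used for the reported benchmark numbers

example = test_split[0]
print(example["repo_name"], example["repo_commit"])
print(example["repo_readme"][:300])  # the reference README used for scoring
```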

To evaluate an LLM on the benchmark, use the evaluation script given [here](_script_for_eval.py). During evaluation we
prompt the LLM to generate a structured README.md file using the entire contents of the repository (`repo_content`).
We then evaluate the LLM's response by comparing it with the actual README file of that repository across several
different metrics.
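
A minimal sketch of that loop is shown below; the prompt wording and the `llm_call` and `score_fn` callables are
illustrative placeholders, not the exact code in `_script_for_eval.py`:

```
# Illustrative sketch only; the real prompt and metrics live in _script_for_eval.py.
PROMPT_TEMPLATE = (
    "You are given the full contents of a GitHub repository.\n"
    "Write a well-structured README.md for it.\n\n"
    "Repository contents:\n{repo_content}\n"
)

def generate_readme(llm_call, example):
    """Ask the LLM to draft a README from the whole repo."""
    prompt = PROMPT_TEMPLATE.format(repo_content=example["repo_content"])
    return llm_call(prompt)

def evaluate_example(llm_call, score_fn, example):
    """Score the generated README against the repository's real README."""
    generated = generate_readme(llm_call, example)
    return score_fn(generated, example["repo_readme"])
```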

In addition to traditional NLP metrics like BLEU, ROUGE scores, and cosine similarity, we also compute custom metrics
that capture structural similarity, code consistency, readability, and information retrieval (from code to README).
The final score is generated by taking a weighted average of the metrics. The weights used for the final score are
shown below.

```
weights = {
    'bleu': 0.1,
    'rouge-1': 0.033,
    'rouge-2': 0.033,
    'rouge-l': 0.034,
    'cosine_similarity': 0.1,
    'structural_similarity': 0.1,
    'information_retrieval': 0.2,
    'code_consistency': 0.2,
    'readability': 0.2
}
```
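
Assuming each individual metric is already normalized to a common 0 to 1 scale, the final score is then just the
weighted sum, roughly as sketched below:

```
def final_score(metric_scores):
    """Weighted average of the per-metric scores (all assumed to be in [0, 1])."""
    weights = {
        'bleu': 0.1,
        'rouge-1': 0.033,
        'rouge-2': 0.033,
        'rouge-l': 0.034,
        'cosine_similarity': 0.1,
        'structural_similarity': 0.1,
        'information_retrieval': 0.2,
        'code_consistency': 0.2,
        'readability': 0.2,
    }
    # The weights sum to 1.0, so the weighted sum is already an average.
    return sum(w * metric_scores[name] for name, w in weights.items())
```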

At the end of evaluation the script will print the metrics and store the entire run in a log file. If you want to add
your model to the leaderboard, please create a PR with the log file of the run and details about the model.

# Leaderboard
|