---
title: TRAIL
emoji: 🥇
colorFrom: green
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: true
license: mit
short_description: 'TRAIL: Trace Reasoning and Agentic Issue Localization'
sdk_version: 5.19.0
---
# Model Performance Leaderboard
This Hugging Face Space hosts a leaderboard for comparing model performance across the metrics of the TRAIL dataset.
## Features
- **Submit Model Results**: Share your model's performance metrics
- **Interactive Leaderboard**: View and sort all submissions
- **Integrated Backend**: Stores all submissions with timestamp and attribution
- **Customizable Metrics**: Configure which metrics to display and track
## Installation
### Setting Up Your Space
1. Upload all files to your Hugging Face Space
2. Make `start.sh` executable:
```bash
chmod +x start.sh
```
3. Configure your Space to use the `start.sh` script as the entry point
### Troubleshooting Installation Issues
If you encounter JSON parsing errors:
1. Check if `models.json` exists and is a valid JSON file
2. Run `python setup.py` to regenerate configuration files
3. If problems persist, delete the `models.json` file and let the setup script create a new one
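Before regenerating anything, it can help to confirm whether `models.json` actually parses. The sketch below is a minimal validity check, assuming the file sits at the Space root; it is not part of the Space's own code.

```python
import json

# Minimal check: does models.json exist and contain parseable JSON?
# (Path assumed relative to the Space root.)
def is_valid_config(path="models.json"):
    try:
        with open(path) as f:
            json.load(f)
        return True
    except (FileNotFoundError, json.JSONDecodeError):
        return False
```

If this returns `False`, delete the file and run `python setup.py` as described above.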
## How to Use
### Viewing the Leaderboard
Navigate to the "Leaderboard" tab to see all submitted models. You can:
- Sort by any metric (click on the dropdown)
- Change sort order (ascending/descending)
- Refresh the leaderboard for the latest submissions
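The sorting behavior behind the dropdown can be sketched as below; the record fields are illustrative assumptions, not the Space's actual schema.

```python
# Each submission is a record with a value per metric (field names assumed).
submissions = [
    {"model": "a", "accuracy": 0.91},
    {"model": "b", "accuracy": 0.88},
]

# Sort by the chosen metric; descending by default, matching a leaderboard view.
def sort_leaderboard(rows, metric, ascending=False):
    return sorted(rows, key=lambda r: r[metric], reverse=not ascending)
```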
### Submitting a Model
1. Go to the "Submit Model" tab
2. Fill in your model name, your name, and optional description
3. Enter values for the requested metrics
4. Click "Submit Model"
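A submitted model is stored with a timestamp and attribution. The sketch below shows one plausible record shape and an append-only save; the field names and file name are assumptions for illustration, not the Space's actual schema.

```python
import json
import time

# Illustrative submission record (field names are assumptions).
def make_submission(model_name, author, description="", **metrics):
    return {
        "model": model_name,
        "author": author,
        "description": description,
        "metrics": metrics,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

# Append each record as one JSON line, a simple file-based store.
def save_submission(record, path="submissions.jsonl"):
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```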
## Configuration
You can customize this leaderboard by modifying the `models.json` file:
```json
{
"title": "TRAIL Performance Leaderboard",
"description": "This leaderboard tracks and compares model performance across multiple metrics. Submit your model results to see how they stack up!",
"metrics": ["accuracy", "f1_score", "precision", "recall"],
"main_metric": "accuracy"
}
```
- `title`: The title of your leaderboard
- `description`: A description that appears at the top
- `metrics`: List of metrics to track
- `main_metric`: Default metric for sorting
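A robust way to consume this file is to merge it over defaults, so a missing or partial `models.json` still yields a working configuration. This is a sketch of that pattern, not the Space's actual loader; the default values are assumptions.

```python
import json

# Assumed fallback values; the real Space may ship different defaults.
DEFAULTS = {
    "title": "Model Performance Leaderboard",
    "description": "",
    "metrics": ["accuracy"],
    "main_metric": "accuracy",
}

def load_config(path="models.json"):
    try:
        with open(path) as f:
            user = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        user = {}
    cfg = {**DEFAULTS, **user}
    # Keep main_metric consistent with the configured metric list.
    if cfg["main_metric"] not in cfg["metrics"]:
        cfg["main_metric"] = cfg["metrics"][0]
    return cfg
```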
## Technical Details
This leaderboard is built using:
- Gradio for the UI components
- A file-based database to store submissions
- Pandas for data manipulation and display
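Putting those pieces together, a file-based store plus pandas might look like the sketch below: read one JSON record per line and rank rows by a metric. The file layout and record shape are assumptions, not the Space's actual code.

```python
import json
import pandas as pd

# Read JSON Lines submissions into a DataFrame and rank by the chosen metric.
def build_leaderboard(path, metric):
    rows = []
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            rows.append({"model": rec["model"], **rec.get("metrics", {})})
    df = pd.DataFrame(rows)
    return df.sort_values(metric, ascending=False).reset_index(drop=True)
```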
## License
This project is open source and available under the MIT license.