---
title: Top5 Error Rate
emoji: π
colorFrom: yellow
colorTo: blue
sdk: gradio
sdk_version: 5.24.0
app_file: app.py
pinned: false
tags:
- evaluate
- metric
---
# Metric Card for Top-5 Error Rate

## Metric Description

The top-5 error rate is the percentage of cases in which the target label does not appear among the model's 5 highest-probability predictions. It can be computed as:

Top-5 Error Rate = 1 - Top-5 Accuracy

or equivalently:

Top-5 Error Rate = (Number of incorrect top-5 predictions) / (Total number of cases processed)

Where:
- Top-5 Accuracy: the proportion of cases in which the true label is among the model's top 5 predicted classes.
- Incorrect top-5 prediction: the true label is not among the top 5 predicted classes (ranked by probability), as illustrated in the sketch below.
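
For intuition, here is a minimal sketch that computes this quantity directly with `torch.topk`. It illustrates the definition only and is not the metric's actual implementation; the tensor shapes and toy data are assumptions:

```python
import torch

def top5_error_rate(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of samples whose true label is not among the 5 highest-scoring classes."""
    top5 = logits.topk(5, dim=-1).indices             # indices of the 5 largest scores, shape (batch, 5)
    hit = (top5 == labels.unsqueeze(-1)).any(dim=-1)  # True where the label appears in the top 5
    return 1.0 - hit.float().mean().item()

logits = torch.randn(8, 100)          # toy batch: 8 samples, 100 classes
labels = torch.randint(0, 100, (8,))  # ground-truth class indices
print(top5_error_rate(logits, labels))
```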
## How to Use

At minimum, this metric requires predictions and references as inputs.
```python
import evaluate
import torch

accuracy_metric = evaluate.load("Aye10032/top5_error_rate")

# `batch_data`, `datas`, and `model` are assumed to come from the surrounding training loop
labels: torch.Tensor = batch_data['labels']  # ground-truth class indices
train_output = model(datas)                  # per-class scores, shape (batch, num_classes)

# per the usual `evaluate` convention: references take the true labels,
# predictions take the model's scores
results = accuracy_metric.compute(references=labels.cpu(), predictions=train_output.cpu())
print(results)
```
The output is:

```
{'top5_error_rate': ..., 'accuracy': ...}
```
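
Outside a training loop, the metric can presumably be called on any array of per-class scores with matching labels. This toy call (the class count and score values are made up) mirrors the interface shown above:

```python
import evaluate
import torch

metric = evaluate.load("Aye10032/top5_error_rate")

# hypothetical toy batch: 2 samples, 6 classes of per-class scores
preds = torch.tensor([[0.10, 0.40, 0.05, 0.20, 0.15, 0.10],
                      [0.30, 0.10, 0.10, 0.10, 0.30, 0.10]])
refs = torch.tensor([1, 5])  # true class index for each sample

print(metric.compute(predictions=preds, references=refs))
```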