Update README.md
README.md CHANGED

````diff
@@ -1,50 +1,47 @@
 ---
-title:
+title: FBeta_Score
 datasets:
 -
 tags:
 - evaluate
 - metric
-description: "
+description: "Calculate FBeta_Score"
 sdk: gradio
 sdk_version: 3.0.2
 app_file: app.py
 pinned: false
 ---
 
-# Metric Card for
+# Metric Card for FBeta_Score
 
 ***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*
 
 ## Metric Description
-*
+*Compute the F-beta score.
+The F-beta score is the weighted harmonic mean of precision and recall, reaching its optimal value at 1 and its worst value at 0.
+The beta parameter determines the weight of recall in the combined score. beta < 1 lends more weight to precision, while beta > 1 favors recall (beta -> 0 considers only precision, beta -> +inf only recall).*
 
 ## How to Use
-
+``` python
 
-
+f_beta = evaluate.load("leslyarun/f_beta")
+results = f_beta.compute(references=[0, 1], predictions=[0, 1], beta=0.5)
+print(results)
+{'f_beta_score': 1.0}
 
-
-*List all input arguments in the format below*
-- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*
-
-### Output Values
-
-*Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*
-
-*State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*
-
-#### Values from Popular Papers
-*Give examples, preferably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*
-
-### Examples
-*Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*
-
-## Limitations and Bias
-*Note any known limitations or biases that the metric has, with links and references if possible.*
+```
 
 ## Citation
-
+@article{scikit-learn,
+  title={Scikit-learn: Machine Learning in {P}ython},
+  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
+          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
+          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
+          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
+  journal={Journal of Machine Learning Research},
+  volume={12},
+  pages={2825--2830},
+  year={2011}}
 
 ## Further References
-
+https://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html#sklearn.metrics.fbeta_score
````
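
The updated card describes beta's effect only in words. As a small illustration (ours, not part of the commit), the formula behind that description is F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall); the sketch below computes it directly, with hypothetical precision/recall values chosen to show how beta shifts weight between the two.

```python
# Minimal sketch of the F-beta formula the card describes.
# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
# The function name and example values here are ours, for illustration only.

def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0  # avoid division by zero; mirrors scikit-learn's default
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# beta < 1 pulls the score toward precision, beta > 1 toward recall,
# matching the card's description.
print(f_beta(precision=0.5, recall=1.0, beta=0.5))  # ~0.556, nearer precision
print(f_beta(precision=0.5, recall=1.0, beta=2.0))  # ~0.833, nearer recall
```

At beta = 1 this reduces to the familiar F1 score, 2PR/(P + R).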
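
The card cites scikit-learn, and its Further References link points at `sklearn.metrics.fbeta_score`, so the usage example's output should be reproducible with scikit-learn directly. Here is a sketch of that cross-check, assuming the Space computes the same binary-average F-beta as scikit-learn's default; that assumption rests on the card's references, not on anything the diff states.

```python
# Cross-check of the card's usage example against scikit-learn.
# Assumption: the Space's f_beta matches sklearn's default binary fbeta_score.
from sklearn.metrics import fbeta_score

references = [0, 1]   # same inputs as the card's example
predictions = [0, 1]

score = fbeta_score(references, predictions, beta=0.5)
print(score)  # 1.0, matching the card's {'f_beta_score': 1.0}
```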