transZ committed
Commit 3acc566 · 1 Parent(s): 5f14d86
Files changed (1): README.md (+106 -29)
README.md CHANGED
@@ -1,50 +1,127 @@
  ---
- title: test_parascore
- datasets:
- -
- tags:
- - evaluate
- - metric
- description: "TODO: add a description here"
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
  ---

- # Metric Card for test_parascore

- ***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*

- ## Metric Description
- *Give a brief overview of this metric, including what task(s) it is usually used for, if any.*

- ## How to Use
- *Give general statement of how to use the metric*

- *Provide simplest possible example for using the metric*

- ### Inputs
- *List all input arguments in the format below*
- - **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*

- ### Output Values

- *Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*

- *State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*

- #### Values from Popular Papers
- *Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*

- ### Examples
- *Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*

- ## Limitations and Bias
- *Note any known limitations or biases that the metric has, with links and references if possible.*

  ## Citation
- *Cite the source where this metric was introduced.*

- ## Further References
- *Add any useful further references.*
  ---
+ title: Test ParaScore
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: red
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - metric
+ description: >-
+ BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference
+ sentences by cosine similarity.
+ It has been shown to correlate with human judgment on sentence-level and system-level evaluation.
+ Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language
+ generation tasks.
+
+ See the project's README at https://github.com/Tiiiger/bert_score#readme for more information.
  ---

+ # Metric Card for BERT Score
+
+ ## Metric description
+
+ BERTScore is an automatic evaluation metric for text generation that computes a similarity score for each token in the candidate sentence with each token in the reference sentence. It leverages the pre-trained contextual embeddings from [BERT](https://huggingface.co/bert-base-uncased) models and matches words in candidate and reference sentences by cosine similarity.
+
+ Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.
+
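To make the matching step concrete, here is a minimal illustrative sketch (not the actual `bert_score` implementation; the embeddings below are random placeholders rather than BERT outputs) of how greedy cosine-similarity matching between token embeddings yields precision, recall, and F1 for one candidate/reference pair:

```python
import numpy as np

# Made-up contextual embeddings: rows are tokens, columns are embedding dimensions.
# In the real metric these come from a BERT-style encoder, not random vectors.
rng = np.random.default_rng(0)
candidate = rng.normal(size=(4, 8))   # 4 candidate tokens
reference = rng.normal(size=(5, 8))   # 5 reference tokens

# Cosine similarity between every candidate token and every reference token.
cand = candidate / np.linalg.norm(candidate, axis=1, keepdims=True)
ref = reference / np.linalg.norm(reference, axis=1, keepdims=True)
sim = cand @ ref.T                    # shape (4, 5)

# Greedy matching: each candidate token keeps its best reference match (precision),
# each reference token keeps its best candidate match (recall).
precision = sim.max(axis=1).mean()
recall = sim.max(axis=0).mean()
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```

The actual metric can additionally weight tokens by idf and rescale the scores with a pre-computed baseline, which this sketch omits.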
+ ## How to use
+
+ BERTScore takes 3 mandatory arguments: `predictions` (a list of strings of candidate sentences), `references` (a list of strings or list of lists of strings of reference sentences) and either `lang` (a string of two letters indicating the language of the sentences, in [ISO 639-1 format](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)) or `model_type` (a string specifying which model to use, according to the BERT specification). The default behavior of the metric is to use the suggested model for the target language when one is specified, otherwise to use the `model_type` indicated.
+
+ ```python
+ from evaluate import load
+ bertscore = load("bertscore")
+ predictions = ["hello there", "general kenobi"]
+ references = ["hello there", "general kenobi"]
+ results = bertscore.compute(predictions=predictions, references=references, lang="en")
+ ```
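As noted above, `references` can also be a list of lists when each prediction has several acceptable references; a minimal sketch of such a call (the reference strings here are made up for illustration):

```python
from evaluate import load

bertscore = load("bertscore")
predictions = ["hello there"]
# One prediction with several acceptable references.
references = [["hello there", "hi there", "hello over there"]]
results = bertscore.compute(predictions=predictions, references=references, lang="en")
print(results["f1"])
```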
+
+ BERTScore also accepts multiple optional arguments:
+
+ `num_layers` (int): The layer of representation to use. The default is the number of layers tuned on WMT16 correlation data, which depends on the `model_type` used.
+
+ `verbose` (bool): Turn on intermediate status updates. The default value is `False`.
+
+ `idf` (bool or dict): Use idf weighting; can also be a precomputed idf_dict.
+
+ `device` (str): The device on which the contextual embedding model will be allocated. If this argument is `None`, the model lives on `cuda:0` if CUDA is available.
+
+ `nthreads` (int): Number of threads used for computation. The default value is `4`.
+
+ `rescale_with_baseline` (bool): Rescale BERTScore with the pre-computed baseline. The default value is `False`. `lang` needs to be specified when `rescale_with_baseline` is `True`.
+
+ `batch_size` (int): BERTScore processing batch size.
+
+ `baseline_path` (str): Customized baseline file.
+
+ `use_fast_tokenizer` (bool): `use_fast` parameter passed to the HF tokenizer. The default value is `False`.
+
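For illustration, a call that combines several of these optional arguments might look like the following sketch (the resulting scores depend on the model that gets downloaded for `lang="en"`):

```python
from evaluate import load

bertscore = load("bertscore")
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]

# idf weighting, baseline rescaling (which requires `lang`), verbose progress
# output, and an explicit batch size, all passed as keyword arguments.
results = bertscore.compute(
    predictions=predictions,
    references=references,
    lang="en",
    idf=True,
    rescale_with_baseline=True,
    verbose=True,
    batch_size=32,
)
print(results["f1"])
```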
+ ## Output values
+
+ BERTScore outputs a dictionary with the following values:
+
+ `precision`: The [precision](https://huggingface.co/metrics/precision) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
+
+ `recall`: The [recall](https://huggingface.co/metrics/recall) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
+
+ `f1`: The [F1 score](https://huggingface.co/metrics/f1) for each sentence from the `predictions` + `references` lists, which ranges from 0.0 to 1.0.
+
+ `hashcode`: The hashcode of the library and the configuration used to compute the scores.
+
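Because the scores are returned per sentence, a corpus-level figure is usually obtained by averaging them, for example:

```python
from evaluate import load

bertscore = load("bertscore")
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
results = bertscore.compute(predictions=predictions, references=references, lang="en")

# One score per sentence pair; average for a single corpus-level value.
corpus_f1 = sum(results["f1"]) / len(results["f1"])
print(round(corpus_f1, 4))
```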
+ ### Values from popular papers
+ The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) reported average model selection accuracies (Hits@1) on WMT18 hybrid systems for different language pairs, which ranged from 0.004 for `en<->tr` to 0.824 for `en<->de`.

+ For more recent model performance, see the [metric leaderboard](https://paperswithcode.com/paper/bertscore-evaluating-text-generation-with).

+ ## Examples

+ Maximal values with the `distilbert-base-uncased` model:

+ ```python
+ from evaluate import load
+ bertscore = load("bertscore")
+ predictions = ["hello world", "general kenobi"]
+ references = ["hello world", "general kenobi"]
+ results = bertscore.compute(predictions=predictions, references=references, model_type="distilbert-base-uncased")
+ print(results)
+ {'precision': [1.0, 1.0], 'recall': [1.0, 1.0], 'f1': [1.0, 1.0], 'hashcode': 'distilbert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
+ ```

+ Partial match with the `distilbert-base-uncased` model:

+ ```python
+ from evaluate import load
+ bertscore = load("bertscore")
+ predictions = ["hello world", "general kenobi"]
+ references = ["goodnight moon", "the sun is shining"]
+ results = bertscore.compute(predictions=predictions, references=references, model_type="distilbert-base-uncased")
+ print(results)
+ {'precision': [0.7380737066268921, 0.5584042072296143], 'recall': [0.7380737066268921, 0.5889028906822205], 'f1': [0.7380737066268921, 0.5732481479644775], 'hashcode': 'bert-base-uncased_L5_no-idf_version=0.3.10(hug_trans=4.10.3)'}
+ ```

+ ## Limitations and bias

+ The [original BERTScore paper](https://openreview.net/pdf?id=SkeHuCVFDr) showed that BERTScore correlates well with human judgment on sentence-level and system-level evaluation, but this depends on the model and language pair selected.

+ Furthermore, not all languages are supported by the metric -- see the [BERTScore supported language list](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for more information.

+ Finally, calculating the BERTScore metric involves downloading the BERT model that is used to compute the score; the default model for `en`, `roberta-large`, takes over 1.4GB of storage space, and downloading it can take a significant amount of time depending on the speed of your internet connection. If this is an issue, choose a smaller model; for instance, `distilbert-base-uncased` is 268MB. A full list of compatible models can be found [here](https://docs.google.com/spreadsheets/d/1RKOVpselB98Nnh_EOC4A2BYn8_201tmPODpNWu4w7xI/edit#gid=0).

  ## Citation
 
+ ```bibtex
+ @inproceedings{bert-score,
+ title={BERTScore: Evaluating Text Generation with BERT},
+ author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
+ booktitle={International Conference on Learning Representations},
+ year={2020},
+ url={https://openreview.net/forum?id=SkeHuCVFDr}
+ }
+ ```
+
+ ## Further References
+ - [BERTScore Project README](https://github.com/Tiiiger/bert_score#readme)
+ - [BERTScore ICLR 2020 Poster Presentation](https://iclr.cc/virtual_2020/poster_SkeHuCVFDr.html)