mathiascreutz committed

Commit d6a4efb · Parent: 04ebbb9

Minor modifications
README.md CHANGED
@@ -123,8 +123,8 @@ data = load_dataset("GEM/opusparcus", lang="de")
 
 The above command will download the validation and test sets for
 German. If additionally, you want to retrieve training data, you need
-to specify the level of quality you desire, such as "90%
-
+to specify the level of quality you desire, such as "French, with 90%
+quality of the training data":
 
 ```
 data = load_dataset("GEM/opusparcus", lang="fr", quality=90)
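A minimal end-to-end sketch of the loading pattern described in this passage, assuming the Hugging Face `datasets` library; the split names and sizes printed depend on the chosen configuration and are illustrative rather than guaranteed:

```
from datasets import load_dataset

# Validation and test sets only; no training data is downloaded.
data_de = load_dataset("GEM/opusparcus", lang="de")

# Additionally retrieve French training data at the 90% quality level.
data_fr = load_dataset("GEM/opusparcus", lang="fr", quality=90)

# Inspect which splits were retrieved and how many pairs each holds.
for split_name, split in data_fr.items():
    print(split_name, len(split))
```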
@@ -283,55 +283,85 @@ and largest (`quality=60`) train configuration have been shown.
 
 ### Curation Rationale
 
-
+Opusparcus was created in order to produce a *sentential* paraphrase corpus
+for multiple languages containing *colloquial* language (as opposed to
+news or religious text, for instance).
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
-
+The data in Opusparcus has been extracted from
+[OpenSubtitles2016](http://opus.nlpl.eu/OpenSubtitles2016.php), which
+is in turn based on data from http://www.opensubtitles.org/.
+
+The sentences have been tokenized.
 
 #### Who are the source language producers?
 
-
+The texts consist of subtitles that have been produced using
+crowdsourcing.
 
 ### Annotations
 
 #### Annotation process
 
-
+The development and test sets consist of sentence
+pairs that have been annotated manually; each set contains
+approximately 1000 sentence pairs that have been verified to be
+acceptable paraphrases by two independent annotators.
+
+The `annot_score` field reflects the judgments made by the annotators.
+If the annotators fully agreed on the category (4.0: dark green,
+3.0: light green, 2.0: yellow, 1.0: red), the value of
+`annot_score` is 4.0, 3.0, 2.0 or 1.0. If the two annotators
+chose adjacent categories, the value in this field will be 3.5, 2.5 or
+1.5. For instance, a value of 2.5 means that one annotator gave a
+score of 3 ("mostly good"), indicating a possible paraphrase pair,
+whereas the other annotator scored this as a 2 ("mostly bad"), that
+is, unlikely to be a paraphrase pair. If the annotators disagreed by
+more than one category, the sentence pair was discarded and won't show
+up in the datasets.
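A small sketch of how the `annot_score` field described above might be used to keep only pairs that both annotators considered good paraphrases; the `validation` split name and the 3.0 cutoff are illustrative assumptions, not part of the card:

```
from datasets import load_dataset

data = load_dataset("GEM/opusparcus", lang="fr", quality=90)

# annot_score >= 3.0 keeps pairs rated "mostly good" (3.0), "good"
# (4.0), or the adjacent-category average 3.5; the split name
# "validation" is an assumption made for this sketch.
valid = data["validation"]
good_pairs = valid.filter(lambda ex: ex["annot_score"] >= 3.0)
print(len(good_pairs), "of", len(valid), "pairs kept")
```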
         | 
| 326 | 
         
             
            #### Who are the annotators?
         
     | 
| 327 | 
         | 
| 328 | 
         
            +
            Students and staff at the University of Helsinki (native or very
         
     | 
| 329 | 
         
            +
            proficient speakers of the target languages)
         
     | 
| 330 | 
         | 
| 331 | 
         
             
            ### Personal and Sensitive Information
         
     | 
| 332 | 
         | 
| 333 | 
         
            +
            The datasets do not contain any personal or sensitive information.
         
     | 
| 334 | 
         | 
| 335 | 
         
             
            ## Considerations for Using the Data
         
     | 
| 336 | 
         | 
| 337 | 
         
             
            ### Social Impact of Dataset
         
     | 
| 338 | 
         | 
| 339 | 
         
            +
            The goal of Opusparcus is to promote the support for colloquial language.
         
     | 
| 340 | 
         | 
| 341 | 
         
             
            ### Discussion of Biases
         
     | 
| 342 | 
         | 
| 343 | 
         
            +
            The data reflect the biases present in the movies and TV shows that
         
     | 
| 344 | 
         
            +
            have been subtitled.
         
     | 
| 345 | 
         | 
| 346 | 
         
             
            ### Other Known Limitations
         
     | 
| 347 | 
         | 
| 348 | 
         
            +
            The sentence pairs in the validation and test sets have been selected
         
     | 
| 349 | 
         
            +
            in such a manner that their Levenshtein distance (minimum edit
         
     | 
| 350 | 
         
            +
            distance) exceeds a certain theshold. This guarantees that the manual
         
     | 
| 351 | 
         
            +
            annotation effort focuses on "interesting" sentence pairs rather than
         
     | 
| 352 | 
         
            +
            trivial variations (such as "It is good." vs. "It's good."). The
         
     | 
| 353 | 
         
            +
            training sets, however, have not been prefiltered in this manner and
         
     | 
| 354 | 
         
            +
            thus also contain highly similar sentences.
         
     | 
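The Levenshtein filtering mentioned above can be illustrated with a plain dynamic-programming implementation; the actual threshold used for Opusparcus is not stated in the card, so none is assumed here:

```
# Classic dynamic-programming Levenshtein (minimum edit) distance.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# The "trivial variation" example from the card is only 2 edits apart,
# so such a pair would fall below any reasonable distance threshold.
print(levenshtein("It is good.", "It's good."))  # 2
```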
 
 ## Additional Information
 
 ### Dataset Curators
 
-
+Mathias Creutz, University of Helsinki, Finland
 
 ### Licensing Information
 
-
+CC-BY-NC 4.0
 
 ### Citation Information
 