Dataset schema (one record per model):

| Column | Dtype | Range / values |
|---|---|---|
| `modelId` | string | length 5 to 139 |
| `author` | string | length 2 to 42 |
| `last_modified` | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-22 00:45:16 |
| `downloads` | int64 | 0 to 223M |
| `likes` | int64 | 0 to 11.7k |
| `library_name` | string | 570 distinct values |
| `tags` | list | length 1 to 4.05k |
| `pipeline_tag` | string | 55 distinct values |
| `createdAt` | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-22 00:43:28 |
| `card` | string | length 11 to 1.01M |
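The records below follow this schema. As a minimal sketch of how such a dump can be loaded and filtered with the `datasets` library (the Parquet file name is a placeholder, not part of the original dump):

```python
from datasets import load_dataset

# Load the metadata dump from a local Parquet export (placeholder file name).
ds = load_dataset("parquet", data_files="model_metadata.parquet", split="train")

# Columns mirror the schema above:
# modelId, author, last_modified, downloads, likes,
# library_name, tags, pipeline_tag, createdAt, card
print(ds.column_names)

# Example: token-classification models with at least 100 downloads.
ner_rows = ds.filter(
    lambda row: row["pipeline_tag"] == "token-classification" and row["downloads"] >= 100
)
print(ner_rows["modelId"][:5])
```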
pucpr/clinicalnerpt-diagnostic
pucpr
2021-10-13T09:33:19Z
200
5
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Uretrocistografia miccional, residuo pos miccional significativo." - text: "No exame, apresentou apenas leve hiperemia no local do choque." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Diagnostic The Diagnostic NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-disease
pucpr
2021-10-13T09:33:02Z
104
9
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "DEVIDO AO FATO DE TER DPOC E APRESENTADO DISFUNÇÃO RESPIRATÓRIA AGUDA COM INFILTRADO PULMONAR EM BASE DIREITA" - text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Disease The Disease NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-disorder
pucpr
2021-10-13T09:32:51Z
104
5
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "PACIENTE DE 69 ANOS COM ICC DE ETIOLOGIA ISQUÊMICA " - text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Disorder The Disorder NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-finding
pucpr
2021-10-13T09:32:39Z
5
5
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "RECEBE ALTA EM BOM ESTADO GERAL, COM PLANO DE ACOMPANHAR NO AMBULATÓRIO." - text: "PACIENTE APRESENTOU BOA EVOLUÇÃO CLÍNICA APÓS OTIMIZAÇÃO DO TTO DA ICC." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Finding The Finding NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-laboratory
pucpr
2021-10-13T09:32:17Z
5
3
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Exame de creatinina urinaria: 41, 8 mg/dL." - text: "Parcial de urina com 150mg/dL de priteinas, ph de 5,0 e 1034 leucocitos." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Laboratory The Laboratory NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-procedure
pucpr
2021-10-13T09:32:04Z
96
4
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Dispneia venoso central em subclavia D duplolumen recebendo solução salina e glicosada em BI." - text: "FOI REALIZADO CURSO DE ATB COM LEVOFLOXACINA POR 7 DIAS." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Procedure The Procedure NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-quantitative
pucpr
2021-10-13T09:31:50Z
5
4
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Paciente faz uso de losartana 50mg, HCTZ 25mg DM ha 25 anos." - text: "Paciente com Sepse pulmonar em D8 tazocin (paciente não recebeu por 2 dias Atb)." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Quantitative The Quantitative NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
pucpr/clinicalnerpt-medical
pucpr
2021-10-13T09:28:28Z
150
6
transformers
[ "transformers", "pytorch", "bert", "token-classification", "pt", "dataset:SemClinBr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "Hoje realizou avaliacao de mp-cdi, com eletrodos atrial e ventricular." - text: "Paciente encaminhado a câmera hiperbárica no período da tarde." datasets: - SemClinBr thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # Portuguese Clinical NER - Medical The Medical NER model is part of the [BioBERTpt project](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/), where 13 models of clinical entities (compatible with UMLS) were trained. All NER model from "pucpr" user was trained from the Brazilian clinical corpus [SemClinBr](https://github.com/HAILab-PUCPR/SemClinBr), with 10 epochs and IOB2 format, from BioBERTpt(all) model. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
S34NtheGuy/DialoGPT-small-Harry282
S34NtheGuy
2021-10-12T17:21:19Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational --- # DialoGPT chatbot model trained on Discord message data
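The card above is minimal; the following sketch shows single-turn generation with this DialoGPT checkpoint using the usual DialoGPT pattern (the prompt and generation settings are illustrative, not taken from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "S34NtheGuy/DialoGPT-small-Harry282"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode one user turn, terminated by the EOS token as DialoGPT expects.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and decode only the newly generated tokens.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```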
lewtun/xlm-roberta-base-finetuned-marc-500-samples
lewtun
2021-10-12T15:12:51Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification ---
biu-nlp/alephbert-base
biu-nlp
2021-10-12T10:58:33Z
82
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "language model", "he", "dataset:oscar", "dataset:wikipedia", "dataset:twitter", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - he tags: - language model license: apache-2.0 datasets: - oscar - wikipedia - twitter --- # AlephBERT ## Hebrew Language Model State-of-the-art language model for Hebrew. Based on Google's BERT architecture [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805). #### How to use ```python from transformers import BertModel, BertTokenizerFast alephbert_tokenizer = BertTokenizerFast.from_pretrained('onlplab/alephbert-base') alephbert = BertModel.from_pretrained('onlplab/alephbert-base') # if not finetuning - disable dropout alephbert.eval() ``` ## Training data 1. OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/) Hebrew section (10 GB text, 20 million sentences). 2. Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/) (650 MB text, 3 million sentences). 3. Hebrew Tweets collected from the Twitter sample stream (7 GB text, 70 million sentences). ## Training procedure Trained on a DGX machine (8 V100 GPUs) using the standard huggingface training procedure. Since the larger part of our training data is based on tweets we decided to start by optimizing using Masked Language Model loss only. To optimize training time we split the data into 4 sections based on max number of tokens: 1. num tokens < 32 (70M sentences) 2. 32 <= num tokens < 64 (12M sentences) 3. 64 <= num tokens < 128 (10M sentences) 4. 128 <= num tokens < 512 (1.5M sentences) Each section was first trained for 5 epochs with an initial learning rate set to 1e-4. Then each section was trained for another 5 epochs with an initial learning rate set to 1e-5, for a total of 10 epochs. Total training time was 8 days.
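To complement the loading snippet in the card, here is a short fill-mask sketch. The checkpoint name follows this row's modelId (the card's own code points at onlplab/alephbert-base), and the Hebrew cloze sentence is purely illustrative:

```python
from transformers import pipeline

# Masked-token prediction with AlephBERT.
fill_mask = pipeline("fill-mask", model="biu-nlp/alephbert-base")

# Illustrative Hebrew sentence: "Hello, I [MASK] in Tel Aviv."
for prediction in fill_mask("שלום, אני [MASK] בתל אביב."):
    print(prediction["token_str"], round(prediction["score"], 3))
```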
m3hrdadfi/xlmr-large-qa-fa
m3hrdadfi
2021-10-12T08:36:53Z
339
5
transformers
[ "transformers", "pytorch", "tf", "xlm-roberta", "question-answering", "roberta", "squad", "fa", "multilingual", "dataset:SajjadAyoubi/persian_qa", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - fa - multilingual tags: - question-answering - xlm-roberta - roberta - squad datasets: - SajjadAyoubi/persian_qa metrics: - squad_v2 widget: - text: "کاربردهای لاپلاسین؟" context: "معادلهٔ لاپلاس یک معادله دیفرانسیل با مشتقات جزئی است که از اهمّیّت و کاربرد فراوانی در ریاضیّات، فیزیک، و مهندسی برخوردار است. به عنوان چند نمونه می‌شود به زمینه‌هایی همچون الکترومغناطیس، ستاره‌شناسی، و دینامیک سیالات اشاره کرد که حلّ این معادله در آن‌ها کاربرد دارد." - text: "نام دیگر شب یلدا؟" context: "شب یَلدا یا شب چلّه یکی از کهن‌ترین جشن‌های ایرانی است. در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها در نیم‌کرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته می‌شود. نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانی‌است." - text: "کهن ترین جشن ایرانی‌ها چه است؟" context: "شب یَلدا یا شب چلّه یکی از کهن‌ترین جشن‌های ایرانی است. در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها در نیم‌کرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته می‌شود. نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانی‌است." - text: "شب یلدا مصادف با چه پدیده‌ای است؟" context: "شب یَلدا یا شب چلّه یکی از کهن‌ترین جشن‌های ایرانی است. در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها در نیم‌کرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته می‌شود. نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانی‌است." model-index: - name: XLM-RoBERTa large for QA (PersianQA - 🇮🇷) results: - task: type: question-answering name: Question Answering dataset: type: SajjadAyoubi/persian_qa name: PersianQA args: fa metrics: - type: squad_v2 value: 83.46 name: Eval F1 args: max_order - type: squad_v2 value: 66.88 name: Eval Exact args: max_order --- # XLM-RoBERTa large for QA (PersianQA - 🇮🇷) This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the [PersianQA](https://github.com/sajjjadayobi/PersianQA) dataset. ## Hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20.0 - mixed_precision_training: Native AMP ## Performance Evaluation results on the eval set with the official [eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ### Evalset ```text "HasAns_exact": 58.678955453149, "HasAns_f1": 82.3746683591845, "HasAns_total": 651, "NoAns_exact": 86.02150537634408, "NoAns_f1": 86.02150537634408, "NoAns_total": 279, "exact": 66.88172043010752, "f1": 83.46871946433232, "total": 930 ``` ## Usage ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name_or_path = "m3hrdadfi/xlmr-large-qa-fa" nlp = pipeline('question-answering', model=model_name_or_path, tokenizer=model_name_or_path) context = """ شب یَلدا یا شب چلّه یکی از کهن‌ترین جشن‌های ایرانی است. در این جشن، طی شدن بلندترین شب سال و به دنبال آن بلندتر شدن طول روزها در نیم‌کرهٔ شمالی، که مصادف با انقلاب زمستانی است، گرامی داشته می‌شود. نام دیگر این شب «چِلّه» است، زیرا برگزاری این جشن، یک آیین ایرانی‌است. """ # Translation [EN] # context = [ # Yalda night or Cheleh night is one of the oldest Iranian celebrations. # The festival celebrates the longest night of the year, followed by longer days in the Northern Hemisphere, # which coincides with the Winter Revolution. 
# Another name for this night is "Chelleh", because holding this celebration is an Iranian ritual. # ] questions = [ "نام دیگر شب یلدا؟", "کهن ترین جشن ایرانی‌ها چه است؟", "شب یلدا مصادف با چه پدیده‌ای است؟" ] # Translation [EN] # questions = [ # Another name for Yalda night? # What is the ancient tradition of Iranian celebration? # What phenomenon does Yalda night coincide with? # ] kwargs = {} for question in questions: r = nlp(question=question, context=context, **kwargs) answer = " ".join([token.strip() for token in r["answer"].strip().split() if token.strip()]) print(f"{question} {answer}") ``` **Output** ```text نام دیگر شب یلدا؟ «چِلّه» کهن ترین جشن ایرانی‌ها چه است؟ شب یَلدا یا شب چلّه شب یلدا مصادف با چه پدیده‌ای است؟ انقلاب زمستانی # Translation [EN] # Another name for Yalda night? Cheleh night # What is the ancient tradition of Iranian celebration? Yalda night or Chele night # What phenomenon does Yalda night coincide with? Winter revolution ``` ## Authors - [Mehrdad Farahani](https://github.com/m3hrdadfi) ## Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.9.1+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
geckos/deberta-base-fine-tuned-ner
geckos
2021-10-12T08:05:37Z
399
2
transformers
[ "transformers", "pytorch", "tensorboard", "deberta", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: deberta-base-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9563020492186769 - name: Recall type: recall value: 0.9652436720816018 - name: F1 type: f1 value: 0.9607520564042303 - name: Accuracy type: accuracy value: 0.9899205302077261 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-finetuned-ner This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0501 - Precision: 0.9563 - Recall: 0.9652 - F1: 0.9608 - Accuracy: 0.9899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1419 | 1.0 | 878 | 0.0628 | 0.9290 | 0.9288 | 0.9289 | 0.9835 | | 0.0379 | 2.0 | 1756 | 0.0466 | 0.9456 | 0.9567 | 0.9511 | 0.9878 | | 0.0176 | 3.0 | 2634 | 0.0473 | 0.9539 | 0.9575 | 0.9557 | 0.9890 | | 0.0098 | 4.0 | 3512 | 0.0468 | 0.9570 | 0.9635 | 0.9603 | 0.9896 | | 0.0043 | 5.0 | 4390 | 0.0501 | 0.9563 | 0.9652 | 0.9608 | 0.9899 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
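The hyperparameters listed in the card map directly onto the `TrainingArguments` class; a sketch of the corresponding configuration (values copied from the card, the output directory is a placeholder):

```python
from transformers import TrainingArguments

# Training configuration mirroring the hyperparameters reported in the card;
# Adam betas/epsilon match the library defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="deberta-base-finetuned-ner",  # placeholder, not specified in the card
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```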
geckos/distilbert-base-uncased-fine-tuned-ner
geckos
2021-10-12T05:59:22Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9303228669699323 - name: Recall type: recall value: 0.9380243875153821 - name: F1 type: f1 value: 0.9341577540106952 - name: Accuracy type: accuracy value: 0.9842407104389407 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Precision: 0.9303 - Recall: 0.9380 - F1: 0.9342 - Accuracy: 0.9842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2459 | 1.0 | 878 | 0.0696 | 0.9117 | 0.9195 | 0.9156 | 0.9808 | | 0.0513 | 2.0 | 1756 | 0.0602 | 0.9223 | 0.9376 | 0.9299 | 0.9835 | | 0.0304 | 3.0 | 2634 | 0.0606 | 0.9303 | 0.9380 | 0.9342 | 0.9842 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
gauravtripathy/distilbert-base-uncased-finetuned-cola
gauravtripathy
2021-10-12T05:57:36Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5264763891845121 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7550 - Matthews Correlation: 0.5265 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5296 | 1.0 | 535 | 0.5144 | 0.4215 | | 0.3504 | 2.0 | 1070 | 0.4903 | 0.5046 | | 0.2393 | 3.0 | 1605 | 0.6339 | 0.5058 | | 0.175 | 4.0 | 2140 | 0.7550 | 0.5265 | | 0.1259 | 5.0 | 2675 | 0.8688 | 0.5259 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
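The card reports CoLA results but no inference example; a minimal sketch of running the fine-tuned classifier with the text-classification pipeline (the sentence is illustrative, and the label names come from the checkpoint's config rather than the card):

```python
from transformers import pipeline

# Grammatical-acceptability classification (CoLA) with the checkpoint above.
classifier = pipeline(
    "text-classification",
    model="gauravtripathy/distilbert-base-uncased-finetuned-cola",
)

# Returns a list with one {"label": ..., "score": ...} dict per input sentence.
print(classifier("The book was read by the student."))
```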
shokiokita/distilbert-base-uncased-finetuned-mrpc
shokiokita
2021-10-12T05:56:42Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.7328431372549019 - name: F1 type: f1 value: 0.8310077519379845 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5579 - Accuracy: 0.7328 - F1: 0.8310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 23 | 0.5797 | 0.7010 | 0.8195 | | No log | 2.0 | 46 | 0.5647 | 0.7083 | 0.8242 | | No log | 3.0 | 69 | 0.5677 | 0.7181 | 0.8276 | | No log | 4.0 | 92 | 0.5495 | 0.7328 | 0.8300 | | No log | 5.0 | 115 | 0.5579 | 0.7328 | 0.8310 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
V3RX2000/distilbert-base-uncased-finetuned-cola
V3RX2000
2021-10-12T02:10:11Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5396261051709696 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8107 - Matthews Correlation: 0.5396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5261 | 1.0 | 535 | 0.5509 | 0.3827 | | 0.3498 | 2.0 | 1070 | 0.4936 | 0.5295 | | 0.2369 | 3.0 | 1605 | 0.6505 | 0.5248 | | 0.1637 | 4.0 | 2140 | 0.8107 | 0.5396 | | 0.1299 | 5.0 | 2675 | 0.8738 | 0.5387 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
lincoln/barthez-squadFR-fquad-piaf-question-generation
lincoln
2021-10-11T15:24:58Z
425
4
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "seq2seq", "barthez", "fr", "dataset:squadFR", "dataset:fquad", "dataset:piaf", "arxiv:2010.12321", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - fr license: mit pipeline_tag: "text2text-generation" datasets: - squadFR - fquad - piaf metrics: - bleu - rouge widget: - text: "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus, des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\ Elle est souvent associée aux <hl>données massives et à l'analyse des données<hl>." tags: - seq2seq - barthez --- # Question generation from a context The model is fine-tuned from [moussaKam/barthez](https://huggingface.co/moussaKam/barthez) to generate questions from a paragraph and a span of tokens. The token span marks the answer on which the question is based. Input: _Les projecteurs peuvent être utilisés pour \<hl\>illuminer\<hl\> des terrains de jeu extérieurs_ Output: _À quoi servent les projecteurs sur les terrains de jeu extérieurs?_ ## Training data The training set is the concatenation of the SquadFR, [fquad](https://huggingface.co/datasets/fquad) and [piaf](https://huggingface.co/datasets/piaf) datasets. The input is the context, and the answer spans are wrapped with the special **\<hl\>** token. Volume (number of context/answer/question triplets): * train: 98 211 * test: 12 277 * valid: 12 776 ## Training Training ran on a single Tesla V100 card. * Batch size: 20 * Weight decay: 0.01 * Learning rate: 3e-5 (linear decay) * < 24h of training * Default parameters of the [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) class * Total steps: 56 000 (Embedded base64 image omitted.)
6MmXoGT0qB5I8c7BQEfBnb9hbfPH/qYLUTHA6sHJ18f2nDb5G6WCJG6jF6bn6JaTTHo4IKCx7rA8vr5OyYOjJlKFnPOKWIQQdKhY409EX4WNmpPvesKG476TVxcPPOPFgjRFl87JNB8u6ZTuK9x69pfjkaZuKf1s3E5w2NNQ8He6puWjDVC9LqVEieKRBKy3slGptowaGeYvLwVkfC53KkjQ1DD2ZMvSMz6COsUwnbVbSlGeo0KFCg/ww5+wLB/6YAMGofglpv8v4u1Da3jitKayl7XeofpOUDUNPpgw904ljOGdquOq5HoI41h/2MZ4XEpLqJdox1WuU2oQmxkmDVlpYgHrbqKZpHG5h+rbBkbJj6MmUoScPEYIiMxB8qNyZeNRUtQktvcKapCwZejJl6MkPFSdR0TEVwUeSFsnQkylDT544CxXBh6vbJSknhp5MGXryRS1PBB8aOUtSLgw9mTL05I1mLdHQmQucbNMrKQeGnkwZekQb3wg+ca/CXiVuxbPQLXNoX8xw9inxminfv0iaIIaeTBl6BGp4uF1NnO5abIlQtNCzNhnOqTQDkKRxMvRkytCjQPCpX9FdL3ErnoVumUPNEcO5DU68ph6qIgDNu2GiJA2ZoSdThh4NQoSihZ46wXDuF1QPQJxWW/BRHJI0IIaeTBl6NC5NAYhTZAsFJ0laKkNPpgw96gIun08fl0HD516nvAhFPAyegBTjx6O1uOdQNJqmIbWnzSQ1MfRkytCjriCgpHeL5pQXp8wQQSceir7YEg2tCURMh0BkjZKUL0NPpgw96hqCTnrKq+lB6jSQpnYoanKiTVE0mmY4DanrD1utFx/DIeXJ0JMpQ4+6ivY+EVr4Sy3Q4TZ2JhDxWgJRPLQ9go+P4ZDyY+jJlKFHXUZNTpziGjQfwyHly9CTKUOPckbtT9QmGXykfBh6OuTd7353sWLFiuK4444rzjjjjKpvf+95z3uK5z3veTM772Vl2UyLzRYMPcpd+hgOnz8m5cHQ0xFvf/vbi1NOOaXYvXt3sWfPnmLNmjXFOeecUw1t9id/8ifF8ccfX1x55ZVl986dOw090iIYfKS8GHo64thjjy0+9KEPVV1FccUVV5Q1N4SgJvfee+/Mznp5cdVVV1V9FsfQI80i6MRVYwQfgpCk6WTo6YCHH364DDg333xz1WcW/bZt21Z1zfWJT3yiHH777bcXF1xwQVl6BaQmhh7pkDT48GwwH40hTSdDTwfceuutZYB59NFHqz6z6HfeeedVXXOdf/755XDa/6xfv75405veNLOzPqr4wAc+UI0x10033VS8/OUvP1hWrVpVji9pFsGH+/wQfChc0u7pLmm6GHo6oF/o2cJNSxrQn+EXXnhh1Yed9May3/79+6s+hzz00EPFpz/96YPlwx/+cHHMMcdUQyUFvnKEHoqnu6TpYujpgH6nt7Zv3151zUV/hu/bt6/qUxR33XVX2a/NaS5Pb0m9EXTS0108wkLS5DP0dASnqf7xH/+x6irKBsr9Agz9GZ42ZI7an//+7/+u+vRm6JH649RW+kywdes83SVNOkNPR3BqikvWaXtzxx13FC972cvmXLL+la98pTj11FOL+++/v+pTlMO5tJ1L1a+99tri5JNPLk4//fRqaH+GHqmd9EaG1PoQfuLhpW0QlBjX02TS+Bl6OoQbDZ5wwgllGKnfnPCWW24pXvCCF8w5nQXGY3xe1/YePTD0SO3xZPb0Yahpiae4E2wIQ/xPMKJ/r/FpJL2Y4CRpMAw9mTL0SItH+OHZXZz26hWC6oVaIh50euKJzcMp3hhRGg1DT6YMPdLSEVQ4/bVhw2ywIQzxRHf69XpgKv25QqwenKgdkjRchp5MGXqkbqD2KNoMeVNEabgMPZky9EjdEfcGWrHC01zSMBl6MmXokbqF02MEH06VSRoOQ0+mDD1St3BJe7Tv8fJ2aTgMPZky9EjdQyNoQg9Xc0kaPENPpgw9UvfQnicubScASRosQ0+mDD1SN3FJO6GHuz9zZZekwTH0ZMrQI3VXPPOLuzdLGhxDT6YMPVJ3cZor7t3DHaDrGM4jLHjkRVouu2y2fxQbREtzGXoyZeiRuo2wE6e5CDOEGmp+6KZ/2+IND6VDDD2ZMvRI3Rf37qmXeJ4X9/ShwXMUTovRnxKPuKBfzqgV42o4AiOP+khrxLwRZH4MPZky9EjdR0NmAszatbOhhkbObRs3x31/uMtzzqJheL9CGDIA5cHQkylDjzT94vL3nNv2EBZZB9R4ccqQbkJkvRaN/pp+hp5MGXqk6RdXgfFsr1wRcFgHvdo2pTVB3iJg+hl6MmXokaYfB3oO5tRq5IrTewsFmgiHnObSdDP0ZMrQI00/2qlELUaObVYIOiw7Db/7Yby4RQA1P5pehp5MGXqkPETblRwvXV9MTVe0/fGGkNPN0JMpQ4+Uh7Qhb25i2ds0UqYmLBp+N90QUtPB0JMpQ4+Uh5wvXV9sLVfcEJJ15SXs08nQkylDj5SPaK+S29VJcffqxSx3BKVBXMLO+3ITRI6x8agQGktzCo1gxd8o69cfGoeSU9sigjnraevW2WVnXcQ6GtRnEQw9mTL0SPnI8dL1to2Y6+IS9jZPuY/Hg2zceCi8xIF6EIXpDhs1WiwHd62O9407WLNsg7x7NeEmAiDhL33PfsXQszgzq0x1hh4pH3HahnvW5GIpl+tHSOzVDoog0Cbc0EaI96dw4KYwXwQrAhV/o8SNEyk8XiSmQfgYxqk2jvsEj3R+FyosM8u+GPVA1VR4ZArriPXN8rMuYh0NmqEnU4YeKR8cNOMAkwsOniwvfxeLg23TJez1sEOoYfrUoEV4GdTdr5lWzAPvudTpxikkao/qD60lDKeNtxmXcMayNd29eqHww/bGqap6MCTcMD2my/QHta4Ww9CTKUOPlJd4AGnbRr2TLg7Uh7u8EZriNE8aFAg7aUgYFsJDfG6Uhd6T8eP0UbSL6fVUfqZLWFtMbQrvH1e4Uerhh/cf17pqy9CTKUOPlJc4ZcLfHMSB93BPkXAATw/wFIJUWvMzKnG6jUKYCSwboYN+bU8hEeaWWsPSFH7qNUjjWlcLMfRkytAj5YUDEAcjDo7TjjDAsi62EXMdtURdOYATNNLTXfVTR1Ei2MQpt8MNfW3Uw0+8fxfDTjD0ZMrQI+UnDprDPBB2QRpWlooan66ghiYNGXyetJGJgDMuhB9qo7ocdoKhJ1OGHik/HCA5WHapjcUwRHsc/k4bQhif3zgaAU8DQ0+mDD1SfjhYEgYIP9OMGh6WM5dG22rP0JMpQ4+Un2jrQoPTabbURsyaXoaeTBl6pDxFm5BJaH9xOAbViFnTydCTKUOPlKe4dH0a27tgkI2YNX0MPZky9Eh5ilAwrZeuT3MjZi2doSdThh4pX4QCSpcuxx4UGzGrH0NPh9x+++3FJZdcUlx00UXF9ddfX/VtZ+fOncUuHqzSkqFHytc0X7puI2b1Y+jpiB0zP0sIIevXry/OOeecmS/tsmL79u3V0P54LeNT2jL0SPniZnbsLnjK9jTV9tiIWQ
sx9HTEmjVris08qa3C/6tWraq6ejsws8c6auanzdqZn26GHkltEA7SRxpMy43ubMSshRh6OuDJJ58sAws1NmHvzF6JfldffXXVp9lZZ51VbNq0qSyGHkltEXTSJ3gnv7kmlo2YtRBDTwfcfffdZWDZs2dP1WcW/Wjf0wvteFay15qxUOjZv39/8aUvfelg2bZtW3HsscdWQyXlKi5hp3BF11LbwhCmaF7YqwyzVslGzFqIoacDCC9NgYV+6SmvFKe1VqxYUb4WC4WeL3zhC8Xzn//8g+Wkk04qT4tJEruRuGkhu4WtW6sBLRBiGJ/2QdGIeKEyrFBiI2YtxNDTAf1qej7ykY9UXXNtmPl5Rgme3pK0FDRo5knZ7EYo1PqsWTMbZvjtVS+9Qg7hiRqXphKn02hHNGg2YlYbhp4OiDY911xzTdWnKPbt21f2u/baa6s+c62e2YMwvKlE7U8/hh5JTaiFiUbObQohh7DE5e9taliiRmnQ7W5sxKw2DD0dcfrpp8/8epr5+VSpX7316KOPFp/5zGeK733ve1WfuazpkTQo1Prw24lCmCCg1EvbkFPHNNlVUUs0yMvlmadhhClNF0NPR1x11VXFcccdV5xxxhnFmWeeWQaY9D49N9xwQ9nvnnvuqfrMZeiRNCni5ojUEA0KNTxM00bM6sfQ0yFf+9rXigsuuKA4//zzi+uuu67qO+uBBx4oLrvssuKJJ56o+szFJe5tTmsFQ4+kcYn2N5RBXc1lI2a1YejJlKFH0jjF6SgaSy9VhCgbMWshhp5MGXokjRPteaLB9FKfAWYjZrVl6MmUoUfSuBF2CCtcwr7YRs3c6LB+6byNmLUQQ0+mDD2SuiAaIC8UWGiyuHHj7Okwxq8XLoUf5t2eNR0MPZky9EjqgvQS9nojZLqpzaEmqB5yuNEh92fl1JaNl9WWoSdThh5JXRF3guZUFTgm8X8acqjJIeQQkgZ5fx/lxdCTKUOPpK6gpiYaNddrdQhE3ntHg2LoyZShR1KXxCXsFGp1tmyxRkeDZ+jJlKFHUtcQdBZxj1Vp0Qw9mTL0SJJyY+jJlKFHkpQbQ0+mDD2SpNwYejJl6JEk5cbQkylDjyQpN4aeTBl6JEm5MfRkytAjScqNoSdThh5JUm4MPZky9EiScmPoyZShR5KUG0NPpgw9kqTcGHoy9bnPfa74sR/7seLnfu7nWpcTTjih+Omf/unGYZZDhXXkelq4uJ7aFdbRz/zMzzQOs8yWn/3Zn3VbalH8zv1cccQRRxTvfe97qyPhdDL0NNi/f3/x0Y9+tLjllltal3/4h38oTj755MZhlkPlT//0T4t169Y1DrMcKr/xG79R/NVf/VXjMMuh8uxnP7vYtm1b4zDLbLn66quLpz/96Y3DLIfKBz7wgWLVqlWNw3Ip//Iv/1Ls3bu3OhJOJ0PPgOzatas49dRTqy71wo7lj//4j6su9fKGN7yhuPjii6su9UIN69e+9rWqS02+9a1vFc94xjOqLvVy5ZVXlj82NN0MPQNi6GnH0NOOoacdQ8/CDD3tGHryYOgZEENPO4aedgw97Rh6FmboacfQkwdDz4AQev75n/+56lIvrCPX08JcT+2wjgw9/RF63JYWRuhxPU0/Q48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9AzA9u3bi3POOac4/fTTi7/8y7+s+k4PblbFrck3b95cliYHDhwo/uIv/qJYs2ZNsWHDhuKOO+6ohhzyT//0T8X69euLN77xjY3T4TW8tt80eB2v7zWNcfriF79YvOtd7yrn/7zzzivuuuuuasghX/nKV4qzzz67eM1rXlO8+93vrvrOxXL99m//dnHmmWeW21bdl7/85fImj/22N6bBTSDZLpumMS4PPPBA8f73v79cR/3WU3ynWIZenzPrj/XI+mS91jGNM844o3j961/fOI3/+Z//Ofh5ve1tb2vc3rrgtttuK+d/586dVZ9D6N/v+5B+p/h+8j2ti2nw3eQ7Wsc0WD9Mg/XFeusK5r2p1PE94fvS6/sQ21u/71TbafTbZjV+hp4luuaaa4ply5aVG/lVV11VfinY0U6T1atXFyeeeGKxcuXKclnrHn300eK0004rl5sdM+vilFNOKfbt21eNMbtz4rUXXXRRccUVVxTHHXfcnB0D0+A1S5nGODHPK1asKD74wQ+WV/IxX0cffXRx4403VmMUxWc+85niyCOPLIexrbCs9W2Fgw/LvWPHjuL8888vl5dxA9vbM5/5zIPTaNrefv/3f7/vNMaJ9cQybtmypZy/d7zjHfO2KeaVfsw747AsLFMq1h3jsi5Yr6zfENOIbYVppNsKB3/uvss00u3t/vvvr8boDr53y5cvLzZt2lT1mcU88x1g+VhOljddRr47sdwsI8vK95TvWjicabDemsLTODBfhDH+piXFPPM9iW2FZeR7FJq2N16TOpxp1LdZdYOhZ4n+6I/+qPiDP/iDqqsobr755nLj50swbdjpsWx17DCPOeaYqmtW7CgD3dyjJ7BzYGcboaZpGjzWoz4NXhfq0xgn1k3dq171qvIXcvizP/uz4rWvfW3VNYtljm2FWhDWbxqUWGfsbAPbWzoNajjS7Y3Lk+vTYPtMp9ElDz/8cHHssceWB9zAvFLrEFgWlollA8ta31ZYJ6zfwDTS7Y1psK1EqOHA1DSN+gFz3AiHa9euLX94pKGHbZ7lSb8PLC/fkcCy8B1Kscx81xDTuOCCC8puNE2jaZtl/XUB80fo6SXCCPvlwPeB71FgW2nah6fbW30arJN+06hvs+oOQ88S/dIv/dK8A15UA0+bXqEnThGk2Bm96EUvKv9/7LHHytfV1xP9uDcG2k6jLp1G13B6Kj14s62wTCmWOQ7Wn/rUp+YtY6zz+HXeNI1XvOIVB7e3ftN45JFHqj7dwrxdfvnl5f/MI93pr2jQj2UD66tpW2HdgHXF+E3bWzxButf29uIXv7jqGj9OK1PDSq1KPfSwzbM8qfic+a6A707T9hbbCtPgmVwEz1CfBuuj3zTGjXljfpjvPXv2VH0PafqcGTe2ldje6tsKr4ntrde2stA06BfTUHfMP4poUfiV2vSF4Y6604bl5ItcR9Vv+qsHl156afGc5zyn/J82Abyu/quHfh/60IfK/5umwY6lPo26dBpdwrriicVpDQbbSvzKDixzVIPzi7v+yzxqbqK9SdM03vKWtxzc3pqmEZ9bl27ix2f7zne+s3yy81lnnXXwIYfMI/Naf+ghyxQ1Eqyv+rbCOmHdoNf2xjQuvPDC8v+m7Y1pPPe5z626xo+gw/co/k9DD9s8y5iqbyt8d+qBhWWOUzdM4+d//ufL/0N9GqyPpm02pjFuLB+nldnnMt+cerv22murobN3Nm8KPbGtxPbWtA+P7a1pGun21msa6Tar7jD0LNHTnva0xi/Mq1/96qpresTBs47TODSSTNG+gl+RuOmmm8rXfe973yu7A/1o1IqmabBDq0+jLp1GV/DLnLYTNGpMsa2kO2SwzLGtsBz1mgbWGctI42U0TeM973lP32nE5xbT6
AI+WxrYvvCFLyz+8A//8GDIYR6Z1+9+97tld2CZ4nNmWevbCuuEdYOYRn17S6fRtL0xjdjexo3TWgSdUA89LAfLmIpthe8KWJZ66GGZWXYwjZe85CXl/6FpGk3bbExj3KgRfOKJJ8r/d+/eXe57CT6BbaUp9NS3laZ9eLq91afRtL3Vp5Fub+oOQ88SnXTSSY1fmLe+9a1V1/SIg2cdV89whUyKX5G/8Au/UP5/3333la+7/fbby+5Av/gV2TQNdtj1adSl0+gKfgU3HRTYVtI2GGCZY1v56Ec/WjZ+TrHOWMZ777237G6aBjUf/aYRn1tMo2t+8Rd/8eDBgXlkXr/61a+W3YFlYtnAsta3FdYJ6wYxjfr2xjT6bW9MI7a3cSIAHnXUUcVll11WNoqnEKI5Vcr/YDlYxlRsK3xXwLLUQw/LzLKDadRrtpqm0bTNxjS6Jrb1O++8s+xmW2kKPfVtpWkfnm5v9Wk0bW/1aaTbrLrD0LNEfBnqVZhUe9Z3NtMgdih1LGva+BEc+NPGs7wuvcyTX2X0+/znP192t50Grwv1aXQB89zrs2db4VRU6nnPe97B8WP9pqd2OL1BY9Onnnqq7GYab3/728v/wa9cqvfr04iDFtg+02l0DQdzLvMF88i8nnvuuWU3IvDGQYVlZb2lWK9xYIppxKkhsE7TbaXXNNLtbVy4RJ2anbRw9Rbte/gfLAfLk34f+H7RL7As9dNQfMdiW4lppAfrpmn022a7JvYJcXqO+YzTUIHvQ31badqHp9tbfRp8B/tNo77NqjsMPUtEGwF2JPEl+5u/+ZvyC9CFK4oGLQ6odeyk6R+hJsaLK4rAgS3dAdPINz3ADGIa48a89WvLVd9WWNb6tkIbi/SAUg9Rbba3+jR4+n/aPU7MR8w7+Nzr88v/zHOgO217whVYxx9//MFthemxTqK9DnjN7/3e71Vds939trevf/3r5TTS7a1LCDv1S9ZZHr4DgW0lbTgfVx3FgZdlpZtlD22mwSlI1g+apjFO8fmBBuxvetObyh8BcWozrlDje4Je20q/7W0Q01B3GHoGgB3Fs571rHKjT3ek04KdLctVL6kPf/jDZT9+/cR9ZFJUAf/O7/xOccIJJ5TVwumBO/CaftOInU2/aYxLhLR6iV+DgStBOHVBuwOG17eV66+/vtxZcjqDHS3bVh3T6Le9xTRYPzRmbZrGuMRnzLwxj/zfNH/0Y95jPJYpFQdf1iPrs+lqIqbBATDWRX1bSbfZuH9SVzWFnvg+8F3gO9G0jCwT3yWWkWVlmVP1afAd5buaYhqsn17TGCfmh1qYmDe2h/oVU7Gt8H3he7PQ9sa49e/UYqfRtM2qGww9A8LlktzHIb3x17Tg1AAH9Xqp49JX+tevmkmxk6XdQL2RaeC1/abB63h9v2mMQ7pe6qWO9dlvW+GUFfffaboENwxiGuPCPPVbP4HxWIZoqFrHsrMO0tOBdQttb2222S6gZqVpOdt8H+I7lV6anopp1ANTaqFpjEu6LfXbDmJb6fd9YFi/71TbafTbZjV+hh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPdKE4r4kPIupfn8SHngaz2gaFu6LUr/x4jjxLKQ3v/nN5Twxb5LUxNAjTai4U3Y9fHDQp/8wjeI92vr+979fzst5551XztdiQ0/T3Y4lTSdDjzShOFCvXLmyfBhleqDPLfQwLzyKgIdALjbwwNAj5cPQI00oDtRxwOZZXaEeSJoO6mm/GP/KK688+EywX/3VXy2H/fVf/3X5HCGeA/b+97+/7Id4Dc8kInjx/+tf//p5j3PgPXg2FsP5u2PHjmrIofnfsGFDOZz/m3C6jodgMg6Fmq144GU8yysd1oQnrsd8UHgmF5iH9PWUwGsYj368duvWrdWQQ8vPQyh5bhXPY6ov/5YtW+a851lnnVUNkTQuhh5pQkVoIBRQ28NBGocbeggQ/M8zmF75yleW3Zwyos3Q5z73ueKII44odu/ePec1v/zLv1w+WJFuAsdv/dZvlcPB9AlE0eYoXhPdETjq81ZHWGA6LGd0R2hBvHc/vE8auNKnhDetH8ZlncZ4/KU7phHLwvvyf8xDPIiS8Rmevk/6v6TxMPRIE4oDddSO8H8EgTggh7ahh4cphne+853F0UcfPechljxh+qKLLir/j9d8/OMfL7tx7bXXlv2uuuqqspv/06CB9H35e+KJJ5b/91OfDuGHfswDInD00zQvodf6oaYmxThr164t/4/l/9jHPlZ2Y9u2bWW//fv3Hww9MY+SusHQI00oDsIcnEEQIEBQ2xMH5NDroB796uODWp56kKCb/ojXcIBPnXzyyQdrhxjeVOJ90/nvpWneQM1POv8LhR7WCzU1nG5at25d+ZrQtH4Ytz7flJjfpuXnKdz0i9qwOG1HGOX0nDU90vgZeqQJVQ8N0QaFGg0OtoHaifpBfVCh59Zbby27EQf9yy+/vOzm/34H+vr8N4nwFKfEQv1U00KhJzBuhJGYt6bQQ6iK04VNmpafmjL6ffOb36z6zGJcPgMCV305JI2WoUeaUE2hgdoeAgAH38B4aSiIIDGI0PO3f/u3ZTe4V84znvGM4qtf/WrZTXAgYPTSJvSAZUpDSbx3BAi624aeELVioI1QfT7p12+aTctPDdcpp5xSdc3H+LxO0vgYeqQJ1RQaOJBzcKWECDmcYiG0cMBPg0QcwFNtQw8Nd+lPoTuGg5oUamQ4nUR/CleZxYG/beiJmquNGzeW06DGpB6CFgoo1IDFPDA/zFeEJtZZnPZiODhdSGhjftPX1dcZ89+0/AxPX8twpidpvAw90oTiwNp0CoYDcxoKQACJ/hzseR2vB3/jYB3iYJ1K+6WvueKKK8r/achcR3jgveK908bEvea/CfMc04j5Dk3zX8f7xutpoBxXggXWT4yTSl/H//E63pOQc9dddxUXX3xxce655xaf/exny2FgvPS1LGf9PSWNnqFHkhYpQo+kyeK3VpIWydAjTSa/tZK0SIQeiqTJYuiRJElZMPRIkqQsGHokSVIWDD2SJCkLhh5JkpQFQ48kScqCoUeSJGXB0CNJkrJg6JEkSVkw9EiSpCwYeiRJUhYMPZIkKQuGHkmSlAVDjyRJyoKhR5IkZcHQI0mSsmDokSRJWTD0SJKkLBh6JElSFgw9kiQpC4YeSZKUBUOPJEnKgqFHkiRlwdAjSZKyYOiRJElZMPRIkqQsGHokSVIWDD2SJCkDRfH/A6MbJSagde/sAAAAAElFTkSuQmCC"> La loss represente des "sauts" à cause de la reprise de l'entrainement à deux reprises. Cela induit une modification du learning rate et explique la forme de la courbe. ## Résultats Les questions générées sont évaluées sur les métrique BLEU et ROUGE. 
These are approximate metrics for text generation.
fqLoItB421/0tqM/qoFmrXOBWN5RlG7bblh3J2XdQeAizSALCbEyE2Yqlfp/qv6eLakbOWRPqvzd/txCjVtJTn37HBciog35Z5RePDnWtqmqNUj9HnXrVxSZqkVKo/PvgOe4KZIWk3lOXuYtYRs/1T1Uu3+SCpMKzWnc3FwcCXdEMAGlCmEUSEGZjIszuGbVWDflotQuMaq30+u+N2uf+e+oDqFYq9ZtUy5RapHRxkk4jj1uwzrUGqp8cAGDfEWaRBITZmAizZdMpzX+PX+KuhtYVv5khVH0adfo/aCVVv0adig9aSXXKPbOVVPdx1Kl2XcihU9cAgMpFmEUSEGZjIszuoquD1TVAV4ArlOp0fji4/qzlYLuwOLCqX6j6S2p6AKiqtn+13TZ+vtFWfbbKPtnyic3dMNdmrZ1lU1dPtYkrJ9q4FePs3WXv2shPRtrQJUPt7UVv25sL37Q35r9hfeb2sdfmvGbdZ3e3Lh91sX99+C97+YOXXdFr1Wlczzk9rffc3vb6vNdtwIIB9tbCt2zw4sE2fMlwG7V0lI1eNtp9jj5v5baV/pLtO8IskiDRYfbBBx+0WrVquYcxNG7c2K8t35IlS9x9bvfkaWNpDrNzVm1xF5boIia1rIaDa1B0hbTCra6E1oVaQJIs2bzEJq2a5IJCh5kdrOV7Le2hsQ/ZsCXDXDjB3nEhb+dGW/3Z6t1C3vtr3rfpn053r2evn+3qF2xa4KZZvnW5m37tjrXuvVu/2OrmE7bty222fsd6F9oWb15sczbMsZmfzrTJqybbmOVjXMBT2NP32WNOD+v8YWfrOLPjXhXtDzcOudEav9XYLh1wqZ33xnn2q//8yk557RQ7utvRBVle+fAVf0vtO8IskiCxYVYPV6hXr55NmzbNPRpXD2Jo2rSpP7ZsCr7BAxziSkuY3fr5V+4iKN2ySldO6zGKmcFVXQbUD1YXaE1YuJ4LoxJux1c7XGBQcFi3Y50LEiu2rXDBYuGmhS5oKHDMWjfLBRAFkTnr57ggoVBR6LZ8scU+3vCxjV0+1rVutZ3W1u4bfZ9dP/h6O+M/Z0SGgcxy8msn2+3Db7cXpr/gWtjWbPcey11V6Hv8dPunLhh+uO5Dm7J6ir2z9B0XCHt93MuFQYW6p6Y8ZX8e92e7d/S99vsRvy8V8s7sc2ZBh7xclQY9GthpvU+zs/qeZRf1u8iuGHiFXfv2tfa7ob+z24bfZn8Y8Qe7a9Rd9sd3/2j3j7nfWoxtYY+Me8T+MuEv1mpiK/v75L/bM1Oesefef87aTW/nhegZHdzrNu+3sdZTW7tpnpz0pD0+4XH3HTz83sP24JgH3b589zt3W7NRzdzn6PPUYltRCLNIgsSG2erVq1uHDh38IbOePXu6cKpwW5auXbvar3/9a8KsT7c3Uh9V3ZNTzwzPDK4qun+o7gKg+3WqDysqngLi8E+Gu9OHClydZnVyoeuvk/7qWgl1sNJB6reDfmuXDLjEHTRP6nlS5IE1H+WM3mfYxf0vthsG32B3jLzDHWh18NVBWS1jAxcOdC1mOu26N0WhSttDYUoHc81fB/E7R91ptw671ZoMbRJZFCailjezKIhpegWM9jPal7TKKUxou5/Y88RS71EQVoD456x/2viV411oLhTbvtjmfoyotVKnoPvO6+uWU9+JwpS22WUDLrPTe59ear2SUPR9aJ875/Vz3H531VtX2fVvX+/WS/ufwrb2Ee0rWufnpz1vL8580W0DBfNuRd3cfhmctu8/v/+u0/bFf4cK8tpf9b2qtVc/4orWF7nWY7Ucq9VYPwDTgDCLJEhkmF27dq0LopMnT/ZrPKrr1auXP1Ta6tWr7Sc/+YlNnz49tWFWt5zS7a900/R6Ea2uKroZv55T/8a05e6m6VWVDvhqiVQrpFog1fo449MZ7uA14pMRexQs1XIXddCl5L+o1Uwth2q1UiuYTsEOWjTIBRT1cYxLLdQKPwq8ClBRn3X+G+fb/aPvd8EpCMSVUdSK13hQYzu779mRy1VeUThs1KeRW69rBl3jQmHzd5pbi/dauNZAhcF/fvBPe3X2q9Zvfj93Gl/9M6eviQ55O7/O//1/UTEIs0iCRIZZhVEF0Q0bNvg1HtW1adPGHyrtlltusbvuusu9Li/Mvvfee3bMMceUlLp161q1atX8scmzfOMO+2PvGaWCq27wrttc6U4CurXVZ18UXn9Xnf5W4Jy3cZ4LmzqIqi+jDqo6uOogq1NxT0x8wh18dRDWwfi6t6+zKwde6Q7QOsirFeqEnidEHswruigcnNrrVNd6pJCgFiQFHS3Lb978jVsuhQa1Jt085GYXjnUK8k/j/uRakhROdPGHTvHqQhFdOKKQrT6GCt7qClBILUMKMVoufT86Da9l1kUrCnVaH51WVZi8Zdgt7vS0Ws/0o0AtaA+MecC1ounUqVrSFJ70nmenPutClE7tK7AphOr7ViujWnoVqNR6pu2iz1XXh0WbF7nuD/qRoh8rubRp5yZ30Y1ab9UCrNActS/ko2hZ1NqslnK1Hj86/lG3HfWdKMirf7C6XqhrAVAWwiySIJFhdubMmVnDbPv27f2h3fXt29cOO+ww277du5CgvDCreY8dO7akvPrqq65rQ9Js+OwL90z7cIB9+I1Z9p+py/b5GfJ7Qqdgl21d5lpxdLXtkMVD7D9z/+NaPRVadLBVuFEouPzNy11rZ67DgVpTdar43NfPdS2sV791tTv4xw6W6wozWCJ/Plr3kWvNz2w53ZMSnArvWtTVnQrXflfqVHhxkM88Fa4fOmodzbxYCtgXhFkkQSLD7KZNm1wQjepm0L9/f39od7pYTAE2XDS9/h09erQ/VXZJ62agOwroAq3/+4vXleDnjwx2j1NdtXmHP0XuqMVHB1+dllfrY1SQ3JOiFk61bKpFUy2ZatlTi55a8tSHUhem6OCvg74O9jrI6wKWD9Z+4JZlyeYlrrVuw+cb7LMvq263CQCoaIRZJEEiw6wonHbu3NkfMhs0aJAdfPDBtnTpUr9md5lBVqUqhtmvvv7WPeNfF20FLbG3d3/fPTo2F9QyqdOWal3Vlc1RYVRFFyupBVStn2r51MU76r+oU7S6j6LCry68UAhVAFX4JHgCQH4RZpEEiQ2z9913nwu0EydOtKKiImvYsKHdeeed/lhz9T/96U9txYoVfs3ugjAbV6GH2W+/NfdI19NCT9+66qUJNnPZJn+KfafT6TrFHlzhfUqv6FvwqIuAugyoC8GElRNy3ncRAJAbhFkkQWLDrLRs2dJq1qzpQmbmQxN0kZgCru5gEEVhVvemjauQw+w7H39q57cbWxJiz//He65uX3xb/N/8jfPd1du6l+Gv+/w6Mrjq3opNRzR1tzPShT9cUAIAVQdhFkmQ6DBbmQoxzKrVVa2vQYhVq2z/GStcK+2e+ubbb9zFK7r/olpVFVIzg6uu0L956M2uS4Eu4NIFXQCAqoswiyQgzMZUSGFW9369rdv7JSFW/WO7T9yzR25+8fUXNm3NNHfltC6oiroBvwKtgq0Crp4KpMALAEgPwi
ySgDAbUyGF2Uvaj3Mh9hePDrU2w+fa9i++9seUbenWpfaP6f+wm4bcVCq4quh+qLrnp+4KoHu6AgDSjTCLJCDMxlQoYXbr51+5IHvUX4a6e8jGoacc6RngmeFVN/HXvVT18AE9+hIAgDDCLJKAMBtToYTZEbPXuDDb+J8T/Zrs9FQmPU0pHGB1Oyw9PUtPSAIAoCyEWSQBYTamQgmzTwya7cJs2xHZuwHo8a+6SCt4gtYvu//ShVjuNAAA2BOEWSQBYTamQgmzF73wnguzExau92t20SNV1SdWdx1QiK3fvb49Mu4RW7Et+l67AACUhTCLJCDMxlQIYfazL76yn7UcbLUfGWw7v9p1ZwE9i/2fH/yz5CEGx3Q7xu4ffb97jCsAAHuLMIskIMzGVAhhNrO/7M6vd1rXoq52Ru8zSvrENhvVzD3sAACAfUWYRRIQZmMqhDD7pN9f9tnhs93ts8JP5dLjZYvWF/lTAgCw7wizSALCbEyFEGYv9u8ve/1bTUtC7PWDr7epq6f6UwAAUHEIs0gCwmxM+Q6z4f6yx/do4O5UMHb5WH8sAAAVjzCLJCDMxpTvMDtyjtdf9qJ/dnctspe/ebk/BgCA3CDMIgkIszHlO8z+7e05Lsze3O8pF2b/Oumv/hgAAHKDMIskIMzGlO8we4nfX/aaN29xYXbI4iH+GAAAcoMwiyQgzMaUzzC7q7/s23ZSz5NcmNWjagEAyCXCLJKAMBtTPsPsqDmf+v1le7kge0G/C/wxAADkDmEWSUCYjSmfYfbvg73+srf0a+PC7J/H/dkfAwBA7hBmkQSE2ZjyGWYv7eD1l73xrTtcmO0/v78/BgCA3CHMIgkIszHlK8yG7y/bsNdpLswu3brUHwsAQO4QZpEEhNmY8hVm3/nY6y978ctvuCB7Wu/T/DEAAOQWYRZJQJiNKV9h9im/v+xt/du7MPvAmAf8MQAA5BZhFklAmI0pX2H2sg7jvYu/3r7XhdleH/fyxwAAkFuEWSQBYTamfITZoL+sSqM+jVyYnbdxnj8WAIDcIswiCQizMeUjzI6eu9brL/vSQBdk9cAEAAAqC2EWSUCYjSkfYfbpIR+7MNu0fycXZu8adZc/BgCA3CPMIgkIszHlI8xe9qLXX7bpkIddmP33R//2xwAAkHuEWSQBYTamyg6z4f6yF/e/xIXZWWtn+WMBAMg9wiySgDAbU2WH2THzvP6yl3Qc7oJsgx4N7Otvv/bHAgCQe4RZJAFhNqbKDrPP+P1l7xzQ1YXZ24bf5o8BACTKlzvMdm4127HRbNunZltWmm1aarZhkdm6eWZrisxWzTJbMc1s2WSzT8abLRm392XTMv+D9x1hFklAmI2pssPsb/z+sncNe8yF2Zc+eMkfAyCnPltn9ukcL1DoNaoOhUoFSoVJBUmFSAVIfdcLRhX/j36Q2Yevm8141WzKv8wmvGA29lmzUU+YDXvEbNC9ZgPuMOv7O7Ne15q9+huzLuebdfqV2Ysnmf3jGLM2dc2ePtzs8YPyV95r46/wviPMIgkIszFVZpjd+dU3Jf1lrxx4lQuzU1ZP8ccC2CNffGa2cYnZ8qnFf8hvm03rVhxQnjMb0sLs9VvMul9q1vEUs2drRwcDBZQ3bvPCzaoP/JmmlIKgWhQVAtWCqABYNMALf5OKf3Ar+I34ixf6+t3uBb5uF3thr/3xXtD7+0+it3NVL3//Hy/ktv6Ztx3aHlW8b9X3tkvHk81ebuhtp1fO9gJy14v2vnzY1//C9h1hFklAmI2pMsPsWL+/7MUvjrL63evbL7v/0nZ+vdMfi0T7fIvZtjXF4eoTs7Ufm62cabZ0ktmiMWZzhxQHg/5mM18ze79LcTjoaPbe82bv/t1s+KNmgx8we/MuL1j95wazHld6B67O55r969dm/zzD7KVTzV480eyFY83aHW32/C/MnqtTfACtZfbUYWZ/K96How60e1qeqlE8/3rFn3m6Fwb73GT21h/NRrUyG9/ObHpxYJzzlnfKc/VHZptXeKFyXyhIqTVtxXSzhe94LWhTX/EClFrNBtzphafwQf3fF0Yvf1RR0FCoCL//X41KT6dtqHHvPGk2b1jxcm3yFzABtm8o3u/mei2R2temdDIb/bS3/QbeXRzum5j1vMrbbgpXClsKX5nbIMlF37PCpH6kqDVVAfLfF3itrNp/1Oqq1lcFcm0XtcpqH5vQ3vtBo+CufU+tuArz2pZq3VXA1/6pVl/tq2oFrgIIs0gCwmxMlRlmnxnq9Ze9a8BrrlX2hsHFwQUVa+e24oC1vDhofWi2+D2z2QO9FrtxxUFs9FNmIx/3AuTQh70QOegeL0j2/0NxmLy1+IB3sxcoX7vGC5UKdC78FIdKBcr2x3lBUiGyogIkZe/KX/+fF8oU0BTW9L0qmCiQfDLBCyBlUShZPNb7UaHvOOoz9ANCAUg/QhRq8kE/hqZ3934ADW3p/ehRQNP++NwR0cu9p+Xpmt5+rRD4ylnF+/1lxX8HN3rrPvjB4oD/1+K/obZe6Pugl/eDRttOYU8hWkFPP+iQGIRZJAFhNqbKDLOXd/T6y9474m8uzLadVnxwwC5ffW722driELLYC6MKJGoh++gNL5BO7OC1Nuk0cr/fFwfOxl7rZYcG2U8lV1ZRi6aWoe3/ecvz8mnFy3aOWbdLvOXsUxwMtMxv3e0t/8jHvHVRyJ78srd+s/7jhe/5w70grvXXKfSVM7xWUPX3XL/AO7WuwL51tdf38/PNXuuott++0sUsCia6eEUtpVoOtVR90NtrLVWgefdvxevwkNdiqsCjYBW1TeIWtSxXdmtwNksnev0SX708+rS5WjWD1t09LcEPIrWuKzg+89OK+UGkeekUtn6EKXjqR5srxfuX+oa+/2/v9LRCsVob1aVi/ULvTEIVaWXEniPMIgkIszFVVphVf9naj3j9Za9/+wYXZscuH+uPTTC1hG5dVRx+5nuhS4FD/RcVzKZ29sKaCz/FAe7NZl6oU1BQ3zH1Z1T4e+Z/ow/Se1MUDtoc6c1bLXa9r/M+d9ifvIO7gooCkoLx5H96y6iwNLOnt8wf9fMC5dzBXqhc+K63TsumeAFb66kgqdCtdUfVpkCvfaX39RW7n0aV4AeRgn2HE7xwrx9E6h4wsLnXgqxgqv1SgVRhFNhLhFkkAWE2psoKs+/N9/vLtn/X9ZVVn9ntX233xyaEbjEzravZ2/d5B9qoA/K+FrVW6WCuFiy1uva4wmupU1cAdQ3QAV3hQuFToVktaWqxVKAGKssXxX+7ahHfvt5rIVdrsfpLq+VcfabVgqx+02pZ1z6qH0R6rfqSH0TrctfCDJSDMIskIMzGVFlhtvWwuS7M/nHA665V9qq3rvLHFCgdkNVSqT56CpVRwVNFLaG6EEmnThVwdTpV/U3V/1Sniof/2WzMM8UB9EWvBVQtn/NHeAd4Hdh1ylyBAABQaQizSALCbEyVFWav6DjBhdkHRz7nwuxTU57yxxQAtSqpT6IujlKfRV0MkhlaVafuAeqLpyvO1bcSAJBIhFkkAWE2psoIs+H+srcMv
dWF2eFLhvtjK5luLaOWUbWWqgVVraqZwbXVwd5FTOprqguTPp1t9u23/gwAAElHmEUSEGZjqoww+978dX5/2bHWoEcDF2Y379zsj80h9etTX73x//BuOaX7k2YGVxXd9FtX3ev+mrq4JEn31wQA7DHCLJKAMBtTZYTZZ/3+svcMGOiC7MX9L/bHVKCvvzBb/r53s3TdM1X3x2xVrXRwffJQs05nehdxzejht7p+488EAJAGhFkkAWE2psoIs1e95PWXfeSd9i7MPjbhMX9MBVg02ru/5BM/LB1cFWb15CPdD1T3CNVthhR6AQCpRphFEhBmY8p1mA33l71j5F0uzA5cONAfu5d0SyDdHUDP/g6HVz0NSU9C0lOQdHN0bogOAIhAmEUSEGZjynWYHbfA6y974Qvv2Uk9T3JhdsW2Ff7YPaSb9ut+q+GnBv21uvdkKd3UHwCAGAizSALCbEy5DrPPDff6y97/5lAXZH/d59f+mJi+2mk28zXvUZi7tcIe4z3JSncnAABgDxBmkQSE2ZhyHWavfnmiC7OPje7kwmyLsS38MeXQwwSGP2rWutauAKs+sK819u44wK2yAAB7iTCLJEh8mN2wYYOtXr3aHyrfkiVLbNasWf5QfLkMs0F/WXcng3fvd2G2z9w+/tgIuqvA3CFmPa707vUahNjWPzMb+ZjZpmX+hAAA7D3CLJIg0WH2wQcftP3228+Vxo0b+7XRWrVqZSeffHLJ9HXq1LG7777bH1u+XIbZ8X5/2Qv+8Z7rXqAwu2DTAn9sBF28FQRYFT1GdlYZ4RcAgL1AmEUSJDbMNmvWzOrVq2fTpk2zBQsWWKNGjaxp06b+2NIUZtu3b29FRUW2du1a69ixowu1qo8jl2H2+RHzXJh9aMA7Lsie1vs0f0yEndu8APvX/2f21h/NVn/kjwAAoGIRZpEEiQ2z1atXtw4dOvhDZj179nThVOE2rtq1a9vll1/uD5Utl2G28T+9/rJ/HdPNhdk/vlscUrMpGuCF2Vd/41cAAJAbhFkkQSLDrFpWFVwnT57s13hU16tXL3+ofAqn6qoQR67CbLi/7ENj/uTCbPfZ3f2xEfrd7oVZPcELAIAcIswiCRIZZqdPn+6Cqy7+ClNdmzZt/KGytW7d2vWhzZxHYOzYsXbUUUeVFLXiVqtWzR9bcSYsXO+C7Pn/eM8u6HeBC7NF64v8sRm++drsqcO8MLt1lV8JAEBuEGaRBIkMszNnzswaZtUvtjzDhg1z0w4aNMivKW3jxo02YcKEkqIWX3VtqGht/f6yDw8Y74Jsgx4N7BvdrSDK4ve8IPtyGX1qAQCoIIRZJEEiw+ymTZtcGI3qZtC/f39/KJou+NJ0ffv29WviyVU3g6C/7NNje7kw23RE9ovYbGhLL8yOfsqvAAAgdwizSILEXgCmOxl07tzZHzLXynrwwQfb0qVL/ZrS9jbISi7CbLi/7KPjnnBhttOsMvrCtjvaC7OrPvArAADIHcIskiCxYfa+++5zgXbixInudlsNGza0O++80x9rrmvAYYcdZsuXL3fDPXr0KAmyo0eP3q3EkYswO9HvL3te27F2+ZuXuzA7bU2WuzF8OtsLsm2O9CsAAMgtwiySILFhVlq2bGk1a9Z0ITPzoQkzZsxwf4Rr1qxxwxqve9FGlThyEWbbjfT6y7Z8c7ILsr/s/kv7+tuv/bEZxj7nhdm37/crAADILcIskiDRYbYy5SLMXtNpkguzbcb1d2H25iE3+2Mi/OvXXphd+I5fAQBAbhFmkQSE2ZgqOsyG+8v+bWJrF2ZfmP6CPzbD9g1ekP1b8efr9lwAAFQCwiySgDAbU0WH2SmLN7gge27bsXbt29e6MDtuxTh/bIb3/+2F2b6/8ysAAMg9wiySgDAbU0WH2b7vL3Nh9k8DZlj97vVd2f7Vdn9shp5Xe2F2Vh+/AgCA3CPMIgkIszHlos/sji+/tqELx7pW2WsGXePXZvhyh9mTh5o98UOznVv9SgAAco8wiyQgzMaUizArHWZ2cGG29dTWfk2GOW95rbLdLvYrAACoHIRZJAFhNqZchdkmQ5u4MDtq6Si/JkP/P3hhdtJLfgUAAJWDMIskIMzGlIswq3vKNujRwIXZzTs3+7Uh335r9tRhXpjdlP3JZgAA5AJhFklAmI0pF2F2xqczXJC9bMBlfk2GT8Z7QfalU/0KAAAqD2EWSUCYjSkXYfaVD19xYbbVxFZ+TYbhf/bC7DtP+hUAAFQewiySgDAbUy7C7B0j73Bh9u1Fb/s1Gdod7YXZFdP9CgAAKg9hFklAmI2posPsN99+Yyf1PMmF2bU71vq1IWs/9oJs61p+BQAAlYswiyQgzMZU0WF2zvo5Lsie98Z5fk2GcW29MDvoHr8CAIDKRZhFEhBmY6roMPvanNdcmH1k3CN+TYZXzvbC7PzhfgUAAJWLMIskIMzGVNFhdufXO23iyon24boP/ZqQ7RvMWh1s9rfiz/v6C78SAIDKRZhFEhBmY8rFBWBZTe/mtcr+50a/AgCAykeYRRIQZmOq1DDb67demP2gl18BAEDlI8wiCQizMVVamP1yh9mTh3rdDHZu9SsBAKh8hFkkAWE2pkoLsx8P8lpl/32hXwEAQH4QZpEEhNmYKi3MvtnMC7MT2vsVAADkB2EWSUCYjalSwuy333oPSVCY3bTUrwQAID8Is0gCwmxMlRJml07yguyLJ/oVAADkD2EWSUCYjalSwuyIv3hhduTjfgUAAPlDmEUSEGZjqpQw2+5oL8wum+JXAACQP4RZJAFhNqach9l1870gqz6z6jsLAECeEWaRBITZmHIeZsf/wwuzA5v7FQAA5BdhFklAmI0p52G2y3lemJ07xK8AACC/CLNIAsJsTDkNs9s3eE/8+lvx/L/+wq8EACC/CLNIAsJsTDkNszN6eK2yva/zKwAAyD/CLJKAMBtTTsOsQqzC7IxX/QoAAPKPMIskIMzGlLMwq24F6l6gbgbqbgAAQIEgzCIJCLMx5SzM6oIvtcp2PtevAACgMBBmkQSE2ZhyFmYH3u2F2fHt/AoAAAoDYRZJQJiNKSdhVg9H0EMSFGbXzfMrAQAoDIRZJAFhNqachNnlU70gq8fYAgBQYAizSALCbEw5CbMjH/fC7PBH/QoAAAoHYRZJQJiNKSdh9sUTvTC7dKJfAQBA4SDMIgkIszFVeJjdtNQLsuozq76zAAAUGMIskoAwG1OFh9mJL3phdsCdfgUAAIWFMIskIMzGVOFh9ssdZvOHm63+0K8AAKCwEGaRBITZmHLSZxYAgAJGmEUSEGZjIswCANKGMIskIMzGRJgFAKQNYRZJQJiNiTALAEgbwiySgDAbE2EWAJA2hFkkAWE2JsIsACBtCLNIAsJsTIRZAEDaEGaRBIkPs2vXrrUVK1b4Q/HMmjXLduzY4Q/FQ5gFAKQNYRZJkOgw27JlS9tvv/1cady4sV+bXVFRkZ144olu+h/84AfWqlUrf0z5CLMAgLQhzCIJEhtm7733XqtXr55NmzbNFixYYI0aNbKmTZv6Y6OdcsopLvSuXLnS
Jk+ebAceeKB16tTJH1s2wiwAIG0Is0iCxIbZGjVqWIcOHfwhs549e7oWV4XbKAqjGj9z5ky/xuzuu++2Bg0a+ENlI8wCANKGMIskSGSYVT9ZBVO1roaprm/fvv7Q7lT//e9/3x/yjB492r1n9erVfk12hFkAQNoQZpEEiQyz06dPdyF0w4YNfo1Hde3atfOHdte2bVurX7++P+QJwuyMGTP8ml007ogjjigphx12mP33f//3bnX7Wv7nf/7HDj/88MhxFG/7qESNo3hF26dWrVqR4yjsQ3GKts/Pf/7zyHEU9iE1Aj355JP+kREoTIkMs+oqkC3Mtm/f3h/anUJutjD7wQcf+DW7bNq0yaZMmVJSRo0aZS+99NJudftaFJC7d+8eOY4yxR555BE7++yzI8dRvKJ+3/37948cR5lid911l11++eWR4yhe+a//+i8bOXJk5DjKFPvd735nN998c+S4NJQ+ffrYkiVL/CMjUJgSGWYVNBVCo7oZ6MAeZcCAAVm7Gaxbt86vqVxqDcnWxxfmLs6Lc5eKNPvhD39oixYt8oeQ6Zlnnin3wtC0U5jNbBjALrprzsMPP+wPAShEib0ATHcyyLwA7OCDD7alS5f6Nbtbvny5C66ZF4Cdc845/lDlI8yWjTBbPsJs2Qiz5SPMlo0wCxS+xIbZ++67zwVa9Z8Nbs115513+mPNxo8f7y7YUogNnHrqqS4crVq1quTWXF26dPHHVj7CbNkIs+UjzJaNMFs+wmzZCLNA4UtsmJVu3bq5W2sdffTRpTqoqwX2rLPOsjVr1vg15v6HrXCkAKBxr732mj8mP3r37k2YLYO2jwqy0/YhzGbHPlQ+bR/CbHbsQ0DhS3SYBQAAQLoRZgEAAJBYhFkAAAAkFmEWAAAAiUWYzRNdHXvppZda8+bNbdKkSX5t8umCu+HDh9tzzz1nrVq18mtLe/DBB+3CCy+02267zaZOnerX7qLHD99000120UUXRc5n8+bN7o4WuovFHXfcYUVFRf6YXXSBny74u/LKK8tclsrWr18/t95XXXWVde3a1davX++P2aW8ZY+z/q+++qo1adLErrvuush56D333HOPm4f+jZpHPmj7aJlUtJ9of8qke023aNGizGXXOmvd92X9y5tHvmnf0XJFLVt5y14R6x9nHvmgZY0qYZW1/uXNA8C+I8zmgUKKguygQYPc/9wOOOAAGzx4sD822bQ+derUcffw1X19o2j9VYL11y3Shg4d6o81V6/36mluetiFbsGmA0FAQUZ3sdA89OALzUPT6JZrAdVpHrq9l+5BrPGqyzctgw58nTt3doH9kksuseOOO862b9/uT1H+su/N+teoUWO3eejqdb0ncx4rV670p8gfbZ8HHnjAffd6cp/uPPL000/7Y71lP/bYY8tc9opY//LmUQhuvPFGt71Uwipj/VesWOHeo7rwPFSfb9oeWp7MEgiWvbz11zpr3aPWX9OWt/7heWg7anuG5wGgYhBmK1kQ1MKtkZdddpndfvvt/lDVoP+5az0zaf0PPfRQf8ij9Vd4CSjot27d2h/y7hmsea1evdoNK+REzSN8kFDYy5yHDirhwJcPeixy2Nq1a926qRU1UN6yx1l/HVTD8+jYsaObR3Cw1sE1cx76EVKIB1rd51PLHoha9iBUBDLXXz+M9nT9NY/w47Ez55FvujXhKaecEhlm46y/1jcsvP47duyw/fffv9T6q07jRC3jUfNQfb4FYTabbMu+J+uvactaf21rbfPMeei7AVCxCLOVTKeGM586pv8pHn/88f5Q1ZAtzCq0RrUiBeuvFiO9T+8PU93AgQPd6+D0epjmcfLJJ7vXW7ZsyTqPN9980x8qHLVr1y55eEecZd+b9Vdo/t73vudO4Uu2eZx44on+UOF4/PHH3Q+cQHnLXtY2jLv+wTwyheeRT2r900Nh9MRDrUd4XeKuv9Y3LLz+em+29Q/mq8+Mmkfmds2HYNm0rFEtxdmWfU/WX9OWtf7a1tnmoe8IQMUp/ZeGnFI/yVtvvdUf8qiVpHr16v5Q1ZDtYHDttdeWaoUOr7/6nOl9mQcgtXiodVF0ajBqHj/5yU/c67Lm8fLLL/tDhUEtNVrW4OEZcZZ9b9e/bt26JY+AjpqHDsQKSIVAy6KiftX6oTNy5Eh/TPSyq+9xsOzZ1l91cdc/mEem8Dzy6frrr3f9iUXhKRwg466/1jcsvP5xwly2QBhelnzRMvzv//6vHXPMMW6ZNTxkyBB/bPZl35P117Rlrb+2dbZ56DsCUHFK/6Uhpy644AJ32jRM/5P97ne/6w9VDdkOBlr/zNOQ4fXXxXB6X3AqL6BWx6Df5Pnnnx85D7U8ysSJE8udRyEItlH4gBhn2fd2/XVKuqx5aDmCeeSblkVF+4tCyb///W9/TPSyq891eeuvurjrH8wjU3ge+aJW/F/84hf+UOkwG3f9w/udhNc/TpjLFgjDy5IvEyZM8F9566KWfXXfCWRb9j1Zf01b1vprW2ebh74jABWn9F8acuoPf/iDa50MU+ucTjVXJdkOBlp/XaEfFl5/nTbV+2bNmuWGA4cccoj16NHDvdadAKLmERzgly9fnnUeasEsBNu2bSvVz1PiLPverr9absuah5YlHJIKxTPPPOMuEvz888/dcNSyqxWsvPVXXdz1D+aRKTyPfPnpT3/q/r6CovCkEoSsuOsfFcSC9c/296u64HOyBcIgzBUSLVd4fbIt+56sv6Yta/21rbPNQ98RgIpT+i8NOaX/2dWqVcs+++wzv8asWbNmBXkA2BfZDgZa/yOPPNIf8qjbRbD+X3zxhbtoQqeNA0uWLNntIJJtHuF+lZo+ah6ZF2DlQ3AaeNiwYX7N7spb9rjrr7slBILvIzyPzAtRdOo5PI9CEaz/jBkz3HCcZc9cf3Xj2NP11/RB9w/JnEe+6G8lWwnEWX+tb1h4/YP9JWr9g79DfV7UPMLLUSjatGnj/r8SyLbse7L+mras9de2zjYPABWLv6pKFlzhqvuwioKNDqqF1pdzXwUHg0y6Iv+www4rOdBGrb8OtNdcc40/5A2rv2dg5syZbt7BPGbPnu3moTslBHR/1fCBRvMIB5V80TJr2XVwzaa8Zd+b9dc9e8uaR/B9heeRL+Fl0AWBWhf9AAxuXxZn2Sti/cubR6FQeMoMkBWx/urfnzkP1QXUj1nv0XslmEe4f3M+aDmCZRL9GPrtb3/r+l8HgmUvb/21zoHM9de05a2/tnl4Htqe+m4AVCzCbB7of6D6n179+vXtoIMOclcWVxUKXlq3zBI+uATrrz5s1apVi1x//U9foV/3E1WQzTxAvvTSS24eOojrFLQ+N0zdFS6++GIXgvR+hb1CuOhCyxveLkEJL3+cZY+7/jVr1nRdOKLmEXxXmofudZw5j3zRMukHT7Ct1Aod3MkiEF7/qGUPfiTty/rHmUch0PKrhFXW+us9em+2eeRDECr196P/x+j1mWeeWerhG3HXX+u+t+sfnoe2Y9Q8AOw7wmyeqMVp8uTJrtWgKtGBJFsJi7P+CxYscPfjDXf
JCNPtpjTfzKu2w3TgUN/BzIth8iVzm4RLpvKWvSLWX+8tbx75EN4u69at82t3V976a5217vuy/nHmkW/BdspUWetf3jwqm/pWf/zxxyXbpazlqoz1jzMPAPuGMAsAAIDEIswCAAAgsQizAAAASCzCLAAAABKLMAsAAIDEIswCAAAgsQizAAAASCzCLFCFBPfWXLRokV/jCepzSfPPvHl/PrVv395uvPFGt0y5XncAQP4QZoEqRE8gOuOMM+zqq6/2azwKc3raUS5VxmfEpRvna1n02GAt156GWQXgzKc5AQAKE2EWqEIUwBTEFOSGDRvm16YvzGpZqlevbs2aNdvjICuEWQBIDsIsUIUEYfaee+6xk046ya8tHTSjwlq4Lpi+X79+Jc+3P+2009y4J554wurWrWs1atSwp59+2tVJ8J6+ffvaL3/5S/f6yiuvLPWYz1deecXq16/vxp966qnWqVMnf8yu5de/RxxxhHudTYsWLdzz7n/84x/v9jl6r+YdlGzz0PJefvnlVq1aNTednpsvme9XCeg9WmbVaR1at27tj9m1/s8995x7Fv9BBx1Uav0ff/xx+9WvflUy37LWDwAQD2EWqEKCMLhy5Ur7/ve/by+++KKrD4JWIAiMYeG6YHoN63VRUZGdd955blin7pcsWWIjRoxwnzFt2rTd3nPCCSfYyJEj3bDmee6557rx0rhxY7viiitsyJAhblhBVu8JhoPlv+6662zx4sXuc6IoyGo6fcakSZPsmmuu2e1zgs8ui8a/+uqrtmXLFjes9wQ0LnP7dO7c2S2rwrro38MPP7xkOFj/YLmCZdA6BzR+zJgxtnXrVjfctWtX9y8AYO8RZoEqJAiDwetatWrZzp07S4JWICqsheuC6SdPnuyG5e6777ZDDjnEduzY4deYa50MWlaD97zxxhtuWBRSVTdo0CDbsGGDe92xY0d/rCf8ufpX06xfv94NR9m4caObplevXn6NudbP4HMkCJJl0XiF2TVr1vg1u0Rtn4YNG5aqu/baa10ruATr//rrr7th0TKqTsusHxh6remCMAsA2HeEWaAKUdgKhzh1BXjsscdKglYgKqyF6zKnl8x5S9R7FNzC6tSp41pzp0yZ4sZHlWAeUZ+RSS3Beo/CcZiCtT5HtCzlzUefdcopp7h5KagHrcMSXq9AeHnDJficqPUPAnzQet28eXO3PfS93H///TZu3DhXDwDYe4RZoArJDIPqZrD//vvHCrPHH398Sd2+hNnp06e7YQnCXPfu3d3twvRap9mzifqMTMF8wq3G8qMf/ch9jsQJs4HBgwfb7373OzfPiRMnurqo7aN+wuovnE3U+msZVZd5q7TevXvb7bff7ra5Ws4BAHuPMAtUIVFhUBc23XHHHS5UBdRXUxd2Bd5//303XWYwDYuadzj0Be95/vnn3bDoXq8K0zNmzHDD+ox7773XvQ4Lwl7UZ0TRxWhBK6wohIY/p7wwu3DhQv/VLpo+6LqguyA0bdrUvQ40adIkcp5Bt4uo9dcyBheWZQZa0fTz5s3zhwAAe4MwC1QhUWGwf//+LjSFw+mcOXPsF7/4hbuoS8FNp+j1vooIs7rgSfUqGg7Gi0LnD37wA3dFv06za5ym13sl6jOiDBw40PXf1bTnn39+qc8pL8xq/GWXXebeEyzD6aef7o/1Lu7SPBVgw/PVXRqOO+44+/3vf+/qte2C8cH6n3nmmZHrr/H6HA0/9NBDdsEFF9g555zjxgEA9h5hFqhCFJSC8BQWVa+LkNQSqQuydLeC8DQKXpnTR80j23t69uzpXof7oQbUkqnT7LqtV+Y04fmVZ9asWW7Z1fo7fPhwv9YTtfyZdBeE4PN0T97grgYB1T377LOl5qMfB7oll+p1m7FAEGbV0qp6vTd8r1/Ruup9arHV9FwIBgD7jjALABUgCLMAgMrF/3kBoAIQZgEgP/g/LwBUAIVZFQBA5SLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgswiwAAAASizALAACAxCLMAgAAILEIswAAAEgos/8PAKXZKAZp7rQAAAAASUVORK5CYII="> ## Tokenizer Le tokenizer de départ est [BarthezTokenizer](https://huggingface.co/transformers/model_doc/barthez.html) auquel ont été rajouté les tokens spéciaux \<sep\> et \<hl\>. 
## Usage _This model is a proof of concept; we do not guarantee its performance._ ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from transformers import Text2TextGenerationPipeline model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation' loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name) loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) nlp = Text2TextGenerationPipeline(model=loaded_model, tokenizer=loaded_tokenizer) nlp("Les projecteurs peuvent être utilisées pour <hl>illuminer<hl> des terrains de jeu extérieurs") # >>> [{'generated_text': 'À quoi servent les projecteurs sur les terrains de jeu extérieurs?'}] ``` ```py from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from transformers import Text2TextGenerationPipeline model_name = 'lincoln/barthez-squadFR-fquad-piaf-question-generation' loaded_model = AutoModelForSeq2SeqLM.from_pretrained(model_name) loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) text = "Les Etats signataires de la convention sur la diversité biologique des Nations unies doivent parvenir, lors de la COP15, qui s’ouvre <hl>lundi<hl>, à un nouvel accord mondial pour enrayer la destruction du vivant au cours de la prochaine décennie." inputs = loaded_tokenizer(text, return_tensors='pt') out = loaded_model.generate( input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, num_beams=16, num_return_sequences=16, length_penalty=10 ) questions = [] for question in out: questions.append(loaded_tokenizer.decode(question, skip_special_tokens=True)) for q in questions: print(q) # Quand se tient la conférence des Nations Unies sur la diversité biologique? # Quand a lieu la conférence des Nations Unies sur la diversité biologique? # Quand se tient la conférence sur la diversité biologique des Nations unies? # Quand se tient la conférence de la diversité biologique des Nations unies? # Quand a lieu la conférence sur la diversité biologique des Nations unies? # Quand a lieu la conférence de la diversité biologique des Nations unies? # Quand se tient la conférence des Nations unies sur la diversité biologique? # Quand a lieu la conférence des Nations unies sur la diversité biologique? # Quand se tient la conférence sur la diversité biologique des Nations Unies? # Quand se tient la conférence des Nations Unies sur la diversité biologique? # Quand se tient la conférence de la diversité biologique des Nations Unies? # Quand la COP15 a-t-elle lieu? # Quand la COP15 a-t-elle lieu? # Quand se tient la conférence sur la diversité biologique? # Quand s'ouvre la COP15,? # Quand s'ouvre la COP15? ``` ## Citation Model based on: paper: https://arxiv.org/abs/2010.12321 \ github: https://github.com/moussaKam/BARThez ``` @article{eddine2020barthez, title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model}, author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis}, journal={arXiv preprint arXiv:2010.12321}, year={2020} } ```
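Since beam search with `num_return_sequences=16` returns many near-duplicate questions (as in the output above), a simple order-preserving deduplication of the `questions` list built in the previous snippet can be applied afterwards — a small post-processing sketch, not specific to this model:

```python
# Keep only the first occurrence of each generated question, preserving order.
unique_questions = list(dict.fromkeys(q.strip() for q in questions))
print(unique_questions)
```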
lincoln/camembert-squadFR-fquad-piaf-answer-extraction
lincoln
2021-10-11T15:01:04Z
5
0
transformers
[ "transformers", "pytorch", "camembert", "token-classification", "answer extraction", "fr", "dataset:squadFR", "dataset:fquad", "dataset:piaf", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - fr license: mit datasets: - squadFR - fquad - piaf tags: - camembert - answer extraction --- # Answer extraction This model is fine-tuned from [camembert-base](https://huggingface.co/camembert-base) for the token-classification task. The goal is to identify the spans of tokens that are most likely to be the object of a question. ## Training data The training set is the concatenation of the SquadFR, [fquad](https://huggingface.co/datasets/fquad) and [piaf](https://huggingface.co/datasets/piaf) datasets. The answers in each context were labelled with the tag "ANS". Volume (number of contexts): * train: 24 652 * test: 1 370 * valid: 1 370 ## Training Training was performed on a Tesla K80 card. * Batch size: 16 * Weight decay: 0.01 * Learning rate: 2e-5 (decreasing linearly) * Default parameters of the [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) class * Total steps: 1 000 The model seems to overfit beyond that point: ![Loss](assets/loss_m_sl_sota_2.PNG) ## Limitations The model does not achieve good performance, and its predictions must be corrected afterwards to be coherent. The classification task is not straightforward, since the model must identify groups of tokens _knowing_ that a question could be asked about them. ![Performances](assets/perfs_m_sl_sota_2.PNG) ## Usage _This model is a proof of concept; we do not guarantee its performance._ ```python from transformers import AutoTokenizer, AutoModelForTokenClassification import numpy as np model_name = "lincoln/camembert-squadFR-fquad-piaf-answer-extraction" loaded_tokenizer = AutoTokenizer.from_pretrained(model_name) loaded_model = AutoModelForTokenClassification.from_pretrained(model_name) text = "La science des données est un domaine interdisciplinaire qui utilise des méthodes, des processus,\ des algorithmes et des systèmes scientifiques pour extraire des connaissances et des idées de nombreuses données structurelles et non structurées.\ Elle est souvent associée aux données massives et à l'analyse des données."
inputs = loaded_tokenizer(text, return_tensors="pt", return_offsets_mapping=True) outputs = loaded_model(inputs.input_ids).logits probs = 1 / (1 + np.exp(-outputs.detach().numpy())) probs[:, :, 1][0] = np.convolve(probs[:, :, 1][0], np.ones(2), 'same') / 2 sentences = loaded_tokenizer.tokenize(text, add_special_tokens=False) prob_answer_tokens = probs[:, 1:-1, 1].flatten().tolist() offset_start_mapping = inputs.offset_mapping[:, 1:-1, 0].flatten().tolist() offset_end_mapping = inputs.offset_mapping[:, 1:-1, 1].flatten().tolist() threshold = 0.4 entities = [] for ix, (token, prob_ans, offset_start, offset_end) in enumerate(zip(sentences, prob_answer_tokens, offset_start_mapping, offset_end_mapping)): entities.append({ 'entity': 'ANS' if prob_ans > threshold else 'O', 'score': prob_ans, 'index': ix, 'word': token, 'start': offset_start, 'end': offset_end }) for p in entities: print(p) # {'entity': 'O', 'score': 0.3118681311607361, 'index': 0, 'word': '▁La', 'start': 0, 'end': 2} # {'entity': 'O', 'score': 0.37866950035095215, 'index': 1, 'word': '▁science', 'start': 3, 'end': 10} # {'entity': 'ANS', 'score': 0.45018652081489563, 'index': 2, 'word': '▁des', 'start': 11, 'end': 14} # {'entity': 'ANS', 'score': 0.4615934491157532, 'index': 3, 'word': '▁données', 'start': 15, 'end': 22} # {'entity': 'O', 'score': 0.35033443570137024, 'index': 4, 'word': '▁est', 'start': 23, 'end': 26} # {'entity': 'O', 'score': 0.24779987335205078, 'index': 5, 'word': '▁un', 'start': 27, 'end': 29} # {'entity': 'O', 'score': 0.27084410190582275, 'index': 6, 'word': '▁domaine', 'start': 30, 'end': 37} # {'entity': 'O', 'score': 0.3259460926055908, 'index': 7, 'word': '▁in', 'start': 38, 'end': 40} # {'entity': 'O', 'score': 0.371802419424057, 'index': 8, 'word': 'terdisciplinaire', 'start': 40, 'end': 56} # {'entity': 'O', 'score': 0.3140853941440582, 'index': 9, 'word': '▁qui', 'start': 57, 'end': 60} # {'entity': 'O', 'score': 0.2629334330558777, 'index': 10, 'word': '▁utilise', 'start': 61, 'end': 68} # {'entity': 'O', 'score': 0.2968383729457855, 'index': 11, 'word': '▁des', 'start': 69, 'end': 72} # {'entity': 'O', 'score': 0.33898216485977173, 'index': 12, 'word': '▁méthodes', 'start': 73, 'end': 81} # {'entity': 'O', 'score': 0.3776060938835144, 'index': 13, 'word': ',', 'start': 81, 'end': 82} # {'entity': 'O', 'score': 0.3710060119628906, 'index': 14, 'word': '▁des', 'start': 83, 'end': 86} # {'entity': 'O', 'score': 0.35908180475234985, 'index': 15, 'word': '▁processus', 'start': 87, 'end': 96} # {'entity': 'O', 'score': 0.3890596628189087, 'index': 16, 'word': ',', 'start': 96, 'end': 97} # {'entity': 'O', 'score': 0.38341325521469116, 'index': 17, 'word': '▁des', 'start': 101, 'end': 104} # {'entity': 'O', 'score': 0.3743852376937866, 'index': 18, 'word': '▁', 'start': 105, 'end': 106} # {'entity': 'O', 'score': 0.3943936228752136, 'index': 19, 'word': 'algorithme', 'start': 105, 'end': 115} # {'entity': 'O', 'score': 0.39456743001937866, 'index': 20, 'word': 's', 'start': 115, 'end': 116} # {'entity': 'O', 'score': 0.3846966624259949, 'index': 21, 'word': '▁et', 'start': 117, 'end': 119} # {'entity': 'O', 'score': 0.367380827665329, 'index': 22, 'word': '▁des', 'start': 120, 'end': 123} # {'entity': 'O', 'score': 0.3652925491333008, 'index': 23, 'word': '▁systèmes', 'start': 124, 'end': 132} # {'entity': 'O', 'score': 0.3975735306739807, 'index': 24, 'word': '▁scientifiques', 'start': 133, 'end': 146} # {'entity': 'O', 'score': 0.36417365074157715, 'index': 25, 'word': '▁pour', 'start': 147, 'end': 
151} # {'entity': 'O', 'score': 0.32438698410987854, 'index': 26, 'word': '▁extraire', 'start': 152, 'end': 160} # {'entity': 'O', 'score': 0.3416857123374939, 'index': 27, 'word': '▁des', 'start': 161, 'end': 164} # {'entity': 'O', 'score': 0.3674810230731964, 'index': 28, 'word': '▁connaissances', 'start': 165, 'end': 178} # {'entity': 'O', 'score': 0.38362061977386475, 'index': 29, 'word': '▁et', 'start': 179, 'end': 181} # {'entity': 'O', 'score': 0.364640474319458, 'index': 30, 'word': '▁des', 'start': 182, 'end': 185} # {'entity': 'O', 'score': 0.36050117015838623, 'index': 31, 'word': '▁idées', 'start': 186, 'end': 191} # {'entity': 'O', 'score': 0.3768993020057678, 'index': 32, 'word': '▁de', 'start': 192, 'end': 194} # {'entity': 'O', 'score': 0.39184248447418213, 'index': 33, 'word': '▁nombreuses', 'start': 195, 'end': 205} # {'entity': 'ANS', 'score': 0.4091200828552246, 'index': 34, 'word': '▁données', 'start': 206, 'end': 213} # {'entity': 'ANS', 'score': 0.41234123706817627, 'index': 35, 'word': '▁structurelle', 'start': 214, 'end': 226} # {'entity': 'ANS', 'score': 0.40243157744407654, 'index': 36, 'word': 's', 'start': 226, 'end': 227} # {'entity': 'ANS', 'score': 0.4007353186607361, 'index': 37, 'word': '▁et', 'start': 228, 'end': 230} # {'entity': 'ANS', 'score': 0.40597623586654663, 'index': 38, 'word': '▁non', 'start': 231, 'end': 234} # {'entity': 'ANS', 'score': 0.40272021293640137, 'index': 39, 'word': '▁structurée', 'start': 235, 'end': 245} # {'entity': 'O', 'score': 0.392631471157074, 'index': 40, 'word': 's', 'start': 245, 'end': 246} # {'entity': 'O', 'score': 0.34266412258148193, 'index': 41, 'word': '.', 'start': 246, 'end': 247} # {'entity': 'O', 'score': 0.26178646087646484, 'index': 42, 'word': '▁Elle', 'start': 255, 'end': 259} # {'entity': 'O', 'score': 0.2265639454126358, 'index': 43, 'word': '▁est', 'start': 260, 'end': 263} # {'entity': 'O', 'score': 0.22844195365905762, 'index': 44, 'word': '▁souvent', 'start': 264, 'end': 271} # {'entity': 'O', 'score': 0.2475772500038147, 'index': 45, 'word': '▁associée', 'start': 272, 'end': 280} # {'entity': 'O', 'score': 0.3002186715602875, 'index': 46, 'word': '▁aux', 'start': 281, 'end': 284} # {'entity': 'O', 'score': 0.3875720798969269, 'index': 47, 'word': '▁données', 'start': 285, 'end': 292} # {'entity': 'ANS', 'score': 0.445063054561615, 'index': 48, 'word': '▁massive', 'start': 293, 'end': 300} # {'entity': 'ANS', 'score': 0.4419114589691162, 'index': 49, 'word': 's', 'start': 300, 'end': 301} # {'entity': 'ANS', 'score': 0.4240635633468628, 'index': 50, 'word': '▁et', 'start': 302, 'end': 304} # {'entity': 'O', 'score': 0.3900952935218811, 'index': 51, 'word': '▁à', 'start': 305, 'end': 306} # {'entity': 'O', 'score': 0.3784807324409485, 'index': 52, 'word': '▁l', 'start': 307, 'end': 308} # {'entity': 'O', 'score': 0.3459452986717224, 'index': 53, 'word': "'", 'start': 308, 'end': 309} # {'entity': 'O', 'score': 0.37636008858680725, 'index': 54, 'word': 'analyse', 'start': 309, 'end': 316} # {'entity': 'ANS', 'score': 0.4475618302822113, 'index': 55, 'word': '▁des', 'start': 317, 'end': 320} # {'entity': 'ANS', 'score': 0.43845775723457336, 'index': 56, 'word': '▁données', 'start': 321, 'end': 328} # {'entity': 'O', 'score': 0.3761221170425415, 'index': 57, 'word': '.', 'start': 328, 'end': 329} ```
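As noted above, the raw token-level predictions need to be post-processed to be usable. A minimal sketch of such a step (assuming the `text` and `entities` objects built in the snippet above) merges consecutive tokens tagged `ANS` into character-level answer spans:

```python
# Merge consecutive "ANS" tokens into character-level spans of the original text.
def merge_answer_spans(text, entities):
    spans, current = [], None
    for ent in entities:
        if ent["entity"] == "ANS":
            if current is None:
                current = [ent["start"], ent["end"]]
            else:
                current[1] = ent["end"]
        elif current is not None:
            spans.append(text[current[0]:current[1]].strip())
            current = None
    if current is not None:
        spans.append(text[current[0]:current[1]].strip())
    return spans

print(merge_answer_spans(text, entities))
# e.g. ['des données', 'données structurelles et non structurée', 'massives et', 'des données']
```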
sontn122/xlm-roberta-large-finetuned-squad-v2
sontn122
2021-10-11T13:30:06Z
28
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: xlm-roberta-large-finetuned-squad-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-large-finetuned-squad-v2 This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.4627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.029 | 1.0 | 950 | 0.9281 | | 0.9774 | 2.0 | 1900 | 0.6130 | | 0.6781 | 3.0 | 2850 | 0.4627 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
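For reference, the fine-tuned checkpoint can be tried with the standard extractive question-answering pipeline — a minimal, illustrative sketch (the question and context below are made-up examples):

```python
from transformers import pipeline

model_id = "sontn122/xlm-roberta-large-finetuned-squad-v2"
qa = pipeline("question-answering", model=model_id, tokenizer=model_id)

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of xlm-roberta-large on the squad_v2 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```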
chrommium/sbert_large-finetuned-sent_in_news_sents_3lab
chrommium
2021-10-11T13:29:58Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: sbert_large-finetuned-sent_in_news_sents_3lab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sbert_large-finetuned-sent_in_news_sents_3lab This model is a fine-tuned version of [sberbank-ai/sbert_large_nlu_ru](https://huggingface.co/sberbank-ai/sbert_large_nlu_ru) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9443 - Accuracy: 0.8580 - F1: 0.6199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 17 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 264 | 0.6137 | 0.8608 | 0.3084 | | 0.524 | 2.0 | 528 | 0.6563 | 0.8722 | 0.4861 | | 0.524 | 3.0 | 792 | 0.7110 | 0.8494 | 0.4687 | | 0.2225 | 4.0 | 1056 | 0.7323 | 0.8608 | 0.6015 | | 0.2225 | 5.0 | 1320 | 0.9604 | 0.8551 | 0.6185 | | 0.1037 | 6.0 | 1584 | 0.8801 | 0.8523 | 0.5535 | | 0.1037 | 7.0 | 1848 | 0.9443 | 0.8580 | 0.6199 | | 0.0479 | 8.0 | 2112 | 1.0048 | 0.8608 | 0.6168 | | 0.0479 | 9.0 | 2376 | 0.9757 | 0.8551 | 0.6097 | | 0.0353 | 10.0 | 2640 | 1.0743 | 0.8580 | 0.6071 | | 0.0353 | 11.0 | 2904 | 1.1216 | 0.8580 | 0.6011 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
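The card does not list the three sentiment labels; they can be read from the model configuration — a quick sketch (the names may be generic `LABEL_0`/`LABEL_1`/`LABEL_2` ids if no human-readable names were set):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("chrommium/sbert_large-finetuned-sent_in_news_sents_3lab")
print(config.num_labels, config.id2label)  # number of classes and the id-to-label mapping
```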
GKLMIP/electra-myanmar-base-uncased
GKLMIP
2021-10-11T04:58:43Z
5
0
transformers
[ "transformers", "pytorch", "electra", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
The usage of the tokenizer for Myanmar is the same as for Lao in https://github.com/GKLMIP/Pretrained-Models-For-Laos. If you use our model, please consider citing our paper: ``` @InProceedings{, author="Jiang, Shengyi and Huang, Xiuwen and Cai, Xiaonan and Lin, Nankai", title="Pre-trained Models and Evaluation Data for the Myanmar Language", booktitle="The 28th International Conference on Neural Information Processing", year="2021", publisher="Springer International Publishing", address="Cham", } ```
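A minimal loading sketch, assuming the tokenizer files in this repository load through `AutoTokenizer` (the linked GitHub repository documents the exact tokenization rules):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "GKLMIP/electra-myanmar-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# fill_mask(f"... {tokenizer.mask_token} ...")  # pass a Myanmar sentence containing the mask token
```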
suwani/BERT_NER_Ep5_PAD_75-finetuned-ner
suwani
2021-10-11T04:05:50Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: BERT_NER_Ep5_PAD_75-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_NER_Ep5_PAD_75-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3504 - Precision: 0.6469 - Recall: 0.7246 - F1: 0.6835 - Accuracy: 0.9013 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 288 | 0.3695 | 0.5799 | 0.6200 | 0.5993 | 0.8792 | | 0.4695 | 2.0 | 576 | 0.3443 | 0.5823 | 0.7252 | 0.6460 | 0.8862 | | 0.4695 | 3.0 | 864 | 0.3189 | 0.6407 | 0.7030 | 0.6704 | 0.8978 | | 0.2184 | 4.0 | 1152 | 0.3458 | 0.6383 | 0.7335 | 0.6826 | 0.8980 | | 0.2184 | 5.0 | 1440 | 0.3504 | 0.6469 | 0.7246 | 0.6835 | 0.9013 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
unicamp-dl/translation-en-pt-t5
unicamp-dl
2021-10-11T03:47:21Z
8,676
20
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "translation", "en", "pt", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - pt datasets: - EMEA - ParaCrawl 99k - CAPES - Scielo - JRC-Acquis - Biomedical Domain Corpora tags: - translation metrics: - bleu --- # Introduction This repository provides an implementation of T5 for EN-PT translation tasks, trained using a modest hardware setup. We propose some changes to the tokenizer and post-processing that improve the results, and we used a Portuguese pretrained model for the translation. You can find more information in [our repository](https://github.com/unicamp-dl/Lite-T5-Translation). Also, check [our paper](https://aclanthology.org/2020.wmt-1.90.pdf)! # Usage Just follow the "Use in Transformers" instructions. It is necessary to prepend a few words to define the task for T5. You can also create a pipeline for it. An example with the phrase "I like to eat rice" is: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/translation-en-pt-t5") model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/translation-en-pt-t5") enpt_pipeline = pipeline('text2text-generation', model=model, tokenizer=tokenizer) enpt_pipeline("translate English to Portuguese: I like to eat rice.") ``` # Citation ```bibtex @inproceedings{lopes-etal-2020-lite, title = "Lite Training Strategies for {P}ortuguese-{E}nglish and {E}nglish-{P}ortuguese Translation", author = "Lopes, Alexandre and Nogueira, Rodrigo and Lotufo, Roberto and Pedrini, Helio", booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.wmt-1.90", pages = "833--840", } ```
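The pipeline above hides the decoding settings; the same translation can also be produced by calling `generate` directly on the `tokenizer` and `model` objects created in that snippet — a sketch with commonly used beam-search settings:

```python
inputs = tokenizer("translate English to Portuguese: I like to eat rice.", return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```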
unicamp-dl/translation-pt-en-t5
unicamp-dl
2021-10-11T03:47:04Z
366
25
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "translation", "en", "pt", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - pt datasets: - EMEA - ParaCrawl 99k - CAPES - Scielo - JRC-Acquis - Biomedical Domain Corpora tags: - translation metrics: - bleu --- # Introduction This repository provides an implementation of T5 for PT-EN translation tasks, trained using a modest hardware setup. We propose some changes to the tokenizer and post-processing that improve the results, and we used a Portuguese pretrained model for the translation. You can find more information in [our repository](https://github.com/unicamp-dl/Lite-T5-Translation). Also, check [our paper](https://aclanthology.org/2020.wmt-1.90.pdf)! # Usage Just follow the "Use in Transformers" instructions. It is necessary to prepend a few words to define the task for T5. You can also create a pipeline for it. An example with the phrase "Eu gosto de comer arroz" is: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/translation-pt-en-t5") model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/translation-pt-en-t5") pten_pipeline = pipeline('text2text-generation', model=model, tokenizer=tokenizer) pten_pipeline("translate Portuguese to English: Eu gosto de comer arroz.") ``` # Citation ```bibtex @inproceedings{lopes-etal-2020-lite, title = "Lite Training Strategies for {P}ortuguese-{E}nglish and {E}nglish-{P}ortuguese Translation", author = "Lopes, Alexandre and Nogueira, Rodrigo and Lotufo, Roberto and Pedrini, Helio", booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.wmt-1.90", pages = "833--840", } ```
bsingh/roberta_goEmotion
bsingh
2021-10-11T00:26:09Z
992
3
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "emotions", "en", "dataset:go_emotions", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - text-classification - pytorch - roberta - emotions datasets: - go_emotions license: mit widget: - text: "I am not feeling well today." --- ## This model is trained on the GoEmotions dataset, which contains 58k labeled Reddit comments with 28 emotions - admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise + neutral ## Training details: - The training script is provided here: https://github.com/bsinghpratap/roberta_train_goEmotion - Please feel free to open an issue in the repo if you have trouble running the model, and I will try to respond as soon as possible. - The model works well on most of the emotions except: 'desire', 'disgust', 'embarrassment', 'excitement', 'fear', 'grief', 'nervousness', 'pride', 'relief', 'remorse', 'surprise' - I'll try to fine-tune the model further and update here if RoBERTa achieves better performance. - Each text datapoint can have more than 1 label. Most of the training set had 1 label: Counter({1: 36308, 2: 6541, 3: 532, 4: 28, 5: 1}). So currently I just used the first label for each datapoint. Not ideal, but it does a decent job. ## Model Performance

| Emotion | GoEmotions Paper | RoBERTa | Support |
|---|---|---|---|
| admiration | 0.65 | 0.62 | 504 |
| amusement | 0.80 | 0.78 | 252 |
| anger | 0.47 | 0.44 | 197 |
| annoyance | 0.34 | 0.22 | 286 |
| approval | 0.36 | 0.31 | 318 |
| caring | 0.39 | 0.24 | 114 |
| confusion | 0.37 | 0.29 | 139 |
| curiosity | 0.54 | 0.48 | 233 |
| disappointment | 0.28 | 0.18 | 127 |
| disapproval | 0.39 | 0.26 | 220 |
| gratitude | 0.86 | 0.84 | 288 |
| joy | 0.51 | 0.47 | 116 |
| love | 0.78 | 0.68 | 169 |
| neutral | 0.68 | 0.61 | 1606 |
| optimism | 0.51 | 0.52 | 120 |
| realization | 0.21 | 0.15 | 109 |
| sadness | 0.49 | 0.42 | 108 |
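The model can be loaded with the text-classification pipeline; since each comment may carry several emotions, returning the score for every label is usually more informative than the top label alone — a minimal sketch using the widget sentence above:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bsingh/roberta_goEmotion",
    return_all_scores=True,  # one score per emotion label
)
print(classifier("I am not feeling well today."))
```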
imzachjohnson/autonlp-spinner-check-16492731
imzachjohnson
2021-10-11T00:02:11Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:imzachjohnson/autonlp-data-spinner-check", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - imzachjohnson/autonlp-data-spinner-check --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 16492731 ## Validation Metrics - Loss: 0.21610039472579956 - Accuracy: 0.9155366722657816 - Precision: 0.9530714194995978 - Recall: 0.944871149164778 - AUC: 0.9553238723676906 - F1: 0.9489535692456846 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/imzachjohnson/autonlp-spinner-check-16492731 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("imzachjohnson/autonlp-spinner-check-16492731", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
S34NtheGuy/DialoGPT-small-cursedryno
S34NtheGuy
2021-10-10T21:57:32Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational --- # DialoGPT chat bot model using discord messages as data
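A minimal interactive sketch following the usual DialoGPT chat pattern (the conversation history is grown turn by turn and fed back to the model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "S34NtheGuy/DialoGPT-small-cursedryno"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for _ in range(3):  # three chat turns
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```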
Fiddi/distilbert-base-uncased-finetuned-ner
Fiddi
2021-10-10T20:08:19Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9290544285555925 - name: Recall type: recall value: 0.9375769101689228 - name: F1 type: f1 value: 0.9332962138084633 - name: Accuracy type: accuracy value: 0.9841136193940935 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9291 - Recall: 0.9376 - F1: 0.9333 - Accuracy: 0.9841 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2412 | 1.0 | 878 | 0.0688 | 0.9178 | 0.9246 | 0.9212 | 0.9815 | | 0.0514 | 2.0 | 1756 | 0.0608 | 0.9251 | 0.9344 | 0.9298 | 0.9832 | | 0.0304 | 3.0 | 2634 | 0.0604 | 0.9291 | 0.9376 | 0.9333 | 0.9841 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
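For inference, the checkpoint can be used through the token-classification pipeline; `aggregation_strategy="simple"` groups word pieces back into whole entities — a minimal sketch (the label set follows the CoNLL-2003 scheme: PER, ORG, LOC, MISC):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Fiddi/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```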
S34NtheGuy/DialoGPT-small-wetterlettuce
S34NtheGuy
2021-10-10T17:59:38Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- tags: - conversational --- # DialoGPT chat bot model using discord messages as data
mamlong34/t5_small_cosmos_qa
mamlong34
2021-10-10T15:37:59Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:cosmos_qa", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cosmos_qa metrics: - accuracy model-index: - name: t5_small_cosmos_qa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5_small_cosmos_qa This model is a fine-tuned version of [mamlong34/t5_small_race_mutlirc](https://huggingface.co/mamlong34/t5_small_race_mutlirc) on the cosmos_qa dataset. It achieves the following results on the evaluation set: - Loss: 0.5614 - Accuracy: 0.6067 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4811 | 1.0 | 3158 | 0.5445 | 0.5548 | | 0.4428 | 2.0 | 6316 | 0.5302 | 0.5836 | | 0.3805 | 3.0 | 9474 | 0.5614 | 0.6067 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.12.1 - Tokenizers 0.10.3
hiiii23/distilbert-base-uncased-finetuned-squad
hiiii23
2021-10-10T13:02:48Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/fnet-large-finetuned-cola-copy3
gchhablani
2021-10-10T11:08:30Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: fnet-large-finetuned-cola-copy3 results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-large-finetuned-cola-copy3 This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6554 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6408 | 1.0 | 2138 | 0.7329 | 0.0 | | 0.6589 | 2.0 | 4276 | 0.6311 | 0.0 | | 0.6467 | 3.0 | 6414 | 0.6554 | 0.0 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
JonatanGk/roberta-base-ca-finetuned-cyberbullying-catalan
JonatanGk
2021-10-10T09:50:17Z
5
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "catalan", "ca", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- language: ca tags: - "catalan" metrics: - accuracy widget: - text: "Ets més petita que un barrufet!!" - text: "Ets tan lletja que et donaven de menjar per sota la porta." --- # roberta-base-ca-finetuned-cyberbullying-catalan This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on a dataset generated by scraping social networks (Twitter, YouTube, ...) to detect cyberbullying in Catalan. It achieves the following results on the evaluation set: - Loss: 0.1508 - Accuracy: 0.9665 ## Training and evaluation data I used the concatenation of multiple datasets generated by scraping social networks (Twitter, YouTube, Discord, ...) to fine-tune this model. The total number of sentences is above 410k. Trained with a similar method as [roberta-base-bne-finetuned-cyberbullying-spanish](https://huggingface.co/JonatanGk/roberta-base-bne-finetuned-cyberbullying-spanish) ## Training procedure <details> ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 </details> ### Model in action 🚀 Fast usage with **pipelines**: ```python from transformers import pipeline model_path = "JonatanGk/roberta-base-ca-finetuned-ciberbullying-catalan" bullying_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path) bullying_analysis( "Des que et vaig veure m'en vaig enamorar de tu." ) # Output: [{'label': 'Not_bullying', 'score': 0.9996786117553711}] bullying_analysis( "Ets tan lletja que et donaven de menjar per sota la porta." ) # Output: [{'label': 'Bullying', 'score': 0.9927878975868225}] ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Cyberbullying_detection_(CATALAN).ipynb) ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3 ## Citation ```bibtex @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` > Special thanks to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C. > Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/)
ThomasSimonini/t5-end2end-question-generation
ThomasSimonini
2021-10-10T08:30:38Z
3,055
15
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: t5-end2end-question-generation results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: squad type: squad args: plain_text --- # t5-end2end-question-generation This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset to generate questions based on a context. 👉 If you want to learn how to fine-tune the t5 model to do the same, you can follow this [tutorial](https://colab.research.google.com/drive/1z-Zl2hftMrFXabYfmz8o9YZpgYx6sGeW?usp=sharing) For instance: ``` Context: "Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace." ``` ``` Questions: Who created Python?, When was Python first released? What is Python's design philosophy? ``` It achieves the following results on the evaluation set: - Loss: 1.5691 ## Use the Model ``` from transformers import T5ForConditionalGeneration, T5TokenizerFast hfmodel = T5ForConditionalGeneration.from_pretrained("ThomasSimonini/t5-end2end-question-generation") text= "The abolition of feudal privileges by the National Constituent Assembly on 4 August 1789 and the Declaration \\nof the Rights of Man and of the Citizen (La Déclaration des Droits de l'Homme et du Citoyen), drafted by Lafayette \\nwith the help of Thomas Jefferson and adopted on 26 August, paved the way to a Constitutional Monarchy \\n(4 September 1791 – 21 September 1792). Despite these dramatic changes, life at the court continued, while the situation \\nin Paris was becoming critical because of bread shortages in September. On 5 October 1789, a crowd from Paris descended upon Versailles \\nand forced the royal family to move to the Tuileries Palace in Paris, where they lived under a form of house arrest under \\nthe watch of Lafayette's Garde Nationale, while the Comte de Provence and his wife were allowed to reside in the \\nPetit Luxembourg, where they remained until they went into exile on 20 June 1791." 
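# Note: the snippet above imports T5TokenizerFast but never instantiates a tokenizer,
# although `tokenizer` is used below; creating it here makes the example runnable.
tokenizer = T5TokenizerFast.from_pretrained("ThomasSimonini/t5-end2end-question-generation")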
def run_model(input_string, **generator_args): generator_args = { "max_length": 256, "num_beams": 4, "length_penalty": 1.5, "no_repeat_ngram_size": 3, "early_stopping": True, } input_string = "generate questions: " + input_string + " </s>" input_ids = tokenizer.encode(input_string, return_tensors="pt") res = hfmodel.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) output = [item.split("<sep>") for item in output] return output run_model(text) => [['When did the National Constituent Assembly abolish feudal privileges?', ' Who drafted the Declaration of the Rights of Man and of the Citizen?', ' When was the Constitutional Monarchy established?', ' What was the name of the Declaration that paved the way to a constitutional monarchy?', '']] ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5834 | 0.34 | 100 | 1.9107 | | 1.9642 | 0.68 | 200 | 1.7227 | | 1.8526 | 1.02 | 300 | 1.6627 | | 1.7383 | 1.36 | 400 | 1.6354 | | 1.7223 | 1.69 | 500 | 1.6154 | | 1.6871 | 2.03 | 600 | 1.6096 | | 1.6309 | 2.37 | 700 | 1.6048 | | 1.6242 | 2.71 | 800 | 1.5923 | | 1.6226 | 3.05 | 900 | 1.5855 | | 1.5645 | 3.39 | 1000 | 1.5874 | | 1.5705 | 3.73 | 1100 | 1.5822 | | 1.5543 | 4.07 | 1200 | 1.5817 | | 1.5284 | 4.41 | 1300 | 1.5841 | | 1.5275 | 4.75 | 1400 | 1.5741 | | 1.5269 | 5.08 | 1500 | 1.5715 | | 1.5079 | 5.42 | 1600 | 1.5701 | | 1.4876 | 5.76 | 1700 | 1.5754 | | 1.498 | 6.1 | 1800 | 1.5699 | | 1.4852 | 6.44 | 1900 | 1.5693 | | 1.4776 | 6.78 | 2000 | 1.5691 | ### Framework versions - Transformers 4.10.3 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
gchhablani/fnet-large-finetuned-cola-copy2
gchhablani
2021-10-10T07:23:36Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: fnet-large-finetuned-cola-copy2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-large-finetuned-cola-copy2 This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6173 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6192 | 1.0 | 2138 | 0.6443 | 0.0 | | 0.6177 | 2.0 | 4276 | 0.6296 | 0.0 | | 0.6128 | 3.0 | 6414 | 0.6173 | 0.0 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
MaryaAI
2021-10-10T06:33:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "dataset:syssr_en_ar", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - syssr_en_ar metrics: - bleu model-index: - name: opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: syssr_en_ar type: syssr_en_ar args: default metrics: - name: Bleu type: bleu value: 7.9946 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset. It achieves the following results on the evaluation set: - Loss: 1.2046 - Bleu: 7.9946 - Gen Len: 20.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 1 | 1.2038 | 7.9946 | 20.0 | | No log | 2.0 | 2 | 1.2038 | 7.9946 | 20.0 | | No log | 3.0 | 3 | 1.2038 | 7.9946 | 20.0 | | No log | 4.0 | 4 | 1.2036 | 7.9946 | 20.0 | | No log | 5.0 | 5 | 1.2046 | 7.9946 | 20.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
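The card above lists only training details; the sketch below shows one way to run inference with the fine-tuned Marian checkpoint. Note that the base model is Helsinki-NLP/opus-mt-en-ar (English to Arabic) while the repository name says "ar-to-en", so the effective translation direction should be verified; the input sentence is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned MarianMT checkpoint described above
translator = pipeline(
    "translation",
    model="MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en",
)

# Illustrative input; the base model translates en->ar, so verify the direction
print(translator("How are you today?", max_length=40))
```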
gchhablani/fnet-large-finetuned-cola-copy
gchhablani
2021-10-10T05:39:18Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: fnet-large-finetuned-cola-copy results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-large-finetuned-cola-copy This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6243 - Matthews Correlation: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6195 | 1.0 | 2138 | 0.6527 | 0.0 | | 0.6168 | 2.0 | 4276 | 0.6259 | 0.0 | | 0.616 | 3.0 | 6414 | 0.6243 | 0.0 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
staceythompson/autonlp-myclassification-fortext-16332728
staceythompson
2021-10-10T00:24:34Z
3
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "unk", "dataset:staceythompson/autonlp-data-myclassification-fortext", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - staceythompson/autonlp-data-myclassification-fortext --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 16332728 ## Validation Metrics - Loss: 0.08077391237020493 - Accuracy: 0.9846153846153847 - Macro F1: 0.9900793650793651 - Micro F1: 0.9846153846153847 - Weighted F1: 0.9846153846153847 - Macro Precision: 0.9900793650793651 - Micro Precision: 0.9846153846153847 - Weighted Precision: 0.9846153846153847 - Macro Recall: 0.9900793650793651 - Micro Recall: 0.9846153846153847 - Weighted Recall: 0.9846153846153847 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/staceythompson/autonlp-myclassification-fortext-16332728 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("staceythompson/autonlp-myclassification-fortext-16332728", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
risingodegua/wine-quality-model
risingodegua
2021-10-09T17:21:02Z
9
1
sklearn
[ "sklearn", "joblib", "structured-data-classification", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - structured-data-classification - sklearn - joblib dataset: - wine-quality widget: structuredData: fixed_acidity: - 7.4 - 7.8 - 10.3 volatile_acidity: - 0.7 - 0.88 - 0.32 citric_acid: - 0 - 0 - 0.45 residual_sugar: - 1.9 - 2.6 - 6.4 chlorides: - 0.076 - 0.098 - 0.073 free_sulfur_dioxide: - 11 - 25 - 5 total_sulfur_dioxide: - 34 - 67 - 13 density: - 0.9978 - 0.9968 - 0.9976 pH: - 3.51 - 3.2 - 3.23 sulphates: - 0.56 - 0.68 - 0.82 alcohol: - 9.4 - 9.8 - 12.6 --- ## Wine Quality classification ### A Simple Example of Scikit-learn Pipeline > Inspired by https://towardsdatascience.com/a-simple-example-of-pipeline-in-machine-learning-with-scikit-learn-e726ffbb6976 by Saptashwa Bhattacharyya ### How to use ```python from huggingface_hub import hf_hub_url, cached_download import joblib import pandas as pd REPO_ID = "julien-c/wine-quality" FILENAME = "sklearn_model.joblib" model = joblib.load(cached_download( hf_hub_url(REPO_ID, FILENAME) )) # model is a `sklearn.pipeline.Pipeline` ``` #### Get sample data from this repo ```python data_file = cached_download( hf_hub_url(REPO_ID, "winequality-red.csv") ) winedf = pd.read_csv(data_file, sep=";") X = winedf.drop(["quality"], axis=1) Y = winedf["quality"] print(X[:3]) ``` | | fixed acidity | volatile acidity | citric acid | residual sugar | chlorides | free sulfur dioxide | total sulfur dioxide | density | pH | sulphates | alcohol | |---:|----------------:|-------------------:|--------------:|-----------------:|------------:|----------------------:|-----------------------:|----------:|-----:|------------:|----------:| | 0 | 7.4 | 0.7 | 0 | 1.9 | 0.076 | 11 | 34 | 0.9978 | 3.51 | 0.56 | 9.4 | | 1 | 7.8 | 0.88 | 0 | 2.6 | 0.098 | 25 | 67 | 0.9968 | 3.2 | 0.68 | 9.8 | | 2 | 7.8 | 0.76 | 0.04 | 2.3 | 0.092 | 15 | 54 | 0.997 | 3.26 | 0.65 | 9.8 | #### Get your prediction ```python labels = model.predict(X[:3]) # [5, 5, 5] ``` #### Eval ```python model.score(X, Y) # 0.6616635397123202 ``` ### 🍷 Disclaimer No red wine was drunk (unfortunately) while training this model 🍷
gchhablani/bert-large-cased-finetuned-rte
gchhablani
2021-10-09T14:14:22Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-large-cased-finetuned-rte results: - task: name: Text Classification type: text-classification dataset: name: GLUE RTE type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.6642599277978339 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-finetuned-rte This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 1.5187 - Accuracy: 0.6643 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6969 | 1.0 | 623 | 0.7039 | 0.5343 | | 0.5903 | 2.0 | 1246 | 0.6461 | 0.7184 | | 0.4557 | 3.0 | 1869 | 1.5187 | 0.6643 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
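RTE is a sentence-pair task, so this checkpoint expects a premise/hypothesis pair rather than a single sentence. A minimal inference sketch follows; the sentences are illustrative and the label names are read from the checkpoint's config at runtime (they may be generic LABEL_0/LABEL_1 if the config does not name them).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gchhablani/bert-large-cased-finetuned-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative premise/hypothesis pair (RTE takes two sentences)
inputs = tokenizer(
    "A man is playing a guitar on stage.",
    "Someone is performing music.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # label names depend on the checkpoint's config
```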
svanhvit/XLMR-ENIS-finetuned-conll_ner
svanhvit
2021-10-08T15:14:21Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:mim_gold_ner", "license:agpl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: agpl-3.0 tags: - generated_from_trainer datasets: - mim_gold_ner metrics: - precision - recall - f1 - accuracy model-index: - name: XLMR-ENIS-finetuned-conll_ner results: - task: name: Token Classification type: token-classification dataset: name: mim_gold_ner type: mim_gold_ner args: mim-gold-ner metrics: - name: Precision type: precision value: 0.8754622097322882 - name: Recall type: recall value: 0.8425622775800712 - name: F1 type: f1 value: 0.8586972290729725 - name: Accuracy type: accuracy value: 0.9860744627305035 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLMR-ENIS-finetuned-conll_ner This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0713 - Precision: 0.8755 - Recall: 0.8426 - F1: 0.8587 - Accuracy: 0.9861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0493 | 1.0 | 2904 | 0.0673 | 0.8588 | 0.8114 | 0.8344 | 0.9841 | | 0.0277 | 2.0 | 5808 | 0.0620 | 0.8735 | 0.8275 | 0.8499 | 0.9855 | | 0.0159 | 3.0 | 8712 | 0.0713 | 0.8755 | 0.8426 | 0.8587 | 0.9861 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
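A minimal NER inference sketch for the checkpoint above. The model was fine-tuned on the Icelandic MIM-GOLD-NER corpus, so the example sentence is an illustrative Icelandic one chosen here, not taken from the card.

```python
from transformers import pipeline

# Load the Icelandic NER checkpoint described above
ner = pipeline(
    "token-classification",
    model="svanhvit/XLMR-ENIS-finetuned-conll_ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Illustrative Icelandic sentence containing person, location and organization mentions
print(ner("Guðrún býr í Reykjavík og vinnur hjá Landsbankanum."))
```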
patrickvonplaten/wav2vec2-large-repro-960h-libri-120k-steps
patrickvonplaten
2021-10-08T14:12:07Z
2
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
https://wandb.ai/patrickvonplaten/pretraining-wav2vec2/reports/Wav2Vec2-Large--VmlldzoxMTAwODM4?accessToken=wm3qzcnldrwsa31tkvf2pdmilw3f63d4twtffs86ou016xjbyilh55uoi3mo1qzc
Ajaykannan6/autonlp-manthan-16122692
Ajaykannan6
2021-10-08T13:52:19Z
9
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autonlp", "unk", "dataset:Ajaykannan6/autonlp-data-manthan", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - Ajaykannan6/autonlp-data-manthan --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 16122692 ## Validation Metrics - Loss: 1.1877621412277222 - Rouge1: 42.0713 - Rouge2: 23.3043 - RougeL: 37.3755 - RougeLsum: 37.8961 - Gen Len: 60.7117 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajaykannan6/autonlp-manthan-16122692 ```
svanhvit/XLMR-ENIS-finetuned-ner-finetuned-conll_ner
svanhvit
2021-10-08T13:38:38Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:mim_gold_ner", "license:agpl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: agpl-3.0 tags: - generated_from_trainer datasets: - mim_gold_ner metrics: - precision - recall - f1 - accuracy model-index: - name: XLMR-ENIS-finetuned-ner-finetuned-conll_ner results: - task: name: Token Classification type: token-classification dataset: name: mim_gold_ner type: mim_gold_ner args: mim-gold-ner metrics: - name: Precision type: precision value: 0.8720365189221028 - name: Recall type: recall value: 0.8429893238434164 - name: F1 type: f1 value: 0.8572669368847712 - name: Accuracy type: accuracy value: 0.9857922913838598 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLMR-ENIS-finetuned-ner-finetuned-conll_ner This model is a fine-tuned version of [vesteinn/XLMR-ENIS-finetuned-ner](https://huggingface.co/vesteinn/XLMR-ENIS-finetuned-ner) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0770 - Precision: 0.8720 - Recall: 0.8430 - F1: 0.8573 - Accuracy: 0.9858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0461 | 1.0 | 2904 | 0.0647 | 0.8588 | 0.8107 | 0.8341 | 0.9842 | | 0.0244 | 2.0 | 5808 | 0.0704 | 0.8691 | 0.8296 | 0.8489 | 0.9849 | | 0.0132 | 3.0 | 8712 | 0.0770 | 0.8720 | 0.8430 | 0.8573 | 0.9858 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
raynardj/roberta-pubmed
raynardj
2021-10-08T02:58:27Z
8
2
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "pubmed", "cancer", "gene", "clinical trial", "bioinformatic", "en", "dataset:pubmed", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - en tags: - pubmed - cancer - gene - clinical trial - bioinformatic license: apache-2.0 datasets: - pubmed widget: - text: "The <mask> effects of hyperatomarin" --- # Roberta-Base fine-tuned on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) Abstract > We limit the training textual data to the following [MeSH](https://www.ncbi.nlm.nih.gov/mesh/) * All the child MeSH of ```Biomarkers, Tumor(D014408)```, including things like ```Carcinoembryonic Antigen(D002272)``` * All the child MeSH of ```Carcinoma(D002277)```, including things like all kinds of carcinoma: like ```Carcinoma, Lewis Lung(D018827)``` etc. around 80 kinds of carcinoma * All the child MeSH of ```Clinical Trial(D016439)``` * The training text file amounts to 531Mb ## Training * Trained on language modeling task, with ```mlm_probability=0.15```, on 2 Tesla V100 32G ```python training_args = TrainingArguments( output_dir=config.save, #select model path for checkpoint overwrite_output_dir=True, num_train_epochs=3, per_device_train_batch_size=30, per_device_eval_batch_size=60, evaluation_strategy= 'steps', save_total_limit=2, eval_steps=250, metric_for_best_model='eval_loss', greater_is_better=False, load_best_model_at_end =True, prediction_loss_only=True, report_to = "none") ```
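The card above documents the domain-adaptive pre-training but not inference; a minimal fill-mask sketch follows, reusing the masked sentence from the card's widget example.

```python
from transformers import pipeline

# Load the PubMed-adapted RoBERTa checkpoint described above
fill = pipeline("fill-mask", model="raynardj/roberta-pubmed")

# Masked-LM query taken from the card's widget example
for pred in fill("The <mask> effects of hyperatomarin"):
    print(pred["token_str"], round(pred["score"], 4))
```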
gchhablani/fnet-large-finetuned-sst2
gchhablani
2021-10-07T16:48:43Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "fnet", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: fnet-large-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: GLUE SST2 type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9048165137614679 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fnet-large-finetuned-sst2 This model is a fine-tuned version of [google/fnet-large](https://huggingface.co/google/fnet-large) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.5240 - Accuracy: 0.9048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.394 | 1.0 | 16838 | 0.3896 | 0.8968 | | 0.2076 | 2.0 | 33676 | 0.5100 | 0.8956 | | 0.1148 | 3.0 | 50514 | 0.5240 | 0.9048 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0 - Datasets 1.12.1 - Tokenizers 0.10.3
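A minimal sentiment-classification sketch for the SST-2 checkpoint above; the input sentence is illustrative and the label names come from the checkpoint's config.

```python
from transformers import pipeline

# Load the FNet SST-2 checkpoint described above
classifier = pipeline("text-classification", model="gchhablani/fnet-large-finetuned-sst2")

# Illustrative input sentence
print(classifier("This movie was a beautiful surprise from start to finish."))
```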
arjun3816/autonlp-pegas_large_samsum-15892673
arjun3816
2021-10-07T15:05:32Z
4
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "unk", "dataset:arjun3816/autonlp-data-pegas_large_samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - arjun3816/autonlp-data-pegas_large_samsum --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 15892673 ## Validation Metrics - Loss: 1.3661842346191406 - Rouge1: 50.8868 - Rouge2: 26.996 - RougeL: 42.9088 - RougeLsum: 46.6748 - Gen Len: 20.716 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/arjun3816/autonlp-pegas_large_samsum-15892673 ```
huggingtweets/theqwaincrane
huggingtweets
2021-10-07T14:31:53Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/theqwaincrane/1633617055766/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1422024471368507400/a7QrcUd-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">didgeridoogus</div> <div style="text-align: center; font-size: 14px;">@theqwaincrane</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from didgeridoogus. | Data | didgeridoogus | | --- | --- | | Tweets downloaded | 3103 | | Retweets | 1841 | | Short tweets | 137 | | Tweets kept | 1125 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1n6d7k8x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theqwaincrane's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wskchoi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wskchoi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/theqwaincrane') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
philschmid/BERT-tweet-eval-emotion
philschmid
2021-10-07T13:19:11Z
7
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:tweet_eval", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry" datasets: - tweet_eval model-index: - name: BERT-tweet-eval-emotion results: - task: name: Sentiment Analysis type: sentiment-analysis dataset: name: "tweeteval" type: tweet-eval metrics: - name: Accuracy type: accuracy value: 81.00 - name: Macro F1 type: macro-f1 value: 77.37 - name: Weighted F1 type: weighted-f1 value: 80.63 --- # `BERT-tweet-eval-emotion` trained using autoNLP - Problem type: Multi-class Classification ## Validation Metrics - Loss: 0.5408923625946045 - Accuracy: 0.8099929627023223 - Macro F1: 0.7737195387641751 - Micro F1: 0.8099929627023222 - Weighted F1: 0.8063100677512649 - Macro Precision: 0.8083955817268176 - Micro Precision: 0.8099929627023223 - Weighted Precision: 0.8104009668394634 - Macro Recall: 0.7529197049888299 - Micro Recall: 0.8099929627023223 - Weighted Recall: 0.8099929627023223 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"}' https://api-inference.huggingface.co/models/philschmid/BERT-tweet-eval-emotion ``` Or Python API: ```py from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_id = 'philschmid/BERT-tweet-eval-emotion' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSequenceClassification.from_pretrained(model_id) classifier = pipeline('text-classification', tokenizer=tokenizer, model=model) classifier("Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry") ```
philschmid/DistilBERT-tweet-eval-emotion
philschmid
2021-10-07T13:19:01Z
5
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autonlp", "en", "dataset:tweet_eval", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry" datasets: - tweet_eval model-index: - name: DistilBERT-tweet-eval-emotion results: - task: name: Sentiment Analysis type: sentiment-analysis dataset: name: "tweeteval" type: tweet-eval metrics: - name: Accuracy type: accuracy value: 80.59 - name: Macro F1 type: macro-f1 value: 78.17 - name: Weighted F1 type: weighted-f1 value: 80.11 --- # `DistilBERT-tweet-eval-emotion` trained using autoNLP - Problem type: Multi-class Classification ## Validation Metrics - Loss: 0.5564454197883606 - Accuracy: 0.8057705840957072 - Macro F1: 0.7536021792986777 - Micro F1: 0.8057705840957073 - Weighted F1: 0.8011390170248318 - Macro Precision: 0.7817458823222652 - Micro Precision: 0.8057705840957072 - Weighted Precision: 0.8025156844840151 - Macro Recall: 0.7369154685020982 - Micro Recall: 0.8057705840957072 - Weighted Recall: 0.8057705840957072 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"}' https://api-inference.huggingface.co/models/philschmid/DistilBERT-tweet-eval-emotion ``` Or Python API: ```py from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_id = 'philschmid/DistilBERT-tweet-eval-emotion' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSequenceClassification.from_pretrained(model_id) classifier = pipeline('text-classification', tokenizer=tokenizer, model=model) classifier("Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry") ```
hiiamsid/est5-base-qg
hiiamsid
2021-10-07T09:26:49Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "spanish", "question generation", "qg", "es", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: ["es"] tags: - spanish - question generation - qg datasets: - squad_es license: mit --- This is the fine-tuned version of hiiamsid/est5-base for the question generation task. * Here the input is the context only and the output is questions; no information about the answers was given to the model. * Unfortunately, due to limited resources, it was fine-tuned with batch_size=10 and num_seq_len=256, so if the context is too long the model may miss information from its last portions. ``` from transformers import T5ForConditionalGeneration, T5Tokenizer MODEL_NAME = 'hiiamsid/est5-base-qg' model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME) tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME) model.cuda(); model.eval(); def generate_question(text, beams=10, grams=2, num_return_seq=10, max_size=256): x = tokenizer(text, return_tensors='pt', padding=True).to(model.device) out = model.generate(**x, no_repeat_ngram_size=grams, num_beams=beams, num_return_sequences=num_return_seq, max_length=max_size) return tokenizer.decode(out[0], skip_special_tokens=True) print(generate_question('Any context in Spanish from which a question is to be generated')) ``` ## Citing & Authors - Datasets : [squad_es](https://huggingface.co/datasets/squad_es) - Model : [hiiamsid/est5-base](https://huggingface.co/hiiamsid/est5-base)
huggingartists/bryan-adams
huggingartists
2021-10-07T08:16:16Z
4
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/bryan-adams", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/bryan-adams tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/2cb27a7f3f50142f45cd18fae968738c.750x750x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bryan Adams</div> <a href="https://genius.com/artists/bryan-adams"> <div style="text-align: center; font-size: 14px;">@bryan-adams</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Bryan Adams. Dataset is available [here](https://huggingface.co/datasets/huggingartists/bryan-adams). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/bryan-adams") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/22ksbpsz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Bryan Adams's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/3b0c22fu) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/3b0c22fu/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/bryan-adams') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/bryan-adams") model = AutoModelWithLMHead.from_pretrained("huggingartists/bryan-adams") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
arjun3816/autonlp-sam_summarization1-15492651
arjun3816
2021-10-07T02:28:05Z
4
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "unk", "dataset:arjun3816/autonlp-data-sam_summarization1", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - arjun3816/autonlp-data-sam_summarization1 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 15492651 ## Validation Metrics - Loss: 1.4060134887695312 - Rouge1: 50.9953 - Rouge2: 35.9204 - RougeL: 43.5673 - RougeLsum: 46.445 - Gen Len: 58.0193 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/arjun3816/autonlp-sam_summarization1-15492651 ```
huggingartists/the-weeknd
huggingartists
2021-10-06T11:02:39Z
9
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/the-weeknd", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/the-weeknd tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/1bab7f9dbd1216febc16d73ae4da9bd0.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">The Weeknd</div> <a href="https://genius.com/artists/the-weeknd"> <div style="text-align: center; font-size: 14px;">@the-weeknd</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from The Weeknd. Dataset is available [here](https://huggingface.co/datasets/huggingartists/the-weeknd). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/the-weeknd") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/34tqtrsm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on The Weeknd's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1pjby702) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1pjby702/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/the-weeknd') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/the-weeknd") model = AutoModelWithLMHead.from_pretrained("huggingartists/the-weeknd") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
huggingtweets/restrictedwop
huggingtweets
2021-10-06T07:23:26Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://www.huggingtweets.com/restrictedwop/1633505002699/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1445644000547901456/nvlo-aRM_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">mohammad</div> <div style="text-align: center; font-size: 14px;">@restrictedwop</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from mohammad. | Data | mohammad | | --- | --- | | Tweets downloaded | 3208 | | Retweets | 220 | | Short tweets | 788 | | Tweets kept | 2200 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/7l1gtdha/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @restrictedwop's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ly1slypx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ly1slypx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/restrictedwop') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
delpart/distilbert-base-uncased-finetuned-ner
delpart
2021-10-06T03:58:21Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.925115970841617 - name: Recall type: recall value: 0.9370175634858485 - name: F1 type: f1 value: 0.9310287333963209 - name: Accuracy type: accuracy value: 0.9839388692074285 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0602 - Precision: 0.9251 - Recall: 0.9370 - F1: 0.9310 - Accuracy: 0.9839 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2435 | 1.0 | 878 | 0.0685 | 0.9182 | 0.9221 | 0.9202 | 0.9816 | | 0.0515 | 2.0 | 1756 | 0.0602 | 0.9212 | 0.9368 | 0.9289 | 0.9834 | | 0.0301 | 3.0 | 2634 | 0.0602 | 0.9251 | 0.9370 | 0.9310 | 0.9839 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
huggingtweets/beth_kindig-elonmusk-iofundofficial
huggingtweets
2021-10-06T03:14:09Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1442634650703237120/mXIcYtIs_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1441096557944737802/y56EUiiU_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1431003324157812739/QYyroq6k_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Beth Kindig & I/O Fund Official</div> <div style="text-align: center; font-size: 14px;">@beth_kindig-elonmusk-iofundofficial</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & Beth Kindig & I/O Fund Official. | Data | Elon Musk | Beth Kindig | I/O Fund Official | | --- | --- | --- | --- | | Tweets downloaded | 2400 | 3247 | 1935 | | Retweets | 127 | 484 | 143 | | Short tweets | 642 | 273 | 8 | | Tweets kept | 1631 | 2490 | 1784 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3pyiqrq2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beth_kindig-elonmusk-iofundofficial's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3anxlpvl) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3anxlpvl/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/beth_kindig-elonmusk-iofundofficial') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
vuiseng9/bert-base-uncased-mnli
vuiseng9
2021-10-06T02:40:23Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
This model was developed with transformers v4.10.3.

# Train
```bash
#!/usr/bin/env bash

export CUDA_VISIBLE_DEVICES=0

OUTDIR=bert-base-uncased-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR

nohup python run_glue.py \
    --model_name_or_path bert-base-uncased \
    --task_name mnli \
    --do_eval \
    --do_train \
    --per_device_train_batch_size 16 \
    --per_device_eval_batch_size 16 \
    --max_seq_length 128 \
    --num_train_epochs 3 \
    --overwrite_output_dir \
    --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```

# Eval
```bash
export CUDA_VISIBLE_DEVICES=0

OUTDIR=eval-bert-base-uncased-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR

nohup python run_glue.py \
    --model_name_or_path vuiseng9/bert-base-uncased-mnli \
    --task_name mnli \
    --do_eval \
    --per_device_eval_batch_size 16 \
    --max_seq_length 128 \
    --overwrite_output_dir \
    --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
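A minimal inference sketch (not part of the original card; the premise/hypothesis pair is illustrative, and label names are read from the checkpoint's config rather than assumed):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "vuiseng9/bert-base-uncased-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# id2label comes from the fine-tuned config, so no label order is hard-coded here.
print(model.config.id2label[pred_id])
```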
ueb1/XLMR-ENIS-finetuned-ner
ueb1
2021-10-05T23:19:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:mim_gold_ner", "license:agpl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: agpl-3.0 tags: - generated_from_trainer datasets: - mim_gold_ner metrics: - precision - recall - f1 - accuracy model-index: - name: XLMR-ENIS-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: mim_gold_ner type: mim_gold_ner args: mim-gold-ner metrics: - name: Precision type: precision value: 0.8685291700903862 - name: Recall type: recall value: 0.841273450824332 - name: F1 type: f1 value: 0.8546840706942359 - name: Accuracy type: accuracy value: 0.9824748714976435 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLMR-ENIS-finetuned-ner This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0940 - Precision: 0.8685 - Recall: 0.8413 - F1: 0.8547 - Accuracy: 0.9825 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0564 | 1.0 | 2904 | 0.0943 | 0.8505 | 0.8118 | 0.8307 | 0.9798 | | 0.0321 | 2.0 | 5808 | 0.0907 | 0.8610 | 0.8235 | 0.8419 | 0.9814 | | 0.0198 | 3.0 | 8712 | 0.0940 | 0.8685 | 0.8413 | 0.8547 | 0.9825 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
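Not part of the generated card, but a short usage sketch under the usual assumptions (the Icelandic example sentence is only an illustration):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="ueb1/XLMR-ENIS-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Guðrún býr í Reykjavík og vinnur hjá Landsbankanum."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```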
ueb1/IceBERT-finetuned-ner
ueb1
2021-10-05T21:28:47Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "dataset:mim_gold_ner", "license:gpl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: gpl-3.0 tags: - generated_from_trainer datasets: - mim_gold_ner metrics: - precision - recall - f1 - accuracy model-index: - name: IceBERT-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: mim_gold_ner type: mim_gold_ner args: mim-gold-ner metrics: - name: Precision type: precision value: 0.8926985693142575 - name: Recall type: recall value: 0.8648584060222249 - name: F1 type: f1 value: 0.8785579899253504 - name: Accuracy type: accuracy value: 0.985303647287535 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IceBERT-finetuned-ner This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0799 - Precision: 0.8927 - Recall: 0.8649 - F1: 0.8786 - Accuracy: 0.9853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0528 | 1.0 | 2904 | 0.0774 | 0.8784 | 0.8529 | 0.8655 | 0.9829 | | 0.0258 | 2.0 | 5808 | 0.0742 | 0.8769 | 0.8705 | 0.8737 | 0.9843 | | 0.0166 | 3.0 | 8712 | 0.0799 | 0.8927 | 0.8649 | 0.8786 | 0.9853 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
prajjwal1/bert-tiny-mnli
prajjwal1
2021-10-05T18:00:12Z
104
2
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1908.08962", "arxiv:2110.01518", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
This is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). This model was then fine-tuned on MNLI. If you use the model, please consider citing the paper: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` The original implementation and more information can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli). ``` MNLI: 60% MNLI-mm: 61.61% ``` The model was trained for 4 epochs. [@prajjwal_1](https://twitter.com/prajjwal_1)
prajjwal1/bert-small-mnli
prajjwal1
2021-10-05T17:57:54Z
88
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1908.08962", "arxiv:2110.01518", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
This is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). This model was then fine-tuned on MNLI. If you use the model, please consider citing the paper: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` The original implementation and more information can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli). ``` MNLI: 72.1% MNLI-mm: 73.76% ``` The model was trained for 4 epochs. [@prajjwal_1](https://twitter.com/prajjwal_1)
prajjwal1/bert-mini-mnli
prajjwal1
2021-10-05T17:57:20Z
16
0
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "arxiv:1908.08962", "arxiv:2110.01518", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
This is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). These BERT variants were introduced in the paper [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). This model was then fine-tuned on MNLI. If you use the model, please consider citing the paper: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` The original implementation and more information can be found in [this GitHub repository](https://github.com/prajjwal1/generalize_lm_nli). ``` MNLI: 68.04% MNLI-mm: 69.17% ``` The model was trained for 4 epochs. [@prajjwal_1](https://twitter.com/prajjwal_1)
prajjwal1/albert-base-v1-mnli
prajjwal1
2021-10-05T17:54:14Z
4
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "arxiv:2110.01518", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
If you use the model, please consider citing this paper ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
thorduragust/XLMR-ENIS-finetuned-ner
thorduragust
2021-10-05T15:40:05Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:mim_gold_ner", "license:agpl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: agpl-3.0 tags: - generated_from_trainer datasets: - mim_gold_ner metrics: - precision - recall - f1 - accuracy model-index: - name: XLMR-ENIS-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: mim_gold_ner type: mim_gold_ner args: mim-gold-ner metrics: - name: Precision type: precision value: 0.8707943925233644 - name: Recall type: recall value: 0.8475270039795338 - name: F1 type: f1 value: 0.8590031691155287 - name: Accuracy type: accuracy value: 0.982856184128243 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLMR-ENIS-finetuned-ner This model is a fine-tuned version of [vesteinn/XLMR-ENIS](https://huggingface.co/vesteinn/XLMR-ENIS) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0916 - Precision: 0.8708 - Recall: 0.8475 - F1: 0.8590 - Accuracy: 0.9829 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0581 | 1.0 | 2904 | 0.1055 | 0.8477 | 0.8057 | 0.8262 | 0.9791 | | 0.0316 | 2.0 | 5808 | 0.0902 | 0.8574 | 0.8349 | 0.8460 | 0.9813 | | 0.0201 | 3.0 | 8712 | 0.0916 | 0.8708 | 0.8475 | 0.8590 | 0.9829 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
mrm8488/roberta-base-bne-finetuned-sqac
mrm8488
2021-10-05T15:03:21Z
7
2
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "es", "dataset:sqac", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: es license: apache-2.0 tags: - generated_from_trainer datasets: - sqac metrics: - f1 model-index: - name: roberta-base-bne-finetuned-sqac results: - task: name: Question Answering type: Question-Answering dataset: name: sqac type: sqac args: metrics: - name: f1 type: f1 value: 0.7903 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-sqac This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the sqac dataset. It achieves the following results on the evaluation set: - Loss: 1.2111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.9971 | 1.0 | 1196 | 0.8646 | | 0.482 | 2.0 | 2392 | 0.9334 | | 0.1652 | 3.0 | 3588 | 1.2111 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
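A question-answering usage sketch (illustrative and not from the card; the Spanish question/context pair is made up):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrm8488/roberta-base-bne-finetuned-sqac")

result = qa(
    question="¿Dónde vive Manuel?",
    context="Manuel Romero vive en Murcia, España, y trabaja como ingeniero de software.",
)
print(result["answer"], round(result["score"], 3))
```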
eliasbe/IceBERT-finetuned-ner
eliasbe
2021-10-05T12:35:51Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "dataset:mim_gold_ner", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: gpl-3.0 tags: - generated_from_trainer datasets: - mim_gold_ner model-index: - name: IceBERT-finetuned-ner widget: - text: systurnar guðrún og monique voru einar í skóginum umkringdar víði, eik og reyni með þá ósk að sameinast fjölskyldu sinni sem fór á mai thai og í bíó paradís að sjá jim carey leika í the eternal sunshine of the spotless mind. results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IceBERT-finetuned-ner This model is a fine-tuned version of [eliasbe/IceBERT-finetuned-ner](https://huggingface.co/eliasbe/IceBERT-finetuned-ner) on the mim_gold_ner dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
LenaT/distilgpt2-finetuned-wikitext2
LenaT
2021-10-05T12:32:43Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7608 | 1.0 | 2334 | 3.6655 | | 3.6335 | 2.0 | 4668 | 3.6455 | | 3.6066 | 3.0 | 7002 | 3.6424 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Tokenizers 0.10.3
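The card reports only the cross-entropy loss; for language models it is common to also quote perplexity, which is simply the exponential of that loss. A small sketch:

```python
import math

eval_loss = 3.6424  # final validation loss from the table above
print(f"validation perplexity ≈ {math.exp(eval_loss):.1f}")  # ≈ 38.2
```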
huggingtweets/beaniemaxi-loopifyyy-punk6529
huggingtweets
2021-10-05T09:45:40Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1440017111531855879/A4p6F07H_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1440481469231558659/ZjEcoltA_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1435265846436409346/yAV2qzDs_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">6529 & Beanie & Loopify 🧙‍♂️</div> <div style="text-align: center; font-size: 14px;">@beaniemaxi-loopifyyy-punk6529</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 6529 & Beanie & Loopify 🧙‍♂️. | Data | 6529 | Beanie | Loopify 🧙‍♂️ | | --- | --- | --- | --- | | Tweets downloaded | 3249 | 3250 | 3249 | | Retweets | 939 | 391 | 179 | | Short tweets | 525 | 559 | 1194 | | Tweets kept | 1785 | 2300 | 1876 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ejmosjg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @beaniemaxi-loopifyyy-punk6529's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/15k8d8xn) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/15k8d8xn/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/beaniemaxi-loopifyyy-punk6529') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
smallbenchnlp/roberta-small
smallbenchnlp
2021-10-05T04:03:28Z
59
1
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
Small-Bench NLP is a benchmark for small, efficient neural language models trained on a single GPU.
shiyue/wav2vec2-common_voice-tr-demo
shiyue
2021-10-05T01:04:19Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "tr", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - tr license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-tr-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-tr-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.7.1+cu110 - Datasets 1.12.1 - Tokenizers 0.10.3
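A hedged usage sketch (not from the card): transcription through the ASR pipeline, assuming a 16 kHz mono recording and that ffmpeg/soundfile is available for decoding; the filename is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="shiyue/wav2vec2-common_voice-tr-demo",
)

# "ornek.wav" is a placeholder path to a 16 kHz mono Turkish clip.
print(asr("ornek.wav")["text"])
```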
Titantoe/IceBERT-finetuned-ner
Titantoe
2021-10-04T22:31:18Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "dataset:mim_gold_ner", "license:gpl-3.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: gpl-3.0 tags: - generated_from_trainer datasets: - mim_gold_ner metrics: - precision - recall - f1 - accuracy model-index: - name: IceBERT-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: mim_gold_ner type: mim_gold_ner args: mim-gold-ner metrics: - name: Precision type: precision value: 0.8920083733530353 - name: Recall type: recall value: 0.8655753375552635 - name: F1 type: f1 value: 0.8785930867192238 - name: Accuracy type: accuracy value: 0.9855436530476731 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IceBERT-finetuned-ner This model is a fine-tuned version of [vesteinn/IceBERT](https://huggingface.co/vesteinn/IceBERT) on the mim_gold_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0772 - Precision: 0.8920 - Recall: 0.8656 - F1: 0.8786 - Accuracy: 0.9855 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0519 | 1.0 | 2904 | 0.0731 | 0.8700 | 0.8564 | 0.8631 | 0.9832 | | 0.026 | 2.0 | 5808 | 0.0749 | 0.8771 | 0.8540 | 0.8654 | 0.9840 | | 0.0159 | 3.0 | 8712 | 0.0772 | 0.8920 | 0.8656 | 0.8786 | 0.9855 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
ueb1/distilbert-base-uncased-finetuned-ner
ueb1
2021-10-04T18:16:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9290229566374626 - name: Recall type: recall value: 0.9371294328224634 - name: F1 type: f1 value: 0.9330585876587213 - name: Accuracy type: accuracy value: 0.9839547555880344 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0608 - Precision: 0.9290 - Recall: 0.9371 - F1: 0.9331 - Accuracy: 0.9840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2276 | 1.0 | 878 | 0.0685 | 0.9204 | 0.9246 | 0.9225 | 0.9814 | | 0.0498 | 2.0 | 1756 | 0.0622 | 0.9238 | 0.9358 | 0.9298 | 0.9833 | | 0.0298 | 3.0 | 2634 | 0.0608 | 0.9290 | 0.9371 | 0.9331 | 0.9840 | ### Framework versions - Transformers 4.11.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
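As an illustration (not part of the generated card), token-level tags can also be decoded directly from the logits; label names are taken from the checkpoint config:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "ueb1/distilbert-base-uncased-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
model.eval()

text = "Hugging Face is based in New York City."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

label_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[label_id])
```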
andi611/bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat
andi611
2021-10-04T14:52:03Z
74
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "dataset:conll2003", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - en license: cc-by-4.0 tags: - generated_from_trainer datasets: - squad_v2 - conll2003 model_index: - name: bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat results: - task: name: Token Classification type: token-classification dataset: name: squad_v2 type: squad_v2 args: conll2003 - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-whole-word-masking-squad2-with-ner-conll2003-with-neg-with-repeat This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on the squad_v2 and the conll2003 datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.1+cu111 - Datasets 1.8.0 - Tokenizers 0.10.3
Mael7307/bert-base-uncased-mnli
Mael7307
2021-10-04T13:30:13Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
```python
# Label-index remapping provided in the original card (0 -> 2, 1 -> 0, 2 -> 1),
# presumably to align the checkpoint's label ids with the dataset's convention.
for i in range(len(predictions)):
    if predictions[i] == 0:
        predictions[i] = 2
    elif predictions[i] == 1:
        predictions[i] = 0
    elif predictions[i] == 2:
        predictions[i] = 1
```
Elron/bleurt-large-128
Elron
2021-10-04T13:21:56Z
6
2
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
## BLEURT

PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.

The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).

## Usage Example

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-128")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-128")
model.eval()

references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]

with torch.no_grad():
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()

print(scores)  # tensor([ 0.0020, -0.6647])
```
Mael7307/bert-base-uncased-snli
Mael7307
2021-10-04T13:20:31Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
```python
# Label-index remapping provided in the original card (0 -> 2, 1 -> 0, 2 -> 1),
# presumably to align the checkpoint's label ids with the dataset's convention.
for i in range(len(predictions)):
    if predictions[i] == 0:
        predictions[i] = 2
    elif predictions[i] == 1:
        predictions[i] = 0
    elif predictions[i] == 2:
        predictions[i] = 1
```
MultiBertGunjanPatrick/multiberts-seed-10
MultiBertGunjanPatrick
2021-10-04T05:49:42Z
10
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: en tags: - exbert - multiberts license: apache-2.0 datasets: - bookcorpus - wikipedia --- # MultiBERTs Seed 0 (uncased) Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0') model = BertModel.from_pretrained("multiberts-seed-0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
MultiBertGunjanPatrick/multiberts-seed-9
MultiBertGunjanPatrick
2021-10-04T05:47:01Z
6
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: en tags: - exbert - multiberts license: apache-2.0 datasets: - bookcorpus - wikipedia --- # MultiBERTs Seed 0 (uncased) Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0') model = BertModel.from_pretrained("multiberts-seed-0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
MultiBertGunjanPatrick/multiberts-seed-8
MultiBertGunjanPatrick
2021-10-04T05:44:32Z
11
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: en tags: - exbert - multiberts license: apache-2.0 datasets: - bookcorpus - wikipedia --- # MultiBERTs Seed 0 (uncased) Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0') model = BertModel.from_pretrained("multiberts-seed-0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
MultiBertGunjanPatrick/multiberts-seed-6
MultiBertGunjanPatrick
2021-10-04T05:40:19Z
6
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: en tags: - exbert - multiberts license: apache-2.0 datasets: - bookcorpus - wikipedia --- # MultiBERTs Seed 0 (uncased) Seed 0 MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0') model = BertModel.from_pretrained("multiberts-seed-0") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. 
For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
MultiBertGunjanPatrick/multiberts-seed-4
MultiBertGunjanPatrick
2021-10-04T05:35:14Z
8
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 (uncased)

Seed 4 MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
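The next sentence prediction head can be exercised in a similar way. The following is a small sketch that assumes the NSP weights of this pretraining checkpoint load into `BertForNextSentencePrediction`:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4')
model = BertForNextSentencePrediction.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4')

prompt = "The children went outside to play."
candidate = "They came back in when it started to rain."
encoding = tokenizer(prompt, candidate, return_tensors='pt')

with torch.no_grad():
    logits = model(**encoding).logits
# Index 0 scores "candidate follows prompt", index 1 scores "candidate is a random sentence".
print("is next sentence:", logits.argmax(dim=-1).item() == 0)
```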
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-3
MultiBertGunjanPatrick
2021-10-04T05:32:27Z
8
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 3 (uncased)

Seed 3 MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
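If you want to see the sentence-pair format the model was pretrained on (described under Preprocessing below), a quick tokenizer check is enough; this small sketch only assumes the tokenizer files ship with the checkpoint:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-3')

# Encode a sentence pair the same way the pretraining inputs were built.
enc = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(enc['input_ids']))
# ['[CLS]', 'sentence', 'a', '[SEP]', 'sentence', 'b', '[SEP]']
print(enc['token_type_ids'])  # 0 for the first segment, 1 for the second
```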
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-2
MultiBertGunjanPatrick
2021-10-04T05:29:57Z
9
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 2 (uncased)

Seed 2 MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
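You can also query the masked-LM head manually in PyTorch. This is a sketch that assumes the masked-LM weights of this pretraining checkpoint load into `BertForMaskedLM` (pretraining-only weights such as the NSP head are dropped with a warning):

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2')
model = BertForMaskedLM.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2')

inputs = tokenizer("Paris is the [MASK] of France.", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits                       # (1, seq_len, vocab_size)

# Locate the [MASK] position and print the five most likely fillers.
mask_pos = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```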
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-2000k
MultiBertGunjanPatrick
2021-10-04T05:12:58Z
4
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 2000k (uncased)

Seed 4 intermediate checkpoint 2000k of the MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/MultiBertGunjanPatrick/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-2000k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-2000k')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
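To use the produced features in a downstream classifier, a common pattern is to take the vector at the [CLS] position for each input. A minimal sketch, assuming the standard BERT-base hidden size of 768:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-2000k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-2000k')

sentences = ["This movie was great.", "This movie was terrible."]
batch = tokenizer(sentences, padding=True, return_tensors='pt')
with torch.no_grad():
    outputs = model(**batch)

# One 768-dimensional vector per sentence, taken from the [CLS] position.
cls_features = outputs.last_hidden_state[:, 0, :]
print(cls_features.shape)  # torch.Size([2, 768])
```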
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-1900k
MultiBertGunjanPatrick
2021-10-04T05:12:51Z
4
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 1900k (uncased)

Seed 4 intermediate checkpoint 1900k of the MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/MultiBertGunjanPatrick/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1900k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1900k')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
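You can also exercise the masked language modeling objective directly. This is a minimal sketch assuming the `fill-mask` pipeline can pick up the masked-LM weights of this pretraining checkpoint (pretraining-only weights such as the NSP head are ignored):

```python
from transformers import pipeline

# Sketch: top predictions for the masked token.
unmasker = pipeline('fill-mask', model='MultiBertGunjanPatrick/multiberts-seed-4-1900k')
for prediction in unmasker("The doctor picked up the [MASK]."):
    print(prediction['token_str'], prediction['score'])
```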
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-1700k
MultiBertGunjanPatrick
2021-10-04T05:12:38Z
1
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 1700k (uncased)

Seed 4 intermediate checkpoint 1700k of the MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/MultiBertGunjanPatrick/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1700k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1700k')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
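Similarly, the NSP objective described above can be queried at inference time. A minimal sketch, assuming the checkpoint's NSP weights load into `BertForNextSentencePrediction`:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1700k')
model = BertForNextSentencePrediction.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1700k')

sentence_a = "She opened the fridge."
sentence_b = "The stock market closed higher today."
inputs = tokenizer(sentence_a, sentence_b, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits
# A higher score at index 1 means the model considers sentence_b a random sentence.
print(logits.softmax(dim=-1))
```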
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-1600k
MultiBertGunjanPatrick
2021-10-04T05:12:31Z
7
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 1600k (uncased)

Seed 4 intermediate checkpoint 1600k of the MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/MultiBertGunjanPatrick/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1600k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1600k')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
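Another way to turn the output into fixed-size sentence features is to mean-pool the token embeddings over the non-padding positions. The pooling strategy below is an illustration rather than something prescribed by the MultiBERTs authors:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1600k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1600k')

sentences = ["The cat sat on the mat.", "A quick brown fox jumps over the lazy dog."]
batch = tokenizer(sentences, padding=True, return_tensors='pt')
with torch.no_grad():
    hidden = model(**batch).last_hidden_state            # (batch, seq_len, 768)

mask = batch['attention_mask'].unsqueeze(-1)             # ignore padding positions
features = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean-pooled sentence vectors
print(features.shape)                                    # torch.Size([2, 768])
```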
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-1400k
MultiBertGunjanPatrick
2021-10-04T05:12:17Z
4
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 1400k (uncased)

Seed 4 intermediate checkpoint 1400k of the MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/MultiBertGunjanPatrick/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1400k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1400k')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
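For a lower-level look at the MLM objective, you can score candidate fillers for a masked token yourself. A minimal sketch, assuming `BertForMaskedLM` can load the masked-LM weights of this pretraining checkpoint:

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1400k')
model = BertForMaskedLM.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1400k')

inputs = tokenizer("The cat sat on the [MASK].", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits                       # (1, seq_len, vocab_size)

# Locate the [MASK] position and show the five most likely fillers with their probabilities.
mask_pos = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top = logits[0, mask_pos].softmax(dim=-1).topk(5, dim=-1)
print(tokenizer.convert_ids_to_tokens(top.indices[0].tolist()), top.values[0].tolist())
```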
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-1300k
MultiBertGunjanPatrick
2021-10-04T05:12:10Z
6
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 1300k (uncased)

Seed 4 intermediate checkpoint 1300k of the MultiBERTs (pretrained BERT) model on the English language, pretrained using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/MultiBertGunjanPatrick/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1300k')
model = BertModel.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1300k')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
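To inspect how inputs are built (see the Preprocessing section below), you can encode a sentence pair and look at the resulting tokens; this sketch only relies on the tokenizer that comes with the checkpoint:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-4-1300k')

pair = tokenizer("The weather was cold.", "Everyone wore a coat.")
print(tokenizer.convert_ids_to_tokens(pair['input_ids']))  # [CLS] ... [SEP] ... [SEP]
print(pair['token_type_ids'])                              # segment ids for sentences A and B
```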
This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
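Supplementary note: the 80/10/10 masking rule described in the preprocessing section of the card above can be made concrete with a small sketch. This is not the original MultiBERTs preprocessing code; the function name, the seeding and the `-100` ignore-label convention are illustrative assumptions, and it operates on plain lists of token ids.

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, special_ids, mlm_prob=0.15, seed=0):
    """Toy re-implementation of the masking rule: 15% of tokens are selected,
    then replaced by [MASK] (80%), by a random token (10%) or kept as is (10%)."""
    rng = random.Random(seed)
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if tok in special_ids or rng.random() >= mlm_prob:
            labels.append(-100)                    # not a prediction target
            continue
        labels.append(tok)                         # the model must recover this id
        roll = rng.random()
        if roll < 0.8:
            inputs[i] = mask_token_id              # 80%: replace with [MASK]
        elif roll < 0.9:
            inputs[i] = rng.randrange(vocab_size)  # 10%: replace with a random token
        # remaining 10%: leave the original token unchanged
    return inputs, labels
```

Positions labelled `-100` would be ignored by the MLM loss, so only the selected 15% of tokens contribute to training.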
MultiBertGunjanPatrick/multiberts-seed-4-900k
MultiBertGunjanPatrick
2021-10-04T05:11:41Z
4
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 900k (uncased)

Seed 4 intermediate checkpoint 900k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-900k')
model = BertModel.from_pretrained("multiberts-seed-4-900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
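Supplementary note: the `[CLS] Sentence A [SEP] Sentence B [SEP]` layout described in the card above can be inspected directly with the tokenizer. The unqualified checkpoint name below simply follows the card's own snippet and may need to be replaced by the full hub path; the example sentences are arbitrary.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-900k')  # name as in the card's snippet

# Passing two texts makes the tokenizer build the sentence-pair input used for NSP.
encoded = tokenizer("The cat sat on the mat.", "It looked very comfortable.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))  # [CLS] ... [SEP] ... [SEP]
print(encoded["token_type_ids"])                              # 0 for segment A, 1 for segment B
```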
MultiBertGunjanPatrick/multiberts-seed-4-800k
MultiBertGunjanPatrick
2021-10-04T05:11:33Z
4
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 800k (uncased)

Seed 4 intermediate checkpoint 800k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-800k')
model = BertModel.from_pretrained("multiberts-seed-4-800k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
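Supplementary note: since the card above states that the raw model can be used for masked language modeling, here is a hedged sketch of doing so with `BertForMaskedLM`. The example sentence and the top-5 readout are illustrative, and the unqualified checkpoint name again follows the card's own snippet.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

name = 'multiberts-seed-4-800k'  # as written in the card's snippet; adjust to the full hub id if needed
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForMaskedLM.from_pretrained(name)

inputs = tokenizer("The capital of France is [MASK].", return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and list the five most likely replacement tokens.
mask_index = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```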
MultiBertGunjanPatrick/multiberts-seed-4-500k
MultiBertGunjanPatrick
2021-10-04T05:11:11Z
9
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 500k (uncased)

Seed 4 intermediate checkpoint 500k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-500k')
model = BertModel.from_pretrained("multiberts-seed-4-500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
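Supplementary note: the intended use stated in the card above is fine-tuning on sentence-level tasks. A minimal sketch of attaching a fresh classification head is shown below; the two-label toy batch and the single gradient step are assumptions for illustration, not part of the original card.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

name = 'multiberts-seed-4-500k'  # checkpoint name as in the card's snippet; may need the full hub id
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForSequenceClassification.from_pretrained(name, num_labels=2)  # randomly initialised head

batch = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors='pt')
labels = torch.tensor([1, 0])
outputs = model(**batch, labels=labels)  # returns loss and logits when labels are provided
outputs.loss.backward()                  # one illustrative step of a fine-tuning loop
```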
MultiBertGunjanPatrick/multiberts-seed-4-200k
MultiBertGunjanPatrick
2021-10-04T05:10:41Z
1
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 200k (uncased)

Seed 4 intermediate checkpoint 200k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-200k')
model = BertModel.from_pretrained("multiberts-seed-4-200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
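Supplementary note: the pretraining hyperparameters listed above (Adam, learning rate 1e-4, betas 0.9/0.999, weight decay 0.01, 10,000 warmup steps, linear decay over two million steps) can be approximated with standard PyTorch and `transformers` utilities. This is only a sketch: `AdamW` stands in for the Adam-with-weight-decay setup, and the placeholder module below is not the real model.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 768)  # stand-in for the BERT model being pretrained

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # warmup length stated in the card
    num_training_steps=2_000_000,  # two million pretraining steps
)
# Inside the training loop: optimizer.step(); scheduler.step(); optimizer.zero_grad()
```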
MultiBertGunjanPatrick/multiberts-seed-4-160k
MultiBertGunjanPatrick
2021-10-04T05:10:26Z
4
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 160k (uncased)

Seed 4 intermediate checkpoint 160k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-160k')
model = BertModel.from_pretrained("multiberts-seed-4-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
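Supplementary note: the lowercasing and WordPiece splitting described in the preprocessing section above can be probed directly through the tokenizer. The vocabulary size reported by the library for standard uncased BERT is 30,522 rather than the rounded 30,000 quoted in the card, so the printed value is worth checking for this checkpoint.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-160k')  # name as in the card's snippet

print(tokenizer.vocab_size)  # expected to be close to the ~30,000 entries quoted above
print(tokenizer.tokenize("Tokenization SHOWS lowercasing and WordPiece splits."))
# subword pieces appear with the '##' continuation prefix
```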
MultiBertGunjanPatrick/multiberts-seed-4-140k
MultiBertGunjanPatrick
2021-10-04T05:10:19Z
1
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 140k (uncased)

Seed 4 intermediate checkpoint 140k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-140k')
model = BertModel.from_pretrained("multiberts-seed-4-140k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-120k
MultiBertGunjanPatrick
2021-10-04T05:10:11Z
1
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 120k (uncased)

Seed 4 intermediate checkpoint 120k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-120k')
model = BertModel.from_pretrained("multiberts-seed-4-120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-80k
MultiBertGunjanPatrick
2021-10-04T05:09:58Z
1
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 80k (uncased)

Seed 4 intermediate checkpoint 80k of the MultiBERTs (pretrained BERT) model, trained on English text with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, each model was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-80k')
model = BertModel.from_pretrained("multiberts-seed-4-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet present in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of the cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
MultiBertGunjanPatrick/multiberts-seed-4-60k
MultiBertGunjanPatrick
2021-10-04T05:09:51Z
4
0
transformers
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-4
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# MultiBERTs Seed 4 Checkpoint 60k (uncased)

Seed 4 intermediate checkpoint at 60k steps of MultiBERTs, a BERT model pretrained on English with a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description

MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

# Full repository id as listed on the Hub.
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-4-60k")
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-4-60k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it with the snippet from the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint (a minimal probing sketch is also included at the end of this card).

## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens.

The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.

### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and
                Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and
                Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=multiberts">
    <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
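As a complement to the Limitations and bias section above, here is a minimal sketch of how this checkpoint could be probed with the `fill-mask` pipeline, in the spirit of the bert-base-uncased snippet it points to. It assumes the checkpoint's MLM head loads cleanly through the pipeline; the repository id is taken from this model's Hub metadata.

```python
from transformers import pipeline

# Probe the masked-language-modeling head of this intermediate checkpoint.
unmasker = pipeline("fill-mask", model="MultiBertGunjanPatrick/multiberts-seed-4-60k")

for prompt in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
    print(prompt)
    for prediction in unmasker(prompt, top_k=5):
        print(f"  {prediction['token_str']:<12} {prediction['score']:.3f}")
```

Comparing the top completions for the two prompts gives a quick, informal view of gendered associations at this stage of pretraining; it is not a substitute for a systematic bias evaluation.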