Dataset columns:
- modelId: string (length 4-112)
- sha: string (length 40)
- lastModified: string (length 24)
- tags: sequence
- pipeline_tag: string (29 classes)
- private: bool (1 class)
- author: string (length 2-38)
- config: null
- id: string (length 4-112)
- downloads: float64 (0-36.8M)
- likes: float64 (0-712)
- library_name: string (17 classes)
- __index_level_0__: int64 (0-38.5k)
- readme: string (length 0-186k)
lewtun/minilm-finetuned-emotion
2e1ecc37e5edd7eb71dec436923ad199f57825c6
2021-11-11T20:44:07.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
lewtun
null
lewtun/minilm-finetuned-emotion
50
null
transformers
6,000
--- license: mit tags: - generated_from_trainer datasets: - emotion metrics: - f1 model-index: - name: minilm-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: F1 type: f1 value: 0.9117582218338629 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # minilm-finetuned-emotion This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3891 - F1: 0.9118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.3957 | 1.0 | 250 | 1.0134 | 0.6088 | | 0.8715 | 2.0 | 500 | 0.6892 | 0.8493 | | 0.6085 | 3.0 | 750 | 0.4943 | 0.8920 | | 0.4626 | 4.0 | 1000 | 0.4096 | 0.9078 | | 0.3961 | 5.0 | 1250 | 0.3891 | 0.9118 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.6.0 - Datasets 1.15.1 - Tokenizers 0.10.3
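The card above reports metrics but no inference snippet; a minimal sketch using the `transformers` text-classification pipeline (the example sentence is made up, and the label names depend on the checkpoint's config) might look like this:

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub.
classifier = pipeline("text-classification", model="lewtun/minilm-finetuned-emotion")

# Returns the top predicted emotion label and its score for the input.
print(classifier("I am so happy you came to visit us!"))
```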
mrm8488/flaubert-small-finetuned-movie-review-sentiment-analysis
1765cad0225ce82b945e6a2ce6e6d5ea8e42173e
2021-06-21T10:04:37.000Z
[ "pytorch", "flaubert", "text-classification", "transformers" ]
text-classification
false
mrm8488
null
mrm8488/flaubert-small-finetuned-movie-review-sentiment-analysis
50
null
transformers
6,001
Entry not found
pmthangk09/bert-base-uncased-glue-sst2
2bf098c8b26580282044f6e9a0917731456f1fbb
2021-05-20T02:48:36.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
pmthangk09
null
pmthangk09/bert-base-uncased-glue-sst2
50
null
transformers
6,002
Entry not found
raynardj/ner-chemical-bionlp-bc5cdr-pubmed
30dd3edf4b0e1a052259b6e11308e3ac86d0c503
2021-11-16T03:19:53.000Z
[ "pytorch", "roberta", "token-classification", "en", "dataset:bionlp", "dataset:bc4cdr", "transformers", "ner", "chemical", "bionlp", "bc4cdr", "bioinfomatics", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
raynardj
null
raynardj/ner-chemical-bionlp-bc5cdr-pubmed
50
2
transformers
6,003
--- language: - en tags: - ner - chemical - bionlp - bc4cdr - bioinfomatics license: apache-2.0 datasets: - bionlp - bc4cdr widget: - text: "Serotonin receptor 2A (HTR2A) gene polymorphism predicts treatment response to venlafaxine XR in generalized anxiety disorder." --- # NER to find chemical entities > The model was trained on the bionlp and bc4cdr datasets, starting from this [pubmed-pretrained roberta model](/raynardj/roberta-pubmed) All the labels (the possible token classes) are listed below. ```json {"label2id": { "O": 0, "Chemical": 1, } } ``` Note that the 'B-'/'I-' prefixes were removed from the data labels. ## Suggested template for using the model The ```aggregation_strategy``` argument offered by the Hugging Face pipeline does not fit this model: during training, the loss for trailing subword tokens is discarded (only the first subword token of each word keeps a label other than -100), and the default pipeline cannot reproduce that behaviour, so a custom inference class is provided instead. ```python !pip install forgebox from forgebox.hf.train import NERInference ner = NERInference.from_pretrained("raynardj/ner-chemical-bionlp-bc5cdr-pubmed") a_df = ner.predict(["text1", "text2"]) ``` > Check our NER models for * [gene and gene products](/raynardj/ner-gene-dna-rna-jnlpba-pubmed) * [chemical substance](/raynardj/ner-chemical-bionlp-bc5cdr-pubmed) * [disease](/raynardj/ner-disease-ncbi-bionlp-bc5cdr-pubmed)
rifkat/uztext-3Gb-BPE-Roberta
0c8749478ad426e25029837134ddca4f0cad4ba7
2022-05-06T10:48:06.000Z
[ "pytorch", "roberta", "fill-mask", "uz", "transformers", "mit", "robert", "uzrobert", "uzbek", "cyrillic", "latin", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
rifkat
null
rifkat/uztext-3Gb-BPE-Roberta
50
1
transformers
6,004
--- language: - uz tags: - transformers - mit - robert - uzrobert - uzbek - cyrillic - latin license: apache-2.0 widget: - text: "Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi." example_title: "Latin script" - text: "Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг <mask>, мутафаккири ва давлат арбоби бўлган." example_title: "Cyrillic script" --- <p><b>UzRoBerta model.</b> A pretrained model for Uzbek (Cyrillic and Latin scripts) for masked language modeling and next-sentence prediction. <p><b>How to use.</b> You can use this model directly with a pipeline for masked language modeling: <pre><code class="language-python"> from transformers import pipeline unmasker = pipeline('fill-mask', model='rifkat/uztext-3Gb-BPE-Roberta') unmasker("Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг <mask>, мутафаккири ва давлат арбоби бўлган.") [{'score': 0.5902208685874939, 'sequence': 'Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг шоири, мутафаккири ва давлат арбоби бўлган.', 'token': 28809, 'token_str': ' шоири'}, {'score': 0.08303504437208176, 'sequence': 'Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг устози, мутафаккири ва давлат арбоби бўлган.', 'token': 17484, 'token_str': ' устози'}, {'score': 0.035882771015167236, 'sequence': 'Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг арбоби, мутафаккири ва давлат арбоби бўлган.', 'token': 34552, 'token_str': ' арбоби'}, {'score': 0.03447483479976654, 'sequence': 'Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг асосчиси, мутафаккири ва давлат арбоби бўлган.', 'token': 14034, 'token_str': ' асосчиси'}, {'score': 0.03044942207634449, 'sequence': 'Алишер Навоий – улуғ ўзбек ва бошқа туркий халқларнинг дўсти, мутафаккири ва давлат арбоби бўлган.', 'token': 28100, 'token_str': ' дўсти'}] unmasker("Kuchli yomg‘irlar tufayli bir qator <mask> kuchli sel oqishi kuzatildi.") [{'score': 0.410250186920166, 'sequence': 'Kuchli yomg‘irlar tufayli bir qator hududlarda kuchli sel oqishi kuzatildi.', 'token': 11009, 'token_str': ' hududlarda'}, {'score': 0.2023029774427414, 'sequence': 'Kuchli yomg‘irlar tufayli bir qator tumanlarda kuchli sel oqishi kuzatildi.', 'token': 35370, 'token_str': ' tumanlarda'}, {'score': 0.129830002784729, 'sequence': 'Kuchli yomg‘irlar tufayli bir qator viloyatlarda kuchli sel oqishi kuzatildi.', 'token': 33584, 'token_str': ' viloyatlarda'}, {'score': 0.04539087787270546, 'sequence': 'Kuchli yomg‘irlar tufayli bir qator mamlakatlarda kuchli sel oqishi kuzatildi.', 'token': 19315, 'token_str': ' mamlakatlarda'}, {'score': 0.0369882769882679, 'sequence': 'Kuchli yomg‘irlar tufayli bir qator joylarda kuchli sel oqishi kuzatildi.', 'token': 5853, 'token_str': ' joylarda'}] </code></pre> <p><b>Training data.</b> The UzRoBerta model was pretrained on &asymp;2M news articles (&asymp;3Gb).
smallbenchnlp/bert-small
5796c8eed06465f91a3d9fae1dc3cda4d716d69c
2021-10-14T10:38:23.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
smallbenchnlp
null
smallbenchnlp/bert-small
50
null
transformers
6,005
Small-Bench NLP is a benchmark for small efficient neural language models trained on a single GPU.
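The card above gives no usage example; a minimal fill-mask sketch (assuming the checkpoint uses the standard BERT `[MASK]` token, and with a made-up example sentence) could be:

```python
from transformers import pipeline

# bert-small is tagged fill-mask, so the masked-language-modeling pipeline applies.
unmasker = pipeline("fill-mask", model="smallbenchnlp/bert-small")

# Prints the top candidate tokens for the masked position.
print(unmasker("The capital of France is [MASK]."))
```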
taeminlee/kodialogpt2-base
d123f95f7fea6865022d6f047708635c263011f5
2021-05-23T13:03:30.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
taeminlee
null
taeminlee/kodialogpt2-base
50
null
transformers
6,006
Entry not found
aihijo/gpt2-zh-21k
f570cd42dcb161e0936bcdfceab005ecbfcef217
2022-03-27T14:59:53.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "license:cc-by-nc-sa-4.0" ]
text-generation
false
aihijo
null
aihijo/gpt2-zh-21k
50
null
transformers
6,007
--- license: cc-by-nc-sa-4.0 ---
peterhsu/distilbert-base-uncased-finetuned-squad-d5716d28
7a6bc409aa7e56502ca7542baa0630d4a67f72f4
2022-03-30T12:22:49.000Z
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
false
peterhsu
null
peterhsu/distilbert-base-uncased-finetuned-squad-d5716d28
50
null
transformers
6,008
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
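The card documents the distillation setup and scores but not inference; a minimal extractive-QA sketch (the question and context are made-up placeholders) might look like this:

```python
from transformers import pipeline

# Extractive question answering over a context passage, as in SQuAD v1.1.
qa = pipeline("question-answering", model="peterhsu/distilbert-base-uncased-finetuned-squad-d5716d28")

result = qa(
    question="Which model acts as the teacher?",
    context="A DistilBERT student is fine-tuned on SQuAD v1.1 with a BERT model acting as a teacher.",
)
print(result["answer"], result["score"])
```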
mismayil/kogito-rc-swem
61c6156c43a0fa5cd530784f9d2e07d98a84fdfa
2022-04-28T13:51:17.000Z
[ "pytorch", "transformers", "license:mit" ]
null
false
mismayil
null
mismayil/kogito-rc-swem
50
null
transformers
6,009
--- license: mit ---
TweebankNLP/bertweet-tb2_ewt-pos-tagging
1616a205d6f1953421a242105d38f58d67c82f8a
2022-05-05T00:23:51.000Z
[ "pytorch", "roberta", "token-classification", "arxiv:2201.07281", "transformers", "license:cc-by-nc-4.0", "autotrain_compatible" ]
token-classification
false
TweebankNLP
null
TweebankNLP/bertweet-tb2_ewt-pos-tagging
50
2
transformers
6,010
--- license: cc-by-nc-4.0 --- ## Model Specification - This is the **state-of-the-art Twitter POS tagging model (95.38% accuracy)** on the Tweebank V2 benchmark (also called `Tweebank-NER`), trained on the corpus combining both Tweebank-NER and English-EWT training data. - For more details about the `TweebankNLP` project, please refer to [our paper](https://arxiv.org/pdf/2201.07281.pdf) and the [GitHub](https://github.com/social-machines/TweebankNLP) page. - In the paper, it is referred to as `HuggingFace-BERTweet (TB2+EWT)` in the POS table. ## How to use the model - **PRE-PROCESSING**: when you apply the model to tweets, please make sure that the tweets are preprocessed by the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2_ewt-pos-tagging") model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2_ewt-pos-tagging") ``` ## References If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf): ```bibtex @article{jiang2022tweetnlp, title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis}, author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb}, journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)}, year={2022} } ```
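The snippet above only loads the tokenizer and model; to actually obtain tags, a hedged sketch using the generic token-classification pipeline (the example tweet is made up, and the preprocessing note above still applies) is:

```python
from transformers import pipeline

# Token-classification pipeline wraps tokenization, inference and label mapping.
tagger = pipeline(
    "token-classification",
    model="TweebankNLP/bertweet-tb2_ewt-pos-tagging",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level tags
)
print(tagger("this timeline is wild today !"))
```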
d0r1h/led-base-ilc
08a1d521150602fa95f0f4abdaa6f807c22d1988
2022-05-06T08:17:46.000Z
[ "pytorch", "led", "text2text-generation", "dataset:ilc", "arxiv:2004.05150", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
d0r1h
null
d0r1h/led-base-ilc
50
null
transformers
6,011
--- license: apache-2.0 datasets: ilc tags: - summarization metrics: - rouge widget: - text: "IN THE HIGH COURT OF JUDICATURE AT PATNA CRIMINAL MISCELLANEOUS No. 229121 Arising Out of PS. Case No. 127 Year 2020 Thana DUMRAON District Buxar 1. Ramlal Goswami aged about 44 years Male S o Late Gauri Shankar 2. Dharmshila Devi @ Savita Devi aged about 35 years wife of Ramlal Both resident of village Badka Dhakaich P.S. Krishna Brahm District ... Petitioner s ... Opposite Party s The State of Bihar Appearance : For the Petitioner s For the State CORAM: HONOURABLE MR. JUSTICE AHSANUDDIN AMANULLAH ORAL JUDGMENT Mr. Manoj Kumar with Mr. Anil Kumar Roy Advocates Mr. Ram Sumiran Roy APP The matter has been heard via video conferencing. 2. Heard Mr. Manoj Kumar learned counsel along with Mr. Anil Kumar Roy learned counsel for the petitioners and Mr. Ram Sumiran Roy learned Additional Public Prosecutorfor the State. 3. Learned counsel for the petitioners submitted that he may be permitted to add alias name of petitioner no. 2 which is Savita Devi. 4. Prayer allowed. 5. Let necessary correction be made in the cause title Date : 03 08 2021 Patna High Court CR. MISC. No. 229121 dt.03 08 2021 2 4 by learned counsel for the petitioners through e mode by day after tomorrow. 6. The petitioners apprehend arrest in connection with Dumraon PS Case No. 1220 dated 15.04.2020 instituted under Sections 406 420 467 468 471 448 506 34 of the Indian Penal Code. 7. The allegation against the petitioners is that the informant who is the cousin brother of petitioner no. 1 had bought land through the petitioner no. 1 but he was cheated both with regard to the rates as also that the same piece of land being sold by the petitioners to two different persons. 8. Learned counsel for the petitioners submitted that in the FIR itself it has been stated that the informant had sold his land at a much higher price than the price he was paying for the land which he alleges to have been negotiated by the petitioner no. 1 for him. Further it was submitted that all such dispute relating to money is a purely civil in nature for which criminal case is an abuse of the process of the Court. Learned counsel submitted that the informant being the first cousin of the petitioner no. 1 and having sold his land was very well aware of the ground realities and cannot take a stand that he was ignorant of what was the actual position. Further it was submitted that Patna High Court CR. MISC. No. 229121 dt.03 08 2021 3 4 the petitioners have filed a supplementary affidavit in which a categorical stand has been taken on oath that the petitioners have not sold the same piece of land to two different persons. Learned counsel submitted that the petitioners are simple citizens being husband and wife and have no other criminal antecedent. It was submitted that had the allegation been correct the other person aggrieved would also have filed a case and most importantly neither any name of any person has been taken nor details of any document that the same piece of land was transferred to two persons has been either mentioned or brought on record. 9. Learned APP submitted that the petitioners are alleged to have cheated the informant and have got the same piece of land registered in favour of two persons. 10. 
Having considered the facts and circumstances of the case and submissions of learned counsel for the parties in the event of arrest or surrender before the Court below within six weeks from today the petitioners be released on bail upon furnishing bail bonds of Rs. 25 000 each with two sureties of the like amount each to the satisfaction of the learned Chief Judicial Magistrate Buxar in Dumrao PS Case No. 127 of 2020 subject to the conditions laid down in Patna High Court CR. MISC. No. 229121 dt.03 08 2021 4 4 Section 438(2) of the Code of Criminal Procedure 1973 and furtherthat one of the bailors shall be a close relative of the petitioners andthat the petitioners shall cooperate with the Court and the police prosecution. Failure to cooperate shall lead to cancellation of their bail bonds. 11. It shall also be open for the prosecution to bring any violation of the foregoing conditions of bail by the petitioners to the notice of the Court concerned which shall take immediate action on the same after giving opportunity of hearing to the aforementioned terms. 12. The petition stands disposed of Anjani " --- # Longformer Encoder-Decoder (LED) fine-tuned on ILC This model is a fine-tuned version of [led-base-16384](https://huggingface.co/allenai/led-base-16384) on the [ILC](https://huggingface.co/datasets/d0r1h/ILC) dataset. As described in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan, *led-base-16384* was initialized from [*bart-base*](https://huggingface.co/facebook/bart-base) since both models share the exact same architecture. To be able to process 16K tokens, *bart-base*'s position embedding matrix was simply copied 16 times. ```Python import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "d0r1h/led-base-ilc" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, return_dict_in_generate=True).to(device) case = "......." input_ids = tokenizer(case, return_tensors="pt").input_ids.to(device) global_attention_mask = torch.zeros_like(input_ids) global_attention_mask[:, 0] = 1 sequences = model.generate(input_ids, global_attention_mask=global_attention_mask).sequences summary = tokenizer.batch_decode(sequences, skip_special_tokens=True) ``` ## Evaluation results When the model is used for summarizing ILC documents (10 samples), it achieves the following results: | Model | rouge1-f | rouge1-p | rouge2-f | rouge2-p | rougeL-f | rougeL-p | |:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:| | led-ilc | **42** | **47** | **22** | **24** | **39** | **44** | | led-base | 3 | 39 | 1 | 21 | 3 | 37 | [This notebook](https://colab.research.google.com/github/d0r1h/Notebooks/blob/main/NLP/Summarization/led_base_ilc_summarization.ipynb) shows how *led* can effectively be used for downstream tasks such as summarization.
allenai/tk-instruct-11b-def-pos
655f68f5a6cf3ae9685688e109e714a0596fa380
2022-05-27T06:29:13.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:natural instructions v2.0", "arxiv:1910.10683", "arxiv:2204.07705", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
allenai
null
allenai/tk-instruct-11b-def-pos
50
null
transformers
6,012
--- language: en license: apache-2.0 datasets: - natural instructions v2.0 --- # Model description Tk-Instruct is a series of encoder-decoder Transformer models that are trained to solve various NLP tasks by following in-context instructions (plain language task definitions, k-shot examples, explanations, etc). Built upon the pre-trained [T5 models](https://arxiv.org/abs/1910.10683), they are fine-tuned on a large number of tasks & instructions that are collected in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. This enables the model to not only process the training tasks, but also generalize to many unseen tasks without further parameter updates. More resources for using the model: - **Paper**: [link](https://arxiv.org/abs/2204.07705) - **Code repository**: [Tk-Instruct](https://github.com/yizhongw/Tk-Instruct) - **Official Website**: [Natural Instructions](https://instructions.apps.allenai.org/) - **All released models**: [allenai/tk-instruct](https://huggingface.co/models?search=allenai/tk-instruct) ## Intended uses & limitations Tk-Instruct can be used to do many NLP tasks by following instructions. ### How to use When instructing the model, the task definition, demonstration examples, or explanations should be prepended to the original input and fed into the model. You can easily try Tk-Instruct models as follows: ```python >>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM >>> tokenizer = AutoTokenizer.from_pretrained("allenai/tk-instruct-3b-def") >>> model = AutoModelForSeq2SeqLM.from_pretrained("allenai/tk-instruct-3b-def") >>> input_ids = tokenizer.encode( "Definition: return the currency of the given country. Now complete the following example - Input: India. Output:", return_tensors="pt") >>> output = model.generate(input_ids, max_length=10) >>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'Indian Rupee' >>> input_ids = tokenizer.encode( "Definition: negate the following sentence. Input: John went to school. Output:", return_tensors="pt") >>> output = model.generate(input_ids, max_length=10) >>> output = tokenizer.decode(output[0], skip_special_tokens=True) # model should output 'John did not go to shool.' ``` ### Limitations We are still working on understanding the behaviors of these models, but here are several issues we have found: - Models are generally sensitive to the instruction. Sometimes rewording the instruction can lead to very different output. - Models are not always compliant with the instruction. Sometimes the model doesn't follow your instruction (e.g., when you ask the model to generate one sentence, it might still generate one word or a long story). - Models might totally fail on some tasks. If you find serious issues or any interesting results, you are welcome to share them with us! ## Training data Tk-Instruct is trained using the tasks & instructions in the [Natural Instructions benchmark](https://github.com/allenai/natural-instructions), which contains 1600+ tasks in 70+ broad categories in total. We follow the official train/test split. Tk-Instruct model series were trained using 757 tasks, and mTk-Instruct series were trained using 1271 tasks (including some non-English tasks). The training tasks are in 64 broad categories, such as text categorization / question answering / sentiment analysis / summarization / grammar error detection / dialogue generation / etc. The other 12 categories are selected for evaluation.
## Training procedure All our models are initialized from either T5 models or mT5 models. Because generating the output can be regarded as a form of language modeling, we used their [LM adapted version](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). All data is converted into a text-to-text format, and models are fine-tuned to maximize the likelihood of the output sequence. Our [released models](https://huggingface.co/models?search=allenai/tk-instruct) are in different sizes, and each of them was trained with a specific type of instruction encoding. For instance, `tk-instruct-3b-def-pos` was initialized from [t5-xl-lm-adapt](https://huggingface.co/google/t5-xl-lm-adapt), and it saw task definition & 2 positive examples as the instruction during training time. Although they are trained with only one type of instruction encodings, we found they can usually work with other type of encodings at test time (see more in our paper). ### BibTeX entry and citation info ```bibtex @article{wang2022benchmarking, title={Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks}, author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and A. Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and M. Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddharth Deepak Mishra and Sujan C. Reddy and Sumanta Patro and Tanay Dixit and Xu-dong Shen and Chitta Baral and Yejin Choi and Hannaneh Hajishirzi and Noah A. Smith and Daniel Khashabi}, year={2022}, archivePrefix={arXiv}, eprint={2204.07705}, primaryClass={cs.CL}, } ```
samrawal/medical-sentence-tokenizer
9e006a3fbed6747fcf36ff6530b8fdbe778243f3
2022-05-30T19:12:19.000Z
[ "pytorch", "bert", "token-classification", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
samrawal
null
samrawal/medical-sentence-tokenizer
50
null
transformers
6,013
--- license: apache-2.0 --- `clinitokenizer` is a sentence tokenizer for clinical text: it splits unstructured text from clinical documents (such as Electronic Medical Records) into individual sentences. To use this model, see the [clinitokenizer repository](https://github.com/clinisift/clinitokenizer). General English sentence tokenizers are often unable to correctly parse medical abbreviations, jargon, and other conventions commonly used in medical records (see the "Motivating Examples" section of the repository). clinitokenizer is specifically trained on medical record data and can perform better in these situations (conversely, for non-domain-specific use, more general sentence tokenizers may yield better results). The model has been trained on multiple datasets provided by [i2b2 (now n2c2)](https://n2c2.dbmi.hms.harvard.edu). Please visit the n2c2 site to request access to the dataset.
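The supported interface is the `clinitokenizer` package linked above; if you only want to inspect the raw checkpoint, a minimal sketch of loading it directly with `transformers` (an assumption, not the documented usage) would be:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumption: direct loading of the underlying token-classification checkpoint;
# the documented path is the clinitokenizer package referenced in the card above.
tokenizer = AutoTokenizer.from_pretrained("samrawal/medical-sentence-tokenizer")
model = AutoModelForTokenClassification.from_pretrained("samrawal/medical-sentence-tokenizer")
```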
smc/PANDA_ConvNeXT_K
1d5430baeef533b5c3669798c106477298502cad
2022-05-26T21:33:39.000Z
[ "pytorch", "convnext", "image-classification", "transformers", "model-index" ]
image-classification
false
smc
null
smc/PANDA_ConvNeXT_K
50
1
transformers
6,014
--- tags: - image-classification - pytorch metrics: - accuracy - Cohen's Kappa model-index: - name: PANDA_ConvNeXT_K results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.6058823466300964 - name: Quadratic Cohen's Kappa type: Quadratic Cohen's Kappa value: 0.6207689046859741 --- # PANDA_ConvNeXT_K An attempt to use a ConvNeXT for medical image classification (ISUP grading in prostate histopathology images). Currently uses a tiled and concatenated WSI as input ISUP 0: <img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c02d3bb3a62519b31c63d0301c6843e_0.jpeg"> ISUP 1: <img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0cee71ab57422e04f76e09ef2186fcd5_1.jpeg"> ISUP 2: <img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/00bbc1482301d16de3ff63238cfd0b34_2.jpeg"> ISUP 3: <img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c5c2d16c0f2e399b7be641e7e7f66d9_3.jpeg"> ISUP 4: <img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/0c88d7c7033e2048b1068e208b105270_4.jpeg"> ISUP 5: <img width="256" height="256" src="https://huggingface.co/smc/PANDA_ViT/resolve/main/00c15b23b30a5ba061358d9641118904_5.jpeg">
bilalahmed15/Urdu_repo
ace2bdd81b2edbe366ffac7049cef94d3fc02a69
2022-06-02T21:01:04.000Z
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
bilalahmed15
null
bilalahmed15/Urdu_repo
50
null
transformers
6,015
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-turkish-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7532 - Wer: 0.4020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.9542 | 1.96 | 400 | 1.5737 | 0.8827 | | 0.8596 | 3.92 | 800 | 0.7296 | 0.5696 | | 0.4729 | 5.88 | 1200 | 0.6004 | 0.4934 | | 0.3364 | 7.84 | 1600 | 0.5776 | 0.4656 | | 0.2684 | 9.8 | 2000 | 0.6178 | 0.4563 | | 0.2143 | 11.76 | 2400 | 0.6408 | 0.4690 | | 0.1744 | 13.72 | 2800 | 0.6704 | 0.4573 | | 0.1458 | 15.68 | 3200 | 0.7015 | 0.4484 | | 0.1201 | 17.65 | 3600 | 0.7151 | 0.4228 | | 0.104 | 19.61 | 4000 | 0.7123 | 0.4195 | | 0.0887 | 21.57 | 4400 | 0.7102 | 0.4234 | | 0.0807 | 23.53 | 4800 | 0.7561 | 0.4132 | | 0.0697 | 25.49 | 5200 | 0.7435 | 0.4075 | | 0.0611 | 27.45 | 5600 | 0.7465 | 0.4034 | | 0.0556 | 29.41 | 6000 | 0.7532 | 0.4020 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
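The card documents training but not inference; a minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder, and the input should be 16 kHz audio) might be:

```python
from transformers import pipeline

# The automatic-speech-recognition pipeline handles feature extraction and CTC decoding.
asr = pipeline("automatic-speech-recognition", model="bilalahmed15/Urdu_repo")

# "sample.wav" is a placeholder path; decoding local files requires ffmpeg.
print(asr("sample.wav")["text"])
```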
Rebreak/bert_news_class
8bc8f312c13d2f8be961cbd7c9d4f7309511a758
2022-06-10T08:07:21.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:mit" ]
text-classification
false
Rebreak
null
Rebreak/bert_news_class
50
null
transformers
6,016
--- license: mit --- A classifier for news that affects the stock price within the next 10 minutes.
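Since the card gives no usage or label documentation, a hedged inference sketch (the example headline is made up, and the meaning of the returned labels is not documented in the card) is:

```python
from transformers import pipeline

# Text-classification pipeline over the news classifier checkpoint.
classifier = pipeline("text-classification", model="Rebreak/bert_news_class")

# The label-to-meaning mapping is not documented in the card above.
print(classifier("Company X beats quarterly earnings expectations."))
```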
huggingtweets/dril-tacticalmaid
f2e2cea517d4a627226a01612fa88a21800dd135
2022-07-01T12:50:55.000Z
[ "pytorch", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/dril-tacticalmaid
50
null
transformers
6,017
--- language: en thumbnail: http://www.huggingtweets.com/dril-tacticalmaid/1656679850409/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1498996796093509632/Z7VwFzOJ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Maid POLadin 🎪 💙💛</div> <div style="text-align: center; font-size: 14px;">@dril-tacticalmaid</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Maid POLadin 🎪 💙💛. | Data | wint | Maid POLadin 🎪 💙💛 | | --- | --- | --- | | Tweets downloaded | 3231 | 3225 | | Retweets | 487 | 2081 | | Short tweets | 295 | 290 | | Tweets kept | 2449 | 854 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/20brgjpa/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-tacticalmaid's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ev3hr7n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ev3hr7n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-tacticalmaid') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Danitg95/autotrain-kaggle-effective-arguments-1086739296
5ad53906ce06a39872b8dce0dd8d812dbf0e89e4
2022-07-04T21:53:10.000Z
[ "pytorch", "distilbert", "text-classification", "en", "dataset:Danitg95/autotrain-data-kaggle-effective-arguments", "transformers", "autotrain", "co2_eq_emissions" ]
text-classification
false
Danitg95
null
Danitg95/autotrain-kaggle-effective-arguments-1086739296
50
null
transformers
6,018
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - Danitg95/autotrain-data-kaggle-effective-arguments co2_eq_emissions: 5.2497206864306065 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1086739296 - CO2 Emissions (in grams): 5.2497206864306065 ## Validation Metrics - Loss: 0.744236171245575 - Accuracy: 0.6719238613188308 - Macro F1: 0.5450301061253738 - Micro F1: 0.6719238613188308 - Weighted F1: 0.6349879540623229 - Macro Precision: 0.6691326843926052 - Micro Precision: 0.6719238613188308 - Weighted Precision: 0.6706209016443158 - Macro Recall: 0.5426627824078865 - Micro Recall: 0.6719238613188308 - Weighted Recall: 0.6719238613188308 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Danitg95/autotrain-kaggle-effective-arguments-1086739296 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Danitg95/autotrain-kaggle-effective-arguments-1086739296", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Danitg95/autotrain-kaggle-effective-arguments-1086739296", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
SafeTorpedo/DialoGPT-small-MichaelBot
7d9358a9d288df252e498ba830c295bef94aef57
2022-07-08T11:38:03.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
SafeTorpedo
null
SafeTorpedo/DialoGPT-small-MichaelBot
50
null
transformers
6,019
--- tags: - conversational --- # Michael from Office DialoGPT Model
ignatius/cyT5-small
91c22f51996710d97694120bd7ab997ac9ce0a1b
2022-07-19T15:02:01.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:cc-by-4.0", "autotrain_compatible" ]
text2text-generation
false
ignatius
null
ignatius/cyT5-small
50
null
transformers
6,020
--- license: cc-by-4.0 --- `cyT5-small` is a lightweight (alpha-version) Welsh T5 model extracted from the `google/mt5-small` model and fine-tuned only on the [Welsh summarization dataset](https://huggingface.co/datasets/ignatius/welsh_summarization). Further development is ongoing and updates will be shared soon.
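A minimal summarization sketch for this checkpoint, assuming a plain text-to-text interface (no task prefix is documented in the card, and the input text is a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ignatius/cyT5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("ignatius/cyT5-small")

# Welsh input text to be summarized (placeholder string).
text = "..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```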
jonatasgrosman/exp_w2v2t_pt_vp-it_s529
6178739c77d95a53efacd1957580fd1d99540627
2022-07-11T20:21:11.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "license:apache-2.0" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/exp_w2v2t_pt_vp-it_s529
50
null
transformers
6,021
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-it_s529 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
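Following the HuggingSound tool mentioned above, a minimal transcription sketch (file paths are placeholders; the audio must be sampled at 16 kHz, and the API is assumed to match the HuggingSound project's README):

```python
from huggingsound import SpeechRecognitionModel

# HuggingSound wraps the fine-tuned wav2vec2 checkpoint for inference.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_vp-it_s529")

# Placeholder paths; each file is transcribed independently.
transcriptions = model.transcribe(["/path/to/file.mp3", "/path/to/another_file.wav"])
```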
jordyvl/biobert-base-cased-v1.2_ncbi_disease-sm-first-ner
7dfc679be20aa2605346028f1fa68ffa7b6c1634
2022-07-20T09:26:17.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "dataset:ncbi_disease", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
jordyvl
null
jordyvl/biobert-base-cased-v1.2_ncbi_disease-sm-first-ner
50
null
transformers
6,022
--- tags: - generated_from_trainer datasets: - ncbi_disease metrics: - precision - recall - f1 - accuracy model-index: - name: biobert-base-cased-v1.2_ncbi_disease-sm-first-ner results: - task: name: Token Classification type: token-classification dataset: name: ncbi_disease type: ncbi_disease args: ncbi_disease metrics: - name: Precision type: precision value: 0.8522139160437032 - name: Recall type: recall value: 0.8826682549136391 - name: F1 type: f1 value: 0.8671737858396723 - name: Accuracy type: accuracy value: 0.9826972482743678 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-base-cased-v1.2_ncbi_disease-sm-first-ner This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the ncbi_disease dataset. It achieves the following results on the evaluation set: - Loss: 0.0865 - Precision: 0.8522 - Recall: 0.8827 - F1: 0.8672 - Accuracy: 0.9827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0858 | 1.0 | 1359 | 0.0985 | 0.7929 | 0.8005 | 0.7967 | 0.9730 | | 0.042 | 2.0 | 2718 | 0.0748 | 0.8449 | 0.8856 | 0.8648 | 0.9820 | | 0.0124 | 3.0 | 4077 | 0.0865 | 0.8522 | 0.8827 | 0.8672 | 0.9827 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
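The card reports NER metrics but no inference snippet; a minimal sketch using the token-classification pipeline (the example sentence is made up, and the label scheme comes from the checkpoint's config) could be:

```python
from transformers import pipeline

# Disease-mention NER over biomedical text.
ner = pipeline(
    "token-classification",
    model="jordyvl/biobert-base-cased-v1.2_ncbi_disease-sm-first-ner",
    aggregation_strategy="simple",  # group sub-word tokens into entity spans
)
print(ner("The patient was diagnosed with non-small cell lung cancer."))
```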
scales-okn/entity-resolution
3b9072a2f6108ad8ff54975b5acdc6f0faea656c
2022-07-26T17:11:07.000Z
[ "pytorch", "deberta-v2", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
scales-okn
null
scales-okn/entity-resolution
50
null
transformers
6,023
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: entity-resolution results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # entity-resolution This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1726 - Accuracy: 0.9548 - F1: 0.8235 - Precision: 0.9130 - Recall: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
derwahnsinn/gpt2-mediumForbiddenToyStory
99206abd7dcacf5c94694481936a03808eac48bf
2022-07-28T13:25:44.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-generation
false
derwahnsinn
null
derwahnsinn/gpt2-mediumForbiddenToyStory
50
null
transformers
6,024
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-mediumForbiddenToyStory results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-mediumForbiddenToyStory This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.9980 - eval_runtime: 96.8281 - eval_samples_per_second: 34.535 - eval_steps_per_second: 4.317 - epoch: 7.53 - step: 3147 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 29 ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
RAYZ/macbert
c268a598b76662e73ad2970d42b961d4dc7a9480
2022-07-29T19:24:46.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
RAYZ
null
RAYZ/macbert
50
null
transformers
6,025
Entry not found
Hate-speech-CNERG/dehatebert-mono-portugese
a212b2dd7e8e3d953787a49d92c469b30c6da6ba
2021-09-25T13:58:01.000Z
[ "pytorch", "jax", "bert", "text-classification", "pt", "arxiv:2004.06465", "transformers", "license:apache-2.0" ]
text-classification
false
Hate-speech-CNERG
null
Hate-speech-CNERG/dehatebert-mono-portugese
49
2
transformers
6,026
--- language: pt license: apache-2.0 --- This model is used for detecting **hate speech** in the **Portuguese language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Portuguese-language data. It is fine-tuned from the multilingual BERT model. The model is trained with different learning rates and the best validation score achieved is 0.716119 for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT) ### For more details about our paper Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020. ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @article{aluru2020deep, title={Deep Learning Models for Multilingual Hate Speech Detection}, author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2004.06465}, year={2020} } ~~~
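The card gives no usage snippet; a minimal sketch with the text-classification pipeline (the Portuguese example sentence is made up, and the label names depend on the checkpoint's config) is:

```python
from transformers import pipeline

# Hate-speech classifier for Portuguese text.
classifier = pipeline("text-classification", model="Hate-speech-CNERG/dehatebert-mono-portugese")

# Returns a label and score; label semantics come from the checkpoint's config.
print(classifier("Eu não concordo com essa decisão."))
```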
Helsinki-NLP/opus-mt-en-ine
1fc70cd9577f55f77c5177f1f1769068c1f8563d
2021-01-18T08:09:54.000Z
[ "pytorch", "marian", "text2text-generation", "en", "ca", "es", "os", "ro", "fy", "cy", "sc", "is", "yi", "lb", "an", "sq", "fr", "ht", "rm", "ps", "af", "uk", "sl", "lt", "bg", "be", "gd", "si", "br", "mk", "or", "mr", "ru", "fo", "co", "oc", "pl", "gl", "nb", "bn", "id", "hy", "da", "gv", "nl", "pt", "hi", "as", "kw", "ga", "sv", "gu", "wa", "lv", "el", "it", "hr", "ur", "nn", "de", "cs", "ine", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-ine
49
null
transformers
6,027
--- language: - en - ca - es - os - ro - fy - cy - sc - is - yi - lb - an - sq - fr - ht - rm - ps - af - uk - sl - lt - bg - be - gd - si - br - mk - or - mr - ru - fo - co - oc - pl - gl - nb - bn - id - hy - da - gv - nl - pt - hi - as - kw - ga - sv - gu - wa - lv - el - it - hr - ur - nn - de - cs - ine tags: - translation license: apache-2.0 --- ### eng-ine * source group: English * target group: Indo-European languages * OPUS readme: [eng-ine](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md) * model: transformer * source language(s): eng * target language(s): afr aln ang_Latn arg asm ast awa bel bel_Latn ben bho bos_Latn bre bul bul_Latn cat ces cor cos csb_Latn cym dan deu dsb egl ell enm_Latn ext fao fra frm_Latn frr fry gcf_Latn gla gle glg glv gom gos got_Goth grc_Grek gsw guj hat hif_Latn hin hrv hsb hye ind isl ita jdt_Cyrl ksh kur_Arab kur_Latn lad lad_Latn lat_Latn lav lij lit lld_Latn lmo ltg ltz mai mar max_Latn mfe min mkd mwl nds nld nno nob nob_Hebr non_Latn npi oci ori orv_Cyrl oss pan_Guru pap pdc pes pes_Latn pes_Thaa pms pnb pol por prg_Latn pus roh rom ron rue rus san_Deva scn sco sgs sin slv snd_Arab spa sqi srp_Cyrl srp_Latn stq swe swg tgk_Cyrl tly_Latn tmw_Latn ukr urd vec wln yid zlm_Latn zsm_Latn zza * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2014-enghin.eng.hin | 6.2 | 0.317 | | newsdev2016-enro-engron.eng.ron | 22.1 | 0.525 | | newsdev2017-enlv-englav.eng.lav | 17.4 | 0.486 | | newsdev2019-engu-engguj.eng.guj | 6.5 | 0.303 | | newsdev2019-enlt-englit.eng.lit | 14.9 | 0.476 | | newsdiscussdev2015-enfr-engfra.eng.fra | 26.4 | 0.547 | | newsdiscusstest2015-enfr-engfra.eng.fra | 30.0 | 0.575 | | newssyscomb2009-engces.eng.ces | 14.7 | 0.442 | | newssyscomb2009-engdeu.eng.deu | 16.7 | 0.487 | | newssyscomb2009-engfra.eng.fra | 24.8 | 0.547 | | newssyscomb2009-engita.eng.ita | 25.2 | 0.562 | | newssyscomb2009-engspa.eng.spa | 27.0 | 0.554 | | news-test2008-engces.eng.ces | 13.0 | 0.417 | | news-test2008-engdeu.eng.deu | 17.4 | 0.480 | | news-test2008-engfra.eng.fra | 22.3 | 0.519 | | news-test2008-engspa.eng.spa | 24.9 | 0.532 | | newstest2009-engces.eng.ces | 13.6 | 0.432 | | newstest2009-engdeu.eng.deu | 16.6 | 0.482 | | newstest2009-engfra.eng.fra | 23.5 | 0.535 | | newstest2009-engita.eng.ita | 25.5 | 0.561 | | newstest2009-engspa.eng.spa | 26.3 | 0.551 | | newstest2010-engces.eng.ces | 14.2 | 0.436 | | newstest2010-engdeu.eng.deu | 18.3 | 0.492 | | newstest2010-engfra.eng.fra | 25.7 | 0.550 | | newstest2010-engspa.eng.spa | 30.5 | 0.578 | | newstest2011-engces.eng.ces | 15.1 | 0.439 | | newstest2011-engdeu.eng.deu | 17.1 | 0.478 | | newstest2011-engfra.eng.fra | 28.0 | 0.569 | | newstest2011-engspa.eng.spa | 31.9 | 0.580 | | newstest2012-engces.eng.ces | 13.6 | 0.418 | | newstest2012-engdeu.eng.deu | 17.0 | 0.475 | | newstest2012-engfra.eng.fra | 26.1 | 0.553 | | newstest2012-engrus.eng.rus | 21.4 
| 0.506 | | newstest2012-engspa.eng.spa | 31.4 | 0.577 | | newstest2013-engces.eng.ces | 15.3 | 0.438 | | newstest2013-engdeu.eng.deu | 20.3 | 0.501 | | newstest2013-engfra.eng.fra | 26.0 | 0.540 | | newstest2013-engrus.eng.rus | 16.1 | 0.449 | | newstest2013-engspa.eng.spa | 28.6 | 0.555 | | newstest2014-hien-enghin.eng.hin | 9.5 | 0.344 | | newstest2015-encs-engces.eng.ces | 14.8 | 0.440 | | newstest2015-ende-engdeu.eng.deu | 22.6 | 0.523 | | newstest2015-enru-engrus.eng.rus | 18.8 | 0.483 | | newstest2016-encs-engces.eng.ces | 16.8 | 0.457 | | newstest2016-ende-engdeu.eng.deu | 26.2 | 0.555 | | newstest2016-enro-engron.eng.ron | 21.2 | 0.510 | | newstest2016-enru-engrus.eng.rus | 17.6 | 0.471 | | newstest2017-encs-engces.eng.ces | 13.6 | 0.421 | | newstest2017-ende-engdeu.eng.deu | 21.5 | 0.516 | | newstest2017-enlv-englav.eng.lav | 13.0 | 0.452 | | newstest2017-enru-engrus.eng.rus | 18.7 | 0.486 | | newstest2018-encs-engces.eng.ces | 13.5 | 0.425 | | newstest2018-ende-engdeu.eng.deu | 29.8 | 0.581 | | newstest2018-enru-engrus.eng.rus | 16.1 | 0.472 | | newstest2019-encs-engces.eng.ces | 14.8 | 0.435 | | newstest2019-ende-engdeu.eng.deu | 26.6 | 0.554 | | newstest2019-engu-engguj.eng.guj | 6.9 | 0.313 | | newstest2019-enlt-englit.eng.lit | 10.6 | 0.429 | | newstest2019-enru-engrus.eng.rus | 17.5 | 0.452 | | Tatoeba-test.eng-afr.eng.afr | 52.1 | 0.708 | | Tatoeba-test.eng-ang.eng.ang | 5.1 | 0.131 | | Tatoeba-test.eng-arg.eng.arg | 1.2 | 0.099 | | Tatoeba-test.eng-asm.eng.asm | 2.9 | 0.259 | | Tatoeba-test.eng-ast.eng.ast | 14.1 | 0.408 | | Tatoeba-test.eng-awa.eng.awa | 0.3 | 0.002 | | Tatoeba-test.eng-bel.eng.bel | 18.1 | 0.450 | | Tatoeba-test.eng-ben.eng.ben | 13.5 | 0.432 | | Tatoeba-test.eng-bho.eng.bho | 0.3 | 0.003 | | Tatoeba-test.eng-bre.eng.bre | 10.4 | 0.318 | | Tatoeba-test.eng-bul.eng.bul | 38.7 | 0.592 | | Tatoeba-test.eng-cat.eng.cat | 42.0 | 0.633 | | Tatoeba-test.eng-ces.eng.ces | 32.3 | 0.546 | | Tatoeba-test.eng-cor.eng.cor | 0.5 | 0.079 | | Tatoeba-test.eng-cos.eng.cos | 3.1 | 0.148 | | Tatoeba-test.eng-csb.eng.csb | 1.4 | 0.216 | | Tatoeba-test.eng-cym.eng.cym | 22.4 | 0.470 | | Tatoeba-test.eng-dan.eng.dan | 49.7 | 0.671 | | Tatoeba-test.eng-deu.eng.deu | 31.7 | 0.554 | | Tatoeba-test.eng-dsb.eng.dsb | 1.1 | 0.139 | | Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.089 | | Tatoeba-test.eng-ell.eng.ell | 42.7 | 0.640 | | Tatoeba-test.eng-enm.eng.enm | 3.5 | 0.259 | | Tatoeba-test.eng-ext.eng.ext | 6.4 | 0.235 | | Tatoeba-test.eng-fao.eng.fao | 6.6 | 0.285 | | Tatoeba-test.eng-fas.eng.fas | 5.7 | 0.257 | | Tatoeba-test.eng-fra.eng.fra | 38.4 | 0.595 | | Tatoeba-test.eng-frm.eng.frm | 0.9 | 0.149 | | Tatoeba-test.eng-frr.eng.frr | 8.4 | 0.145 | | Tatoeba-test.eng-fry.eng.fry | 16.5 | 0.411 | | Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.098 | | Tatoeba-test.eng-gla.eng.gla | 11.6 | 0.361 | | Tatoeba-test.eng-gle.eng.gle | 32.5 | 0.546 | | Tatoeba-test.eng-glg.eng.glg | 38.4 | 0.602 | | Tatoeba-test.eng-glv.eng.glv | 23.1 | 0.418 | | Tatoeba-test.eng-gos.eng.gos | 0.7 | 0.137 | | Tatoeba-test.eng-got.eng.got | 0.2 | 0.010 | | Tatoeba-test.eng-grc.eng.grc | 0.0 | 0.005 | | Tatoeba-test.eng-gsw.eng.gsw | 0.9 | 0.108 | | Tatoeba-test.eng-guj.eng.guj | 20.8 | 0.391 | | Tatoeba-test.eng-hat.eng.hat | 34.0 | 0.537 | | Tatoeba-test.eng-hbs.eng.hbs | 33.7 | 0.567 | | Tatoeba-test.eng-hif.eng.hif | 2.8 | 0.269 | | Tatoeba-test.eng-hin.eng.hin | 15.6 | 0.437 | | Tatoeba-test.eng-hsb.eng.hsb | 5.4 | 0.320 | | Tatoeba-test.eng-hye.eng.hye | 17.4 | 0.426 | | Tatoeba-test.eng-isl.eng.isl | 17.4 | 
0.436 | | Tatoeba-test.eng-ita.eng.ita | 40.4 | 0.636 | | Tatoeba-test.eng-jdt.eng.jdt | 6.4 | 0.008 | | Tatoeba-test.eng-kok.eng.kok | 6.6 | 0.005 | | Tatoeba-test.eng-ksh.eng.ksh | 0.8 | 0.123 | | Tatoeba-test.eng-kur.eng.kur | 10.2 | 0.209 | | Tatoeba-test.eng-lad.eng.lad | 0.8 | 0.163 | | Tatoeba-test.eng-lah.eng.lah | 0.2 | 0.001 | | Tatoeba-test.eng-lat.eng.lat | 9.4 | 0.372 | | Tatoeba-test.eng-lav.eng.lav | 30.3 | 0.559 | | Tatoeba-test.eng-lij.eng.lij | 1.0 | 0.130 | | Tatoeba-test.eng-lit.eng.lit | 25.3 | 0.560 | | Tatoeba-test.eng-lld.eng.lld | 0.4 | 0.139 | | Tatoeba-test.eng-lmo.eng.lmo | 0.6 | 0.108 | | Tatoeba-test.eng-ltz.eng.ltz | 18.1 | 0.388 | | Tatoeba-test.eng-mai.eng.mai | 17.2 | 0.464 | | Tatoeba-test.eng-mar.eng.mar | 18.0 | 0.451 | | Tatoeba-test.eng-mfe.eng.mfe | 81.0 | 0.899 | | Tatoeba-test.eng-mkd.eng.mkd | 37.6 | 0.587 | | Tatoeba-test.eng-msa.eng.msa | 27.7 | 0.519 | | Tatoeba-test.eng.multi | 32.6 | 0.539 | | Tatoeba-test.eng-mwl.eng.mwl | 3.8 | 0.134 | | Tatoeba-test.eng-nds.eng.nds | 14.3 | 0.401 | | Tatoeba-test.eng-nep.eng.nep | 0.5 | 0.002 | | Tatoeba-test.eng-nld.eng.nld | 44.0 | 0.642 | | Tatoeba-test.eng-non.eng.non | 0.7 | 0.118 | | Tatoeba-test.eng-nor.eng.nor | 42.7 | 0.623 | | Tatoeba-test.eng-oci.eng.oci | 7.2 | 0.295 | | Tatoeba-test.eng-ori.eng.ori | 2.7 | 0.257 | | Tatoeba-test.eng-orv.eng.orv | 0.2 | 0.008 | | Tatoeba-test.eng-oss.eng.oss | 2.9 | 0.264 | | Tatoeba-test.eng-pan.eng.pan | 7.4 | 0.337 | | Tatoeba-test.eng-pap.eng.pap | 48.5 | 0.656 | | Tatoeba-test.eng-pdc.eng.pdc | 1.8 | 0.145 | | Tatoeba-test.eng-pms.eng.pms | 0.7 | 0.136 | | Tatoeba-test.eng-pol.eng.pol | 31.1 | 0.563 | | Tatoeba-test.eng-por.eng.por | 37.0 | 0.605 | | Tatoeba-test.eng-prg.eng.prg | 0.2 | 0.100 | | Tatoeba-test.eng-pus.eng.pus | 1.0 | 0.134 | | Tatoeba-test.eng-roh.eng.roh | 2.3 | 0.236 | | Tatoeba-test.eng-rom.eng.rom | 7.8 | 0.340 | | Tatoeba-test.eng-ron.eng.ron | 34.3 | 0.585 | | Tatoeba-test.eng-rue.eng.rue | 0.2 | 0.010 | | Tatoeba-test.eng-rus.eng.rus | 29.6 | 0.526 | | Tatoeba-test.eng-san.eng.san | 2.4 | 0.125 | | Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.079 | | Tatoeba-test.eng-sco.eng.sco | 33.6 | 0.562 | | Tatoeba-test.eng-sgs.eng.sgs | 3.4 | 0.114 | | Tatoeba-test.eng-sin.eng.sin | 9.2 | 0.349 | | Tatoeba-test.eng-slv.eng.slv | 15.6 | 0.334 | | Tatoeba-test.eng-snd.eng.snd | 9.1 | 0.324 | | Tatoeba-test.eng-spa.eng.spa | 43.4 | 0.645 | | Tatoeba-test.eng-sqi.eng.sqi | 39.0 | 0.621 | | Tatoeba-test.eng-stq.eng.stq | 10.8 | 0.373 | | Tatoeba-test.eng-swe.eng.swe | 49.9 | 0.663 | | Tatoeba-test.eng-swg.eng.swg | 0.7 | 0.137 | | Tatoeba-test.eng-tgk.eng.tgk | 6.4 | 0.346 | | Tatoeba-test.eng-tly.eng.tly | 0.5 | 0.055 | | Tatoeba-test.eng-ukr.eng.ukr | 31.4 | 0.536 | | Tatoeba-test.eng-urd.eng.urd | 11.1 | 0.389 | | Tatoeba-test.eng-vec.eng.vec | 1.3 | 0.110 | | Tatoeba-test.eng-wln.eng.wln | 6.8 | 0.233 | | Tatoeba-test.eng-yid.eng.yid | 5.8 | 0.295 | | Tatoeba-test.eng-zza.eng.zza | 0.8 | 0.086 | ### System Info: - hf_name: eng-ine - source_languages: eng - target_languages: ine - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-ine/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ca', 'es', 'os', 'ro', 'fy', 'cy', 'sc', 'is', 'yi', 'lb', 'an', 'sq', 'fr', 'ht', 'rm', 'ps', 'af', 'uk', 'sl', 'lt', 'bg', 'be', 'gd', 'si', 'br', 'mk', 'or', 'mr', 'ru', 'fo', 'co', 'oc', 'pl', 'gl', 'nb', 'bn', 'id', 'hy', 'da', 'gv', 'nl', 'pt', 'hi', 'as', 'kw', 'ga', 'sv', 'gu', 'wa', 
'lv', 'el', 'it', 'hr', 'ur', 'nn', 'de', 'cs', 'ine'] - src_constituents: {'eng'} - tgt_constituents: {'cat', 'spa', 'pap', 'mwl', 'lij', 'bos_Latn', 'lad_Latn', 'lat_Latn', 'pcd', 'oss', 'ron', 'fry', 'cym', 'awa', 'swg', 'zsm_Latn', 'srd', 'gcf_Latn', 'isl', 'yid', 'bho', 'ltz', 'kur_Latn', 'arg', 'pes_Thaa', 'sqi', 'csb_Latn', 'fra', 'hat', 'non_Latn', 'sco', 'pnb', 'roh', 'bul_Latn', 'pus', 'afr', 'ukr', 'slv', 'lit', 'tmw_Latn', 'hsb', 'tly_Latn', 'bul', 'bel', 'got_Goth', 'lat_Grek', 'ext', 'gla', 'mai', 'sin', 'hif_Latn', 'eng', 'bre', 'nob_Hebr', 'prg_Latn', 'ang_Latn', 'aln', 'mkd', 'ori', 'mar', 'afr_Arab', 'san_Deva', 'gos', 'rus', 'fao', 'orv_Cyrl', 'bel_Latn', 'cos', 'zza', 'grc_Grek', 'oci', 'mfe', 'gom', 'bjn', 'sgs', 'tgk_Cyrl', 'hye_Latn', 'pdc', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'nob', 'ben', 'min', 'srp_Latn', 'zlm_Latn', 'ind', 'rom', 'hye', 'scn', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'rus_Latn', 'jdt_Cyrl', 'gsw', 'glv', 'nld', 'snd_Arab', 'kur_Arab', 'por', 'hin', 'dsb', 'asm', 'lad', 'frm_Latn', 'ksh', 'pan_Guru', 'cor', 'gle', 'swe', 'guj', 'wln', 'lav', 'ell', 'frr', 'rue', 'ita', 'hrv', 'urd', 'stq', 'nno', 'deu', 'lld_Latn', 'ces', 'egl', 'vec', 'max_Latn', 'pes_Latn', 'ltg', 'nds'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-ine/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: ine - short_pair: en-ine - chrF2_score: 0.539 - bleu: 32.6 - brevity_penalty: 0.973 - ref_len: 68664.0 - src_name: English - tgt_name: Indo-European languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: ine - prefer_old: False - long_pair: eng-ine - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
KoichiYasuoka/roberta-classical-chinese-base-upos
7fc4c2e87291baeeebf7e870bc7ba7d1079806c3
2022-07-05T12:02:23.000Z
[ "pytorch", "roberta", "token-classification", "lzh", "dataset:universal_dependencies", "transformers", "classical chinese", "literary chinese", "ancient chinese", "pos", "dependency-parsing", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
KoichiYasuoka
null
KoichiYasuoka/roberta-classical-chinese-base-upos
49
null
transformers
6,028
--- language: - "lzh" tags: - "classical chinese" - "literary chinese" - "ancient chinese" - "token-classification" - "pos" - "dependency-parsing" datasets: - "universal_dependencies" license: "apache-2.0" pipeline_tag: "token-classification" widget: - text: "子曰學而時習之不亦説乎有朋自遠方來不亦樂乎人不知而不慍不亦君子乎" --- # roberta-classical-chinese-base-upos ## Model Description This is a RoBERTa model pre-trained on Classical Chinese texts for POS-tagging and dependency-parsing, derived from [roberta-classical-chinese-base-char](https://huggingface.co/KoichiYasuoka/roberta-classical-chinese-base-char). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/). ## How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-classical-chinese-base-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/roberta-classical-chinese-base-upos") ``` ## Reference Koichi Yasuoka: [Universal Dependencies Treebank of the Four Books in Classical Chinese](http://hdl.handle.net/2433/245217), DADH2019: 10th International Conference of Digital Archives and Digital Humanities (December 2019), pp.20-28. ## See Also [esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
Lowin/chinese-bigbird-tiny-1024
4a9197e8d5e26185e478af472b9563f046b4670a
2021-11-24T16:03:15.000Z
[ "pytorch", "big_bird", "feature-extraction", "zh", "transformers", "license:apache-2.0" ]
feature-extraction
false
Lowin
null
Lowin/chinese-bigbird-tiny-1024
49
1
transformers
6,029
--- language: - zh license: - apache-2.0 --- ```python import jieba_fast from transformers import BertTokenizer from transformers import BigBirdModel class JiebaTokenizer(BertTokenizer): def __init__( self, pre_tokenizer=lambda x: jieba_fast.cut(x, HMM=False), *args, **kwargs ): super().__init__(*args, **kwargs) self.pre_tokenizer = pre_tokenizer def _tokenize(self, text, *arg, **kwargs): split_tokens = [] for text in self.pre_tokenizer(text): if text in self.vocab: split_tokens.append(text) else: split_tokens.extend(super()._tokenize(text)) return split_tokens model = BigBirdModel.from_pretrained('Lowin/chinese-bigbird-tiny-1024') tokenizer = JiebaTokenizer.from_pretrained('Lowin/chinese-bigbird-tiny-1024') ``` https://github.com/LowinLi/chinese-bigbird
Mary222/MADE_AI_Dungeon_model_RUS
530856eddd16b08b32483393db9131c84b8b9f82
2021-11-07T16:57:43.000Z
[ "pytorch", "gpt2", "text-generation", "ru", "transformers" ]
text-generation
false
Mary222
null
Mary222/MADE_AI_Dungeon_model_RUS
49
1
transformers
6,030
--- language: ru tags: - text-generation --- # GPT2 - RUS
Matthijsvanhof/4
0ea8c0832dacb2e74001b029c627bf047b7ac23e
2021-11-27T22:42:25.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
token-classification
false
Matthijsvanhof
null
Matthijsvanhof/4
49
null
transformers
6,031
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: '4' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 4 This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1243 - Precision: 0.5220 - Recall: 0.6137 - F1: 0.5641 - Accuracy: 0.9630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 134 | 0.1357 | 0.4549 | 0.5521 | 0.4988 | 0.9574 | | No log | 2.0 | 268 | 0.1243 | 0.5220 | 0.6137 | 0.5641 | 0.9630 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Tokenizers 0.10.3
Suchandra/bengali_language_NER
5c7283d2591c18f5bda274f5ec3da15a0cce3634
2022-01-16T10:49:59.000Z
[ "pytorch", "bert", "token-classification", "bn", "dataset:wikiann", "transformers", "autotrain_compatible" ]
token-classification
false
Suchandra
null
Suchandra/bengali_language_NER
49
null
transformers
6,032
--- language: bn datasets: - wikiann examples: widget: - text: "মারভিন দি মারসিয়ান" example_title: "Sentence_1" - text: "লিওনার্দো দা ভিঞ্চি" example_title: "Sentence_2" - text: "বসনিয়া ও হার্জেগোভিনা" example_title: "Sentence_3" - text: "সাউথ ইস্ট ইউনিভার্সিটি" example_title: "Sentence_4" - text: "মানিক বন্দ্যোপাধ্যায় লেখক" example_title: "Sentence_5" --- <h1>Bengali Named Entity Recognition</h1> Fine-tuning bert-base-multilingual-cased on Wikiann dataset for performing NER on Bengali language. ## Label ID and its corresponding label name | Label ID | Label Name| | -------- | ----- | |0 | O | | 1 | B-PER | | 2 | I-PER | | 3 | B-ORG| | 4 | I-ORG | | 5 | B-LOC | | 6 | I-LOC | <h1>Results</h1> | Name | Overall F1 | LOC F1 | ORG F1 | PER F1 | | ---- | -------- | ----- | ---- | ---- | | Train set | 0.997927 | 0.998246 | 0.996613 | 0.998769 | | Validation set | 0.970187 | 0.969212 | 0.956831 | 0.982079 | | Test set | 0.9673011 | 0.967120 | 0.963614 | 0.970938 | Example ```py from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("Suchandra/bengali_language_NER") model = AutoModelForTokenClassification.from_pretrained("Suchandra/bengali_language_NER") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "মারভিন দি মারসিয়ান" ner_results = nlp(example) ner_results ```
Tanhim/gpt2-model-de
51774239e28accd2eb385a08ad10aeac5154bf2d
2022-04-22T23:24:24.000Z
[ "pytorch", "gpt2", "text-generation", "de", "transformers", "license:gpl" ]
text-generation
false
Tanhim
null
Tanhim/gpt2-model-de
49
1
transformers
6,033
--- language: de widget: - text: Hallo, ich bin ein Sprachmodell license: gpl --- <h2> GPT2 Model for German Language </h2> Model Name: Tanhim/gpt2-model-de <br /> language: German or Deutsch <br /> thumbnail: https://huggingface.co/Tanhim/gpt2-model-de <br /> datasets: Ten Thousand German News Articles Dataset <br /> ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generation= pipeline('text-generation', model='Tanhim/gpt2-model-de', tokenizer='Tanhim/gpt2-model-de') >>> set_seed(42) >>> generation("Hallo, ich bin ein Sprachmodell,", max_length=30, num_return_sequences=5) ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("Tanhim/gpt2-model-de") model = AutoModelWithLMHead.from_pretrained("Tanhim/gpt2-model-de") text = "Ersetzen Sie mich durch einen beliebigen Text, den Sie wünschen." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` Citation request: If you use the model of this repository in your research, please consider citing the following way: ```python @misc{GermanTransformer, author = {Tanhim Islam}, title = {{PyTorch Based Transformer Machine Learning Model for German Text Generation Task}}, howpublished = "\url{https://huggingface.co/Tanhim/gpt2-model-de}", year = {2021}, note = "[Online; accessed 17-June-2021]" } ```
TransQuest/monotransquest-da-en_de-wiki
cf4f548d2bba477c8c1305889d1c2013ad706835
2021-06-03T19:03:21.000Z
[ "pytorch", "xlm-roberta", "text-classification", "en-de", "transformers", "Quality Estimation", "monotransquest", "DA", "license:apache-2.0" ]
text-classification
false
TransQuest
null
TransQuest/monotransquest-da-en_de-wiki
49
null
transformers
6,034
--- language: en-de tags: - Quality Estimation - monotransquest - DA license: apache-2.0 --- # TransQuest: Translation Quality Estimation with Cross-lingual Transformers The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows as they have numerous potential uses. They can be employed to select the best translation when several translation engines are available or can inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. The quality estimation can be done at different levels: document level, sentence level and word level. With TransQuest, we have opensourced our research in translation quality estimation which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest). ## Features - Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment. - Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps. - Outperform current state-of-the-art quality estimation methods like DeepQuest and OpenKiwi in all the languages experimented. - Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest) ## Installation ### From pip ```bash pip install transquest ``` ### From Source ```bash git clone https://github.com/TharinduDR/TransQuest.git cd TransQuest pip install -r requirements.txt ``` ## Using Pre-trained Models ```python import torch from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-en_de-wiki", num_labels=1, use_cuda=torch.cuda.is_available()) predictions, raw_outputs = model.predict([["Reducerea acestor conflicte este importantă pentru conservare.", "Reducing these conflicts is not important for preservation."]]) print(predictions) ``` ## Documentation For more details follow the documentation. 1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip. 2. **Architectures** - Checkout the architectures implemented in TransQuest 1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation. 2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation. 3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks. 1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/) 2. 
[Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/) 4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level 1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/) 2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/) 5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest ## Citations If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/). ```bash @InProceedings{ranasinghe2021, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers}, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics}, year = {2021} } ``` If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020. ```bash @InProceedings{transquest:2020a, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics}, year = {2020} } ``` ```bash @InProceedings{transquest:2020b, author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan}, title = {TransQuest at WMT2020: Sentence-Level Direct Assessment}, booktitle = {Proceedings of the Fifth Conference on Machine Translation}, year = {2020} } ```
activebus/BERT-DK_laptop
10b76b26386d0aa76f0526c2d8ad3c4e9b6283cf
2021-05-18T23:00:58.000Z
[ "pytorch", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
activebus
null
activebus/BERT-DK_laptop
49
null
transformers
6,035
# ReviewBERT BERT (post-)trained from review corpus to understand sentiment, opinions and various e-commerce aspects. `BERT-DK_laptop` is trained from 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`. ## Model Description The original model is from `BERT-base-uncased` trained from Wikipedia+BookCorpus. Models are post-trained from [Amazon Dataset](http://jmcauley.ucsd.edu/data/amazon/) and [Yelp Dataset](https://www.yelp.com/dataset/challenge/). `BERT-DK_laptop` is trained from 100MB laptop corpus under `Electronics/Computers & Accessories/Laptops`. ## Instructions Loading the post-trained weights is as simple as, e.g., ```python import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("activebus/BERT-DK_laptop") model = AutoModel.from_pretrained("activebus/BERT-DK_laptop") ``` ## Evaluation Results Check our [NAACL paper](https://www.aclweb.org/anthology/N19-1242.pdf) ## Citation If you find this work useful, please cite as follows. ``` @inproceedings{xu_bert2019, title = "BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis", author = "Xu, Hu and Liu, Bing and Shu, Lei and Yu, Philip S.", booktitle = "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics", month = "jun", year = "2019", } ```
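For a quick qualitative check of the domain adaptation, a fill-mask sketch; the review sentence is illustrative and assumes the repo ships the masked-LM head its fill-mask tag suggests:

```python
from transformers import pipeline

# Predict the masked word in a laptop-domain sentence
fill = pipeline("fill-mask", model="activebus/BERT-DK_laptop")
for pred in fill("The battery [MASK] of this laptop is great."):
    print(pred["token_str"], round(pred["score"], 3))
```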
alaggung/bart-pretrained
e32cd9c352e95988fdbb07b3e6e0f4103b971e8e
2022-01-11T16:07:39.000Z
[ "pytorch", "tf", "bart", "text2text-generation", "ko", "transformers", "autotrain_compatible" ]
text2text-generation
false
alaggung
null
alaggung/bart-pretrained
49
null
transformers
6,036
--- language: - ko widget: - text: "[BOS]뭐 해?[SEP][MASK]하다가 이제 [MASK]려고[EOS]" inference: parameters: max_length: 64 --- # BART Pretrained This is the sample dialogue-summarization model shared by team 알라꿍달라꿍 for the dialogue-summarization track of the 2021 훈민정음 한국어 음성•자연어 인공지능 경진대회 (2021 Hunminjeongeum Korean Speech and Natural Language AI Competition). It was trained with the BART Pretrain stage of the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository, using the [AIHub 한국어 대화요약](https://aihub.or.kr/aidata/30714) (AIHub Korean dialogue summarization) dataset.
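A minimal generation sketch using the widget input above; it assumes the checkpoint ships a tokenizer that knows the [BOS]/[SEP]/[MASK]/[EOS] markers, and max_length follows the inference parameters in the card:

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

model_name = "alaggung/bart-pretrained"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Let the model fill in the masked spans of the widget example
inputs = tokenizer("[BOS]뭐 해?[SEP][MASK]하다가 이제 [MASK]려고[EOS]", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_length=64)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```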
alistvt/bert-base-uncased-pretrain-finetuned-coqa-falttened
b2600736ab18d6e9ef26e94b20373d676a3e908b
2022-01-22T05:06:00.000Z
[ "pytorch", "tensorboard", "bert", "question-answering", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
question-answering
false
alistvt
null
alistvt/bert-base-uncased-pretrain-finetuned-coqa-falttened
49
null
transformers
6,037
--- tags: - generated_from_trainer model-index: - name: bert-base-uncased-pretrain-finetuned-coqa-falttened results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-pretrain-finetuned-coqa-falttened This model is a fine-tuned version of [alistvt/bert-base-uncased-pretrained-mlm-coqa-stories](https://huggingface.co/alistvt/bert-base-uncased-pretrained-mlm-coqa-stories) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.8655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.2886 | 0.29 | 2000 | 3.0142 | | 3.0801 | 0.59 | 4000 | 2.8347 | | 2.9744 | 0.88 | 6000 | 2.7643 | | 2.494 | 1.18 | 8000 | 2.7605 | | 2.4417 | 1.47 | 10000 | 2.7790 | | 2.4042 | 1.77 | 12000 | 2.7382 | | 2.1285 | 2.06 | 14000 | 2.8588 | | 2.0569 | 2.36 | 16000 | 2.8937 | | 2.0794 | 2.65 | 18000 | 2.8511 | | 2.0679 | 2.95 | 20000 | 2.8655 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
cambridgeltl/mirrorwic-bert-base-uncased
906953412ec1603951324c28e6c8447bac134e0a
2021-10-25T19:18:46.000Z
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
false
cambridgeltl
null
cambridgeltl/mirrorwic-bert-base-uncased
49
null
transformers
6,038
Entry not found
diper1998/distilgpt2-finetuned-AT
23b7c3c15aee13f4ec77bc3bcb0db2d68ff9e280
2021-12-16T16:14:38.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-generation
false
diper1998
null
diper1998/distilgpt2-finetuned-AT
49
null
transformers
6,039
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-AT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-AT This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2450 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 279 | 3.3451 | | 3.4534 | 2.0 | 558 | 3.2941 | | 3.4534 | 3.0 | 837 | 3.2740 | | 3.2435 | 4.0 | 1116 | 3.2617 | | 3.2435 | 5.0 | 1395 | 3.2556 | | 3.1729 | 6.0 | 1674 | 3.2490 | | 3.1729 | 7.0 | 1953 | 3.2475 | | 3.1262 | 8.0 | 2232 | 3.2467 | | 3.0972 | 9.0 | 2511 | 3.2448 | | 3.0972 | 10.0 | 2790 | 3.2450 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
frgfm/repvgg_a0
ab1c8cc7cd9dcc49c60352791c799fbda90cf2e8
2022-07-20T00:55:54.000Z
[ "pytorch", "onnx", "dataset:frgfm/imagenette", "arxiv:2101.03697", "transformers", "image-classification", "license:apache-2.0" ]
image-classification
false
frgfm
null
frgfm/repvgg_a0
49
null
transformers
6,040
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - frgfm/imagenette --- # RepVGG-A0 model Pretrained on [ImageNette](https://github.com/fastai/imagenette). The RepVGG architecture was introduced in [this paper](https://arxiv.org/pdf/2101.03697.pdf). ## Model description The core idea of the author is to distinguish the training architecture (with shortcut connections), from the inference one (a pure highway network). By designing the residual block, the training architecture can be reparametrized into a simple sequence of convolutions and non-linear activations. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install Holocron. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pylocron/) as follows: ```shell pip install pylocron ``` or using [conda](https://anaconda.org/frgfm/pylocron): ```shell conda install -c frgfm pylocron ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/frgfm/Holocron.git pip install -e Holocron/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from holocron.models import model_from_hf_hub model = model_from_hf_hub("frgfm/repvgg_a0").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-2101-03697, author = {Xiaohan Ding and Xiangyu Zhang and Ningning Ma and Jungong Han and Guiguang Ding and Jian Sun}, title = {RepVGG: Making VGG-style ConvNets Great Again}, journal = {CoRR}, volume = {abs/2101.03697}, year = {2021}, url = {https://arxiv.org/abs/2101.03697}, eprinttype = {arXiv}, eprint = {2101.03697}, timestamp = {Tue, 09 Feb 2021 15:29:34 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2101-03697.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{Fernandez_Holocron_2020, author = {Fernandez, François-Guillaume}, month = {5}, title = {{Holocron}}, url = {https://github.com/frgfm/Holocron}, year = {2020} } ```
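The reparameterization idea described in the model description can be checked numerically. The sketch below uses synthetic tensors, ignores bias terms and the batch-norm folding that the real conversion also absorbs, and only shows the three training-time branches collapsing into one 3x3 convolution:

```python
import torch
import torch.nn.functional as F

c = 8                                  # channels; the identity branch needs in == out
x = torch.randn(1, c, 16, 16)
k3 = torch.randn(c, c, 3, 3)           # weight of the 3x3 branch
k1 = torch.randn(c, c, 1, 1)           # weight of the 1x1 branch

# Training-time block: sum of three parallel branches
y_train = F.conv2d(x, k3, padding=1) + F.conv2d(x, k1) + x

# Inference-time block: a single 3x3 kernel absorbing all branches
k_id = torch.zeros(c, c, 3, 3)
idx = torch.arange(c)
k_id[idx, idx, 1, 1] = 1.0             # identity expressed as a centered 3x3 kernel
k_fused = k3 + F.pad(k1, [1, 1, 1, 1]) + k_id
y_infer = F.conv2d(x, k_fused, padding=1)

print(torch.allclose(y_train, y_infer, atol=1e-5))  # True
```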
hakurei/gpt-j-random-tinier
8497f4876f4d6c40182a8038f023b73d75e292f3
2021-09-24T06:21:52.000Z
[ "pytorch", "gptj", "text-generation", "transformers" ]
text-generation
false
hakurei
null
hakurei/gpt-j-random-tinier
49
1
transformers
6,041
This model has been initialized with random values. It is intended for debugging purposes only.
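A quick smoke-test sketch, assuming the repo includes tokenizer files and a transformers release with GPT-J support; with random weights the output is meaningless by design:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("hakurei/gpt-j-random-tinier")
tokenizer = AutoTokenizer.from_pretrained("hakurei/gpt-j-random-tinier")

# Exercise the generation code path without downloading a full-size GPT-J
out = model.generate(**tokenizer("hello", return_tensors="pt"), max_new_tokens=5)
print(tokenizer.decode(out[0]))  # gibberish is expected with random weights
```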
healx/gpt-2-pubmed-large
f86589bacd2e80eba8a79f7e5c74ded0ddb6fb2b
2020-12-11T21:43:38.000Z
[ "pytorch", "arxiv:2004.13845", "transformers" ]
null
false
healx
null
healx/gpt-2-pubmed-large
49
null
transformers
6,042
GPT-2 (774M model) fine-tuned on 0.5M PubMed abstracts. Used in [writemeanabstract.com](http://writemeanabstract.com) and in the following preprint: [Papanikolaou, Yannis, and Andrea Pierleoni. "DARE: Data Augmented Relation Extraction with GPT-2." arXiv preprint arXiv:2004.13845 (2020).](https://arxiv.org/abs/2004.13845)
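A minimal generation sketch; the stock GPT-2 tokenizer is passed explicitly as an assumption, in case no tokenizer files accompany the checkpoint, and the prompt is illustrative:

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="healx/gpt-2-pubmed-large", tokenizer="gpt2")
set_seed(42)

# Continue a PubMed-style abstract opening
print(generator("We report a case of", max_length=60, num_return_sequences=1)[0]["generated_text"])
```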
huggingtweets/google
9c5a47405b9ff1fa60ebd0b14c8a83c50b89f28f
2021-05-22T05:54:43.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/google
49
null
transformers
6,043
--- language: en thumbnail: https://www.huggingtweets.com/google/1609714473367/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1343584679664873479/Xos3xQfk_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Google 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@google bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@google's tweets](https://twitter.com/google). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>3247</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>48</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>3</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>3196</td> </tr> </tbody> </table> [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ulajd1f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @google's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hx7jdkp) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hx7jdkp/artifacts) is logged and versioned. ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/google'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). 
In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
hyunwoongko/jaberta-base-ja-xnli
dd8b90fddaa3ca6a752604bf9140f984e75818f9
2021-05-20T16:43:51.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers" ]
text-classification
false
hyunwoongko
null
hyunwoongko/jaberta-base-ja-xnli
49
null
transformers
6,044
Entry not found
jcblaise/roberta-tagalog-small
a1a4cd4430779ef00549499d8d3e94c9d65b27e6
2021-05-20T17:12:24.000Z
[ "pytorch", "tf", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
jcblaise
null
jcblaise/roberta-tagalog-small
49
null
transformers
6,045
Entry not found
jonfd/electra-small-igc-is
5d185d0df9eff92cd66dd7f8992f581e543b6450
2022-01-05T14:56:02.000Z
[ "pytorch", "electra", "pretraining", "is", "dataset:igc", "transformers", "license:cc-by-4.0" ]
null
false
jonfd
null
jonfd/electra-small-igc-is
49
null
transformers
6,046
--- language: - is license: cc-by-4.0 datasets: - igc --- # Icelandic ELECTRA-Small This model was pretrained on the [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/), which contains approximately 1.69B tokens, using default settings. The model uses a WordPiece tokenizer with a vocabulary size of 32,105. # Acknowledgments This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC). This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
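A minimal feature-extraction sketch; the Icelandic example sentence is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "jonfd/electra-small-igc-is"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Contextual embeddings for an Icelandic sentence
inputs = tokenizer("Halló, hvað segir þú gott?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (1, sequence_length, hidden_size)
```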
jxuhf/roberta-base-finetuned-cola
d617c9a8e20fa313eaf3d9dabf38ae732044feca
2021-07-23T22:08:00.000Z
[ "pytorch", "tensorboard", "roberta", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:mit" ]
text-classification
false
jxuhf
null
jxuhf/roberta-base-finetuned-cola
49
null
transformers
6,047
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model_index: - name: roberta-base-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metric: name: Matthews Correlation type: matthews_correlation value: 0.557882735147727 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-cola This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.4716 - Matthews Correlation: 0.5579 ## Model description More information needed ## Intended uses & limitations ```python from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("jxuhf/roberta-base-finetuned-cola") ``` More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.4981 | 1.0 | 535 | 0.5162 | 0.5081 | | 0.314 | 2.0 | 1070 | 0.4716 | 0.5579 | ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
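Extending the loading snippet in this card into a full scoring sketch; the example sentence is illustrative and the meaning of each output index follows the checkpoint's label config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "jxuhf/roberta-base-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# CoLA: score a sentence for linguistic acceptability
inputs = tokenizer("The boys is playing outside.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(model.config.id2label, probs)
```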
m3hrdadfi/albert-fa-base-v2-ner-arman
37ab49d817bea304fed5d59be2da3142b7ff5cb0
2020-12-26T08:36:57.000Z
[ "pytorch", "tf", "albert", "token-classification", "fa", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
m3hrdadfi
null
m3hrdadfi/albert-fa-base-v2-ner-arman
49
3
transformers
6,048
--- language: fa license: apache-2.0 --- # ALBERT Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT. Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models. ## Persian NER [ARMAN, PEYMA] This task aims to extract named entities in the text, such as names and label with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”` the `”B”`tag corresponds to the first word of an object, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN`, and `PEYMA`. ### ARMAN ARMAN dataset holds 7,682 sentences with 250,015 sentences tagged over six different classes. 1. Organization 2. Location 3. Facility 4. Event 5. Product 6. Person | Label | # | |:------------:|:-----:| | Organization | 30108 | | Location | 12924 | | Facility | 4458 | | Event | 7557 | | Product | 4389 | | Person | 15645 | **Download** You can download the dataset from [here](https://github.com/HaniehP/PersianNER) ## Results The following table summarizes the F1 score obtained as compared to other models and architectures. | Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:| | ARMAN | 97.43 | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 | ### BibTeX entry and citation info Please cite in publications as the following: ```bibtex @misc{ALBERTPersian, author = {Mehrdad Farahani}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
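A minimal NER sketch; the Persian example sentence is illustrative, aggregation_strategy needs a reasonably recent transformers release, and the emitted labels follow the ARMAN tag set listed above:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_name = "m3hrdadfi/albert-fa-base-v2-ner-arman"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Group subword predictions into entity spans
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("تهران پایتخت ایران است."))
```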
mmm-da/anekdot_funny2_rugpt3Small
e3cd30702e5a0929050128d88036aad3c6524982
2021-05-23T09:51:06.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
mmm-da
null
mmm-da/anekdot_funny2_rugpt3Small
49
null
transformers
6,049
Entry not found
mrm8488/roberta-large-finetuned-wsc
ae61d2ab2fde7b732b37bd4760ce4bb1b3c2e36f
2021-05-20T18:30:59.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "arxiv:1905.06290", "transformers", "autotrain_compatible" ]
fill-mask
false
mrm8488
null
mrm8488/roberta-large-finetuned-wsc
49
null
transformers
6,050
# RoBERTa (large) fine-tuned on Winograd Schema Challenge (WSC) data Step from its original [repo](https://github.com/pytorch/fairseq/blob/master/examples/roberta/wsc/README.md) The following instructions can be used to finetune RoBERTa on the WSC training data provided by [SuperGLUE](https://super.gluebenchmark.com/). Note that there is high variance in the results. For our GLUE/SuperGLUE submission we swept over the learning rate (1e-5, 2e-5, 3e-5), batch size (16, 32, 64) and total number of updates (500, 1000, 2000, 3000), as well as the random seed. Out of ~100 runs we chose the best 7 models and ensembled them. **Approach:** The instructions below use a slightly different loss function than what's described in the original RoBERTa arXiv paper. In particular, [Kocijan et al. (2019)](https://arxiv.org/abs/1905.06290) introduce a margin ranking loss between `(query, candidate)` pairs with tunable hyperparameters alpha and beta. This is supported in our code as well with the `--wsc-alpha` and `--wsc-beta` arguments. However, we achieved slightly better (and more robust) results on the development set by instead using a single cross entropy loss term over the log-probabilities for the query and all mined candidates. **The candidates are mined using spaCy from each input sentence in isolation, so the approach remains strictly pointwise.** This reduces the number of hyperparameters and our best model achieved 92.3% development set accuracy, compared to ~90% accuracy for the margin loss. Later versions of the RoBERTa arXiv paper will describe this updated formulation. ### 1) Download the WSC data from the SuperGLUE website: ```bash wget https://dl.fbaipublicfiles.com/glue/superglue/data/v2/WSC.zip unzip WSC.zip # we also need to copy the RoBERTa dictionary into the same directory wget -O WSC/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt ``` ### 2) Finetune over the provided training data: ```bash TOTAL_NUM_UPDATES=2000 # Total number of training steps. WARMUP_UPDATES=250 # Linearly increase LR over this many steps. LR=2e-05 # Peak LR for polynomial LR scheduler. MAX_SENTENCES=16 # Batch size per GPU. SEED=1 # Random seed. ROBERTA_PATH=/path/to/roberta/model.pt # we use the --user-dir option to load the task and criterion # from the examples/roberta/wsc directory: FAIRSEQ_PATH=/path/to/fairseq FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train WSC/ \ --restore-file $ROBERTA_PATH \ --reset-optimizer --reset-dataloader --reset-meters \ --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \ --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ --valid-subset val \ --fp16 --ddp-backend no_c10d \ --user-dir $FAIRSEQ_USER_DIR \ --task wsc --criterion wsc --wsc-cross-entropy \ --arch roberta_large --bpe gpt2 --max-positions 512 \ --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \ --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \ --lr-scheduler polynomial_decay --lr $LR \ --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \ --max-sentences $MAX_SENTENCES \ --max-update $TOTAL_NUM_UPDATES \ --log-format simple --log-interval 100 \ --seed $SEED ``` The above command assumes training on 4 GPUs, but you can achieve the same results on a single GPU by adding `--update-freq=4`. 
### 3) Evaluate ```python from fairseq.models.roberta import RobertaModel from examples.roberta.wsc import wsc_utils # also loads WSC task and criterion roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'WSC/') roberta.cuda() nsamples, ncorrect = 0, 0 for sentence, label in wsc_utils.jsonl_iterator('WSC/val.jsonl', eval=True): pred = roberta.disambiguate_pronoun(sentence) nsamples += 1 if pred == label: ncorrect += 1 print('Accuracy: ' + str(ncorrect / float(nsamples))) # Accuracy: 0.9230769230769231 ``` ## RoBERTa training on WinoGrande dataset We have also provided `winogrande` task and criterion for finetuning on the [WinoGrande](https://mosaic.allenai.org/projects/winogrande) like datasets where there are always two candidates and one is correct. It's more efficient implementation for such subcases. ```bash TOTAL_NUM_UPDATES=23750 # Total number of training steps. WARMUP_UPDATES=2375 # Linearly increase LR over this many steps. LR=1e-05 # Peak LR for polynomial LR scheduler. MAX_SENTENCES=32 # Batch size per GPU. SEED=1 # Random seed. ROBERTA_PATH=/path/to/roberta/model.pt # we use the --user-dir option to load the task and criterion # from the examples/roberta/wsc directory: FAIRSEQ_PATH=/path/to/fairseq FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc cd fairseq CUDA_VISIBLE_DEVICES=0 fairseq-train winogrande_1.0/ \ --restore-file $ROBERTA_PATH \ --reset-optimizer --reset-dataloader --reset-meters \ --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \ --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ --valid-subset val \ --fp16 --ddp-backend no_c10d \ --user-dir $FAIRSEQ_USER_DIR \ --task winogrande --criterion winogrande \ --wsc-margin-alpha 5.0 --wsc-margin-beta 0.4 \ --arch roberta_large --bpe gpt2 --max-positions 512 \ --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \ --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \ --lr-scheduler polynomial_decay --lr $LR \ --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \ --max-sentences $MAX_SENTENCES \ --max-update $TOTAL_NUM_UPDATES \ --log-format simple --log-interval 100 ``` [Original repo](https://github.com/pytorch/fairseq/tree/master/examples/roberta/wsc)
nateraw/codecarbon-text-classification
8f32d9ecc161f64c142a81435f712549b927acf6
2022-02-07T20:30:43.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:imdb", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
nateraw
null
nateraw/codecarbon-text-classification
49
null
transformers
6,051
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: codecarbon-text-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codecarbon-text-classification This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
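Since the card gives no usage snippet, a minimal inference sketch; the returned label names (for example LABEL_0 / LABEL_1) depend on the checkpoint's config, and the review text is illustrative:

```python
from transformers import pipeline

# IMDB-style sentiment classification
clf = pipeline("text-classification", model="nateraw/codecarbon-text-classification")
print(clf("A surprisingly touching film with a great cast."))
```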
pucpr/biobertpt-clin
ae80140591239b057e935734ae78b74f57e5f71c
2021-10-13T09:28:07.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "pt", "dataset:biomedical literature from Scielo and Pubmed", "transformers", "autotrain_compatible" ]
fill-mask
false
pucpr
null
pucpr/biobertpt-clin
49
5
transformers
6,052
--- language: "pt" widget: - text: "O paciente recebeu [MASK] do hospital." - text: "O médico receitou a medicação para controlar a [MASK]." datasets: - biomedical literature from Scielo and Pubmed thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/BioBERTpt/master/images/logo-biobertpr1.png" alt="Logo BioBERTpt"> # BioBERTpt - Portuguese Clinical and Biomedical BERT The [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) paper contains clinical and biomedical BERT-based models for Portuguese Language, initialized with BERT-Multilingual-Cased & trained on clinical notes and biomedical literature. This model card describes the BioBERTpt(clin) model, a clinical version of BioBERTpt, trained on clinical narratives from electronic health records from Brazilian Hospitals. ## How to use the model Load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("pucpr/biobertpt-clin") model = AutoModel.from_pretrained("pucpr/biobertpt-clin") ``` ## More Information Refer to the original paper, [BioBERTpt - A Portuguese Neural Language Model for Clinical Named Entity Recognition](https://www.aclweb.org/anthology/2020.clinicalnlp-1.7/) for additional details and performance on Portuguese NER tasks. ## Acknowledgements This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. ## Citation ``` @inproceedings{schneider-etal-2020-biobertpt, title = "{B}io{BERT}pt - A {P}ortuguese Neural Language Model for Clinical Named Entity Recognition", author = "Schneider, Elisa Terumi Rubel and de Souza, Jo{\~a}o Vitor Andrioli and Knafou, Julien and Oliveira, Lucas Emanuel Silva e and Copara, Jenny and Gumiel, Yohan Bonescki and Oliveira, Lucas Ferro Antunes de and Paraiso, Emerson Cabrera and Teodoro, Douglas and Barra, Cl{\'a}udia Maria Cabral Moro", booktitle = "Proceedings of the 3rd Clinical Natural Language Processing Workshop", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.clinicalnlp-1.7", pages = "65--72", abstract = "With the growing number of electronic health record data, clinical NLP tasks have become increasingly relevant to unlock valuable information from unstructured clinical text. Although the performance of downstream NLP tasks, such as named-entity recognition (NER), in English corpus has recently improved by contextualised language models, less research is available for clinical texts in low resource languages. Our goal is to assess a deep contextual embedding model for Portuguese, so called BioBERTpt, to support clinical and biomedical NER. We transfer learned information encoded in a multilingual-BERT model to a corpora of clinical narratives and biomedical-scientific papers in Brazilian Portuguese. To evaluate the performance of BioBERTpt, we ran NER experiments on two annotated corpora containing clinical narratives and compared the results with existing BERT models. Our in-domain model outperformed the baseline model in F1-score by 2.72{\%}, achieving higher performance in 11 out of 13 assessed entities. 
We demonstrate that enriching contextual embedding models with domain literature can play an important role in improving performance for specific NLP tasks. The transfer learning process enhanced the Portuguese biomedical NER model by reducing the necessity of labeled data and the demand for retraining a whole new model.", } ``` ## Questions? Post a Github issue on the [BioBERTpt repo](https://github.com/HAILab-PUCPR/BioBERTpt).
striki-ai/william-shakespeare-poetry
46a3d4b3fa010e4d449ea42aaeb290281fbebc03
2021-06-05T20:25:15.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
striki-ai
null
striki-ai/william-shakespeare-poetry
49
null
transformers
6,053
Entry not found
textattack/xlnet-base-cased-QQP
9529809aceb015404efebb2efa12fe831c9f8636
2020-06-09T16:56:26.000Z
[ "pytorch", "xlnet", "text-classification", "transformers" ]
text-classification
false
textattack
null
textattack/xlnet-base-cased-QQP
49
null
transformers
6,054
Entry not found
unicamp-dl/mMiniLM-L6-v2-mmarco-v1
8d896aa5cb826f1a56eda9fd3d1573799cb75aed
2022-01-05T21:29:46.000Z
[ "pytorch", "xlm-roberta", "text-classification", "pt", "dataset:msmarco", "arxiv:2108.13897", "transformers", "msmarco", "miniLM", "tensorflow", "pt-br", "license:mit" ]
text-classification
false
unicamp-dl
null
unicamp-dl/mMiniLM-L6-v2-mmarco-v1
49
1
transformers
6,055
--- language: pt license: mit tags: - msmarco - miniLM - pytorch - tensorflow - pt - pt-br datasets: - msmarco widget: - text: "Texto de exemplo em português" inference: false --- # mMiniLM-L6-v2 Reranker finetuned on mMARCO ## Introduction mMiniLM-L6-v2-mmarco-v1 is a multilingual miniLM-based model finetuned on a multilingual version of MS MARCO passage dataset. This dataset, named mMARCO, is formed by passages in 9 different languages, translated from English MS MARCO passages collection. In the version v1, the datasets were translated using [Helsinki](https://huggingface.co/Helsinki-NLP) NMT model. Further information about the dataset or the translation method can be found on our [**mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) and [mMARCO](https://github.com/unicamp-dl/mMARCO) repository. ## Usage ```python from transformers import AutoTokenizer, AutoModel model_name = 'unicamp-dl/mMiniLM-L6-v2-mmarco-v1' tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` # Citation If you use mMiniLM-L6-v2-mmarco-v1, please cite: @misc{bonifacio2021mmarco, title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, eprint={2108.13897}, archivePrefix={arXiv}, primaryClass={cs.CL} }
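The loading snippet above uses AutoModel, which only returns encoder states; for reranking, a cross-encoder scoring sketch is closer to the intended use. The sequence-classification head and the shape of its logits are assumptions taken from the repo's text-classification tag, and the query/passage pair is illustrative:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "unicamp-dl/mMiniLM-L6-v2-mmarco-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

query = "qual é a capital do Brasil?"
passage = "Brasília é a capital federal do Brasil desde 1960."
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits
print(score)  # higher relevance score for better query-passage matches
```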
malteos/aspect-scibert-task
372552f649487d03b7512ecd05badf1f8ce84d13
2022-03-16T12:47:47.000Z
[ "pytorch", "bert", "feature-extraction", "transformers", "license:mit" ]
feature-extraction
false
malteos
null
malteos/aspect-scibert-task
49
1
transformers
6,056
--- license: mit ---
czw/gpt2-base-chinese-finetuned-job-resume
5b0cbff021614ce29fe375e6b40fa2a2d2e15f79
2022-05-04T03:38:53.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:gpl-3.0", "model-index" ]
text-generation
false
czw
null
czw/gpt2-base-chinese-finetuned-job-resume
49
null
transformers
6,057
--- license: gpl-3.0 tags: - generated_from_trainer model-index: - name: gpt2-base-chinese-finetuned-job-resume results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-base-chinese-finetuned-job-resume This model is a fine-tuned version of [ckiplab/gpt2-base-chinese](https://huggingface.co/ckiplab/gpt2-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 480 | 2.3271 | | 2.4967 | 2.0 | 960 | 2.2729 | | 2.2259 | 3.0 | 1440 | 2.2658 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cpu - Datasets 2.1.0 - Tokenizers 0.12.1
tomhosking/bert-base-uncased-debiased-nli
7214ca65939d0108455359cd5121584cbdfb1fb3
2022-05-06T15:28:40.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
tomhosking
null
tomhosking/bert-base-uncased-debiased-nli
49
null
transformers
6,058
--- license: apache-2.0 widget: - text: "[CLS] Rover is a dog. [SEP] Rover is a cat. [SEP]" --- `bert-base-uncased`, fine-tuned on the debiased NLI dataset from "Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets", Wu et al., 2022. Tuned using the code at https://github.com/jimmycode/gen-debiased-nli
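A minimal NLI sketch using the widget's premise/hypothesis pair; how the output indices map to entailment, neutral and contradiction follows the checkpoint's label config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "tomhosking/bert-base-uncased-debiased-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Rover is a dog.", "Rover is a cat.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(model.config.id2label, probs)
```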
kabelomalapane/nso_en_ukuxhumana_model
fc3c3d1e76040c91af9da3fbde9b33a978932524
2022-05-21T01:15:15.000Z
[ "pytorch", "tensorboard", "marian", "text2text-generation", "transformers", "translation", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
translation
false
kabelomalapane
null
kabelomalapane/nso_en_ukuxhumana_model
49
null
transformers
6,059
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: nso_en_ukuxhumana_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nso_en_ukuxhumana_model This model is a fine-tuned version of [Helsinki-NLP/opus-mt-nso-en](https://huggingface.co/Helsinki-NLP/opus-mt-nso-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9349 - Bleu (before training): 9.3297 - Bleu: 18.1161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.0
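A minimal translation sketch; the Northern Sotho example sentence is illustrative:

```python
from transformers import pipeline

# Northern Sotho (Sepedi) -> English
translator = pipeline("translation", model="kabelomalapane/nso_en_ukuxhumana_model")
print(translator("Dumela, o kae?", max_length=64)[0]["translation_text"])
```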
sumedh/distilbart-cnn-12-6-amazonreviews
ec3da5a07519f708c3cba007911e09d321c57aeb
2022-05-22T17:47:06.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:amazon_reviews_multi", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible" ]
summarization
false
sumedh
null
sumedh/distilbart-cnn-12-6-amazonreviews
49
null
transformers
6,060
---
language: en
tags:
- summarization
license: apache-2.0
datasets:
- amazon_reviews_multi
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
---

### Usage

This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| 1.2875        | 1.0   | 5754 | 1.6294          | 11.009 | 7.4618 | 10.5573 | 10.8087   | 58.3382 |

### Framework versions

- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
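A minimal summarization sketch following the card's note that the checkpoint loads into `BartForConditionalGeneration`; the review text and generation settings below are illustrative assumptions, not part of the original card.

```python
# Sketch only -- assumes the repo includes tokenizer files; the review text and
# beam-search settings are arbitrary illustrations.
from transformers import BartForConditionalGeneration, BartTokenizer

model_id = "sumedh/distilbart-cnn-12-6-amazonreviews"
tokenizer = BartTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

review = "The blender works well and is easy to clean, but the lid feels flimsy and leaks."
inputs = tokenizer(review, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```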
memyprokotow/rut5-REBEL-base
6036825ca01ef3409b972c101d77a262f367ed7c
2022-06-07T17:37:00.000Z
[ "pytorch", "t5", "text2text-generation", "ru", "dataset:memyprokotow/rebel-dataset-rus", "transformers", "seq2seq", "relation-extraction", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
memyprokotow
null
memyprokotow/rut5-REBEL-base
49
null
transformers
6,061
---
language:
- ru
tags:
- seq2seq
- relation-extraction
- t5
license: apache-2.0
datasets:
- memyprokotow/rebel-dataset-rus
widget:
- text: "За последние 9 месяцев инвесторы в азиатские долларовые долговые обязательства потеряли 155 миллиардов долларов, пострадав от слабости Китая в дополнение к глобальной распродаже фиксированного дохода, наблюдаемой во всем мире по мере роста процентных ставок."
---

# REBEL-ru

Based on the Russian part of Wikipedia (scraped with CROCODILE). The model was trained for 3 epochs on Russian ruT5-base.

# How to use

Same code as REBEL-large (https://huggingface.co/Babelscape/rebel-large):

```
from transformers import pipeline

text = '''За последние 9 месяцев инвесторы в азиатские долларовые долговые обязательства потеряли 155 миллиардов долларов, пострадав от слабости Китая в дополнение к глобальной распродаже фиксированного дохода, наблюдаемой во всем мире по мере роста процентных ставок.
'''

model_path = r"memyprokotow/rut5-REBEL-base"
triplet_extractor = pipeline('text2text-generation', model=model_path, tokenizer=model_path,
    #device=0
    )
# We need to use the tokenizer manually since we need special tokens.
extracted_text = triplet_extractor.tokenizer.batch_decode([triplet_extractor(text, return_tensors=True, return_text=False, max_length=500)[0]["generated_token_ids"]])

print(extracted_text[0])

# Function to parse the generated text and extract the triplets
def extract_triplets(text):
    triplets = []
    relation, subject, relation, object_ = '', '', '', ''
    text = text.strip()
    current = 'x'
    for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").split():
        if token == "<triplet>":
            current = 't'
            if relation != '':
                triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
                relation = ''
            subject = ''
        elif token == "<subj>":
            current = 's'
            if relation != '':
                triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
            object_ = ''
        elif token == "<obj>":
            current = 'o'
            relation = ''
        else:
            if current == 't':
                subject += ' ' + token
            elif current == 's':
                object_ += ' ' + token
            elif current == 'o':
                relation += ' ' + token
    if subject != '' and relation != '' and object_ != '':
        triplets.append({'head': subject.strip(), 'type': relation.strip(), 'tail': object_.strip()})
    return triplets

extracted_triplets = extract_triplets(extracted_text[0])
print(extracted_triplets)
```
pranavk/bart-paraphrase-finetuned-xsum-v3
e73d747d56a215cbaa3b069b6c66f1678bc9aa7c
2022-06-07T21:01:46.000Z
[ "pytorch", "tensorboard", "bart", "text2text-generation", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
text2text-generation
false
pranavk
null
pranavk/bart-paraphrase-finetuned-xsum-v3
49
null
transformers
6,062
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-paraphrase-finetuned-xsum-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-finetuned-xsum-v3 This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1881 - Rouge1: 99.9251 - Rouge2: 99.9188 - Rougel: 99.9251 - Rougelsum: 99.9251 - Gen Len: 10.17 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 100 | 0.2702 | 99.9251 | 99.9188 | 99.9251 | 99.9251 | 10.38 | | No log | 2.0 | 200 | 0.2773 | 99.9251 | 99.9188 | 99.9251 | 99.9251 | 11.45 | | No log | 3.0 | 300 | 0.2178 | 99.8148 | 99.7051 | 99.8208 | 99.8148 | 11.19 | | No log | 4.0 | 400 | 0.3649 | 99.9251 | 99.9188 | 99.9251 | 99.9251 | 12.32 | | 0.1561 | 5.0 | 500 | 0.2532 | 99.8957 | 99.8875 | 99.8957 | 99.8918 | 10.375 | | 0.1561 | 6.0 | 600 | 0.2050 | 99.9251 | 99.9188 | 99.9251 | 99.9251 | 11.15 | | 0.1561 | 7.0 | 700 | 0.2364 | 99.8957 | 99.8875 | 99.8957 | 99.8918 | 10.18 | | 0.1561 | 8.0 | 800 | 0.2006 | 99.9251 | 99.9188 | 99.9251 | 99.9251 | 10.17 | | 0.1561 | 9.0 | 900 | 0.1628 | 99.9251 | 99.9188 | 99.9251 | 99.9251 | 10.23 | | 0.1538 | 10.0 | 1000 | 0.1881 | 99.9251 | 99.9188 | 99.9251 | 99.9251 | 10.17 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
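A hypothetical usage sketch (not from the original card): it assumes the fine-tune is still used like its `eugenesiow/bart-paraphrase` base, i.e. as a seq2seq paraphraser via the `text2text-generation` pipeline.

```python
# Sketch under assumptions: generic text2text-generation usage; the input sentence is arbitrary.
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="pranavk/bart-paraphrase-finetuned-xsum-v3")
print(paraphraser("The quick brown fox jumps over the lazy dog.", max_length=40))
```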
Anery/legalbert_clause_combined
14b1fe3357c5d98f72549b58df084fe800e44ed0
2022-06-09T16:53:41.000Z
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Anery
null
Anery/legalbert_clause_combined
49
null
transformers
6,063
Entry not found
anahitapld/DABert
0c5d25199d20a36cfd452b16ecc4940593a6017e
2022-06-28T08:17:13.000Z
[ "pytorch", "bert", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
anahitapld
null
anahitapld/DABert
49
null
transformers
6,064
--- license: apache-2.0 ---
ukr-models/uk-punctcase
76132ef389bb0f782449dc23b5aae2ddbc07c7db
2022-07-13T12:29:54.000Z
[ "pytorch", "xlm-roberta", "token-classification", "uk", "transformers", "ukrainian", "license:mit", "autotrain_compatible" ]
token-classification
false
ukr-models
null
ukr-models/uk-punctcase
49
2
transformers
6,065
---
language:
- uk
tags:
- ukrainian
widget:
- text: "упродовж 2012-2014 років національний природний парк «зачарований край» разом із всесвітнім фондом природи wwf успішно реалізували проект із відновлення болота «чорне багно» розташованого на схилах гори бужора у закарпатті водноболотне угіддя «чорне багно» є найбільшою болотною екосистемою регіону воно займає площу близько 15 га унікальністю цього високогірного болота розташованого на висоті 840 м над рівнем моря є велика потужність торфових покладів (глибиною до 59 м) і своєрідна рослинність у 50-х і на початку 60-х років минулого століття на природних потічках що протікали через болото побудували осушувальні канали це порушило природну рівновагу відтак змінилася екосистема болота"
license: mit
---

## Model Description

Fine-tuning of the [XLM-RoBERTa-Uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) model on Ukrainian texts to recover punctuation and case.

## How to Use

Download the script get_predictions.py from the repository.

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from get_predictions import recover_text

tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-punctcase')
model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-punctcase')

text = "..."
recover_text(text, model, tokenizer)
```
valurank/headline_generator
506ab61e446f096c4fa5ca33dc1d5f26c6b38e6d
2022-07-20T12:28:06.000Z
[ "pytorch", "tensorboard", "pegasus", "text2text-generation", "transformers", "generated_from_trainer", "model-index", "autotrain_compatible" ]
text2text-generation
false
valurank
null
valurank/headline_generator
49
null
transformers
6,066
--- tags: - generated_from_trainer model-index: - name: multi_news_headline_generator results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multi_news_headline_generator This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5795 | 0.8 | 500 | 0.3341 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
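A hypothetical usage sketch (not from the original card): it treats the checkpoint like its `google/pegasus-multi_news` base, i.e. a seq2seq model called through the `summarization` pipeline to produce a short headline; the article text and length limits are arbitrary.

```python
# Sketch only -- assumes the summarization pipeline applies; article and length limits are arbitrary.
from transformers import pipeline

headline = pipeline("summarization", model="valurank/headline_generator")
article = "Officials announced on Tuesday that the city will expand its bike-share program to five new districts next spring."
print(headline(article, max_length=20, min_length=5)[0]["summary_text"])
```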
hassan4830/xlm-roberta-base-finetuned-urdu
6f55304f1cb7baa2d7db054862d21aaa2ceffa37
2022-07-25T07:09:45.000Z
[ "pytorch", "xlm-roberta", "text-classification", "ur", "transformers", "license:afl-3.0" ]
text-classification
false
hassan4830
null
hassan4830/xlm-roberta-base-finetuned-urdu
49
1
transformers
6,067
---
language: ur
license: afl-3.0
---

# XLM-RoBERTa-Urdu-Classification

This [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) text classification model, trained on an Urdu sentiment [data-set](https://huggingface.co/datasets/hassan4830/urdu-binary-classification-data), performs binary sentiment classification on any given Urdu sentence. The model has been fine-tuned for better results in manageable time frames.

## Model description

XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks.

The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data.

### How to use

You can import this model directly from the transformers library:

```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")
>>> model = AutoModelForSequenceClassification.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")
```

Here is how to use this model to get the label of a given text:

```python
>>> from transformers import TextClassificationPipeline
>>> text = "وہ ایک برا شخص ہے"
>>> pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True, device=0)
>>> pipe(text)
```
nthakur/mMiniLMv2-L12-H384-ms-marco-all-epoch-40
ae98e13213920bd16117311cb2833f6b8e87d872
2022-07-26T12:49:46.000Z
[ "pytorch", "xlm-roberta", "feature-extraction", "transformers" ]
feature-extraction
false
nthakur
null
nthakur/mMiniLMv2-L12-H384-ms-marco-all-epoch-40
49
null
transformers
6,068
Entry not found
JAlexis/ajusteBert004
e18d7d5c89d2e85f8c4e3d85a819047a53310e5f
2022-07-26T17:41:51.000Z
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
JAlexis
null
JAlexis/ajusteBert004
49
null
transformers
6,069
Entry not found
KoichiYasuoka/chinese-roberta-large-upos
43751ea82d5908ead06d3ed67babea4cece682c0
2022-02-11T06:30:18.000Z
[ "pytorch", "bert", "token-classification", "zh", "dataset:universal_dependencies", "transformers", "chinese", "pos", "wikipedia", "dependency-parsing", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
KoichiYasuoka
null
KoichiYasuoka/chinese-roberta-large-upos
48
null
transformers
6,070
---
language:
- "zh"
tags:
- "chinese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
---

# chinese-roberta-large-upos

## Model Description

This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos")
```

or

```py
import esupar
nlp = esupar.load("KoichiYasuoka/chinese-roberta-large-upos")
```

## See Also

[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer, POS-tagger and Dependency-parser with BERT/RoBERTa models
LeoFeng/ChineseSequenceClassification
872a56197737bb565feb3f882434a4974eb49310
2022-01-02T09:13:10.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
LeoFeng
null
LeoFeng/ChineseSequenceClassification
48
1
transformers
6,071
An article classifier trained on the THUC dataset, supporting 14 categories in total.
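A usage sketch under assumptions (not from the original card): it presumes the repo exposes a standard BERT sequence-classification head and that the 14 THUCNews-style category names live in the checkpoint's `id2label` config; the input sentence is arbitrary.

```python
# Minimal sketch -- category names come from the checkpoint's id2label config, not from this card.
from transformers import pipeline

clf = pipeline("text-classification", model="LeoFeng/ChineseSequenceClassification")
print(clf("央行宣布下调存款准备金率,释放长期资金约五千亿元。"))
```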
Media1129/keyword-tag-model-9000-v2
7cd4018260eedf9fb8d29056770c7483fe04420b
2021-08-30T06:04:11.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
Media1129
null
Media1129/keyword-tag-model-9000-v2
48
null
transformers
6,072
Entry not found
Narrativa/t5-base-finetuned-totto-table-to-text
a281b1998bc6c9529b306e5950f860a62f7de42d
2021-08-07T08:55:25.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Narrativa
null
Narrativa/t5-base-finetuned-totto-table-to-text
48
2
transformers
6,073
Entry not found
cola/chinese-address-ner
28da85cbc4d51e3234280899742df96ce65efeb1
2021-07-20T08:59:34.000Z
[ "pytorch", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
cola
null
cola/chinese-address-ner
48
1
transformers
6,074
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model_index: - name: chinese-address-ner results: - task: name: Token Classification type: token-classification metric: name: Accuracy type: accuracy value: 0.975825946817083 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chinese-address-ner This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unkown dataset. It achieves the following results on the evaluation set: - Loss: 0.1080 - Precision: 0.9664 - Recall: 0.9774 - F1: 0.9719 - Accuracy: 0.9758 ## Model description 输入一串地址中文信息,比如快递单:`北京市海淀区西北旺东路10号院(马连洼街道西北旺社区东北方向)`,按照行政级别(总有 7 级)抽取地址信息,返回每个 token 的类别。具体类别含义表示如下: | 返回类别 | BIO 体系 | 解释 | | ----------- | -------- | ---------------------- | | **LABEL_0** | O | 忽略信息 | | **LABEL_1** | B-A1 | 第一级地址(头) | | **LABEL_2** | I-A1 | 第一级地址(其余部分) | | ... | ... | ... | More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 50 - eval_batch_size: 50 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 2.5055 | 1.0 | 7 | 1.6719 | 0.1977 | 0.2604 | 0.2248 | 0.5649 | | 1.837 | 2.0 | 14 | 1.0719 | 0.4676 | 0.6 | 0.5256 | 0.7421 | | 1.0661 | 3.0 | 21 | 0.7306 | 0.6266 | 0.7472 | 0.6816 | 0.8106 | | 0.8373 | 4.0 | 28 | 0.5197 | 0.6456 | 0.8113 | 0.7191 | 0.8614 | | 0.522 | 5.0 | 35 | 0.3830 | 0.7667 | 0.8679 | 0.8142 | 0.9001 | | 0.4295 | 6.0 | 42 | 0.3104 | 0.8138 | 0.8906 | 0.8505 | 0.9178 | | 0.3483 | 7.0 | 49 | 0.2453 | 0.8462 | 0.9132 | 0.8784 | 0.9404 | | 0.2471 | 8.0 | 56 | 0.2081 | 0.8403 | 0.9132 | 0.8752 | 0.9428 | | 0.2299 | 9.0 | 63 | 0.1979 | 0.8419 | 0.9245 | 0.8813 | 0.9420 | | 0.1761 | 10.0 | 70 | 0.1823 | 0.8830 | 0.9396 | 0.9104 | 0.9500 | | 0.1434 | 11.0 | 77 | 0.1480 | 0.9036 | 0.9547 | 0.9284 | 0.9629 | | 0.134 | 12.0 | 84 | 0.1341 | 0.9173 | 0.9623 | 0.9392 | 0.9678 | | 0.128 | 13.0 | 91 | 0.1365 | 0.9375 | 0.9623 | 0.9497 | 0.9694 | | 0.0824 | 14.0 | 98 | 0.1159 | 0.9557 | 0.9774 | 0.9664 | 0.9734 | | 0.0744 | 15.0 | 105 | 0.1092 | 0.9591 | 0.9736 | 0.9663 | 0.9766 | | 0.0569 | 16.0 | 112 | 0.1117 | 0.9556 | 0.9736 | 0.9645 | 0.9742 | | 0.0559 | 17.0 | 119 | 0.1040 | 0.9628 | 0.9774 | 0.9700 | 0.9790 | | 0.0456 | 18.0 | 126 | 0.1052 | 0.9593 | 0.9774 | 0.9682 | 0.9782 | | 0.0405 | 19.0 | 133 | 0.1133 | 0.9590 | 0.9698 | 0.9644 | 0.9718 | | 0.0315 | 20.0 | 140 | 0.1060 | 0.9591 | 0.9736 | 0.9663 | 0.9750 | | 0.0262 | 21.0 | 147 | 0.1087 | 0.9554 | 0.9698 | 0.9625 | 0.9718 | | 0.0338 | 22.0 | 154 | 0.1183 | 0.9625 | 0.9698 | 0.9662 | 0.9726 | | 0.0225 | 23.0 | 161 | 0.1080 | 0.9664 | 0.9774 | 0.9719 | 0.9758 | | 0.028 | 24.0 | 168 | 0.1057 | 0.9591 | 0.9736 | 0.9663 | 0.9742 | | 0.0202 | 25.0 | 175 | 0.1062 | 0.9628 | 0.9774 | 0.9700 | 0.9766 | | 0.0168 | 26.0 | 182 | 0.1097 | 0.9664 | 0.9774 | 0.9719 | 0.9758 | | 0.0173 | 27.0 | 189 | 0.1093 | 0.9628 | 0.9774 | 0.9700 | 0.9774 | | 0.0151 | 
28.0 | 196 | 0.1162 | 0.9628 | 0.9774 | 0.9700 | 0.9766 | | 0.0135 | 29.0 | 203 | 0.1126 | 0.9483 | 0.9698 | 0.9590 | 0.9758 | | 0.0179 | 30.0 | 210 | 0.1100 | 0.9449 | 0.9698 | 0.9572 | 0.9774 | | 0.0161 | 31.0 | 217 | 0.1098 | 0.9449 | 0.9698 | 0.9572 | 0.9766 | | 0.0158 | 32.0 | 224 | 0.1191 | 0.9483 | 0.9698 | 0.9590 | 0.9734 | | 0.0151 | 33.0 | 231 | 0.1058 | 0.9483 | 0.9698 | 0.9590 | 0.9750 | | 0.0121 | 34.0 | 238 | 0.0990 | 0.9593 | 0.9774 | 0.9682 | 0.9790 | | 0.0092 | 35.0 | 245 | 0.1128 | 0.9519 | 0.9698 | 0.9607 | 0.9774 | | 0.0097 | 36.0 | 252 | 0.1181 | 0.9627 | 0.9736 | 0.9681 | 0.9766 | | 0.0118 | 37.0 | 259 | 0.1185 | 0.9591 | 0.9736 | 0.9663 | 0.9782 | | 0.0118 | 38.0 | 266 | 0.1021 | 0.9557 | 0.9774 | 0.9664 | 0.9823 | | 0.0099 | 39.0 | 273 | 0.1000 | 0.9559 | 0.9811 | 0.9683 | 0.9815 | | 0.0102 | 40.0 | 280 | 0.1025 | 0.9559 | 0.9811 | 0.9683 | 0.9815 | | 0.0068 | 41.0 | 287 | 0.1080 | 0.9522 | 0.9774 | 0.9646 | 0.9807 | | 0.0105 | 42.0 | 294 | 0.1157 | 0.9449 | 0.9698 | 0.9572 | 0.9766 | | 0.0083 | 43.0 | 301 | 0.1207 | 0.9380 | 0.9698 | 0.9536 | 0.9766 | | 0.0077 | 44.0 | 308 | 0.1208 | 0.9483 | 0.9698 | 0.9590 | 0.9766 | | 0.0077 | 45.0 | 315 | 0.1176 | 0.9483 | 0.9698 | 0.9590 | 0.9774 | | 0.0071 | 46.0 | 322 | 0.1137 | 0.9483 | 0.9698 | 0.9590 | 0.9790 | | 0.0075 | 47.0 | 329 | 0.1144 | 0.9483 | 0.9698 | 0.9590 | 0.9782 | | 0.0084 | 48.0 | 336 | 0.1198 | 0.9483 | 0.9698 | 0.9590 | 0.9766 | | 0.0103 | 49.0 | 343 | 0.1217 | 0.9519 | 0.9698 | 0.9607 | 0.9766 | | 0.0087 | 50.0 | 350 | 0.1230 | 0.9519 | 0.9698 | 0.9607 | 0.9766 | ### Framework versions - Transformers 4.8.2 - Pytorch 1.8.0 - Datasets 1.9.0 - Tokenizers 0.10.3
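A minimal NER sketch (not part of the original card): it uses the generic token-classification pipeline and a shortened version of the card's example address; the returned `LABEL_*` / `B-A1`-style tags are decoded according to the table above.

```python
# Sketch only -- aggregation_strategy="simple" groups subwords; adjust as needed.
from transformers import pipeline

ner = pipeline("token-classification", model="cola/chinese-address-ner", aggregation_strategy="simple")
print(ner("北京市海淀区西北旺东路10号院"))
```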
espejelomar/beto-base-cased
4da5782593ca46e0fa5276432c681d11e969d83c
2021-12-07T22:24:15.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
espejelomar
null
espejelomar/beto-base-cased
48
null
transformers
6,075
Entry not found
funnel-transformer/xlarge-base
5efaa39740d551ebf123c67230e739420439e765
2020-12-11T21:40:48.000Z
[ "pytorch", "tf", "funnel", "feature-extraction", "en", "dataset:bookcorpus", "dataset:wikipedia", "dataset:gigaword", "arxiv:2006.03236", "transformers", "license:apache-2.0" ]
feature-extraction
false
funnel-transformer
null
funnel-transformer/xlarge-base
48
null
transformers
6,076
---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---

# Funnel Transformer xlarge model (B10-10-10 without decoder)

Pretrained model on English language using a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.

**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if you need one input per initial token. You should use the `xlarge` model in that case.

## Intended uses & limitations

You can use the raw model to extract a vector representation of a given text, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2.

### How to use

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/xlarge-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/xlarge-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.

### BibTeX entry and citation info

```bibtex
@misc{dai2020funneltransformer,
    title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
    author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
    year={2020},
    eprint={2006.03236},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
gagan3012/Fox-News-Generator
24399eb68fd6edf0dbfdd3ba831c0286e22825d5
2021-05-21T16:03:28.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
false
gagan3012
null
gagan3012/Fox-News-Generator
48
2
transformers
6,077
# Generating Right Wing News Using GPT2

### I have built a custom model for it using data from Kaggle

Creating a new fine-tuned model using data from FOX News.

### My model can be accessed at gagan3012/Fox-News-Generator

Check the [BenchmarkTest](https://github.com/gagan3012/Fox-News-Generator/blob/master/BenchmarkTest.ipynb) notebook for results.

Find the model at [gagan3012/Fox-News-Generator](https://huggingface.co/gagan3012/Fox-News-Generator)

```
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("gagan3012/Fox-News-Generator")
model = AutoModelWithLMHead.from_pretrained("gagan3012/Fox-News-Generator")
```
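A follow-up generation sketch (not in the original repo snippet): it calls the checkpoint through the standard text-generation pipeline; the prompt and sampling settings are arbitrary examples.

```python
# Hypothetical generation example; prompt and sampling settings are arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="gagan3012/Fox-News-Generator")
print(generator("Breaking news:", max_length=60, do_sample=True, top_k=50)[0]["generated_text"])
```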
hfl/chinese-electra-large-discriminator
d5b57a2a772c1f47f37274dbea0cc736c2ef9d9b
2021-03-03T01:42:48.000Z
[ "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0" ]
null
false
hfl
null
hfl/chinese-electra-large-discriminator
48
null
transformers
6,078
---
language:
- zh
license: "apache-2.0"
---

**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```
huggingtweets/flatironschool
363501627e18df71b77be0329f38ad61c427be53
2021-05-22T04:20:52.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "en", "transformers", "huggingtweets" ]
text-generation
false
huggingtweets
null
huggingtweets/flatironschool
48
null
transformers
6,079
--- language: en thumbnail: https://www.huggingtweets.com/flatironschool/1603341000640/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278450406843125762/f5u_F2ng_400x400.png')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Flatiron School (at 🏡) 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@flatironschool bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@flatironschool's tweets](https://twitter.com/flatironschool). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>3202</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>1068</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>582</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>1552</td> </tr> </tbody> </table> [Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/179qzrny/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @flatironschool's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/174rjbb8) for full transparency and reproducibility. At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/174rjbb8/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/flatironschool'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets) <!--- random size file -->
it5/it5-large-question-answering
1a9102e557a469141fa9fee356b99e76553564de
2022-03-09T07:57:53.000Z
[ "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "it", "dataset:squad_it", "arxiv:2203.03759", "transformers", "italian", "sequence-to-sequence", "squad_it", "text2text-question-answering", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
it5
null
it5/it5-large-question-answering
48
2
transformers
6,080
--- language: - it license: apache-2.0 datasets: - squad_it tags: - italian - sequence-to-sequence - squad_it - text2text-question-answering - text2text-generation widget: - text: "In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?" - text: "L' embargo non era uniforme in tutta Europa. Dei nove membri della Comunità Economica Europea (CEE), i Paesi Bassi hanno dovuto affrontare un embargo totale, il Regno Unito e la Francia hanno ricevuto forniture quasi ininterrotte (poichè si sono rifiutati di consentire all' America di utilizzare i loro aerodromi e le armi e forniture embargo sia agli arabi che agli israeliani), mentre gli altri sei hanno dovuto affrontare tagli parziali. Il Regno Unito era tradizionalmente un alleato di Israele, e il governo di Harold Wilson ha sostenuto gli israeliani durante la guerra dei sei giorni. Il suo successore, Ted Heath, ribaltò questa politica nel 1970, chiedendo a Israele di ritirarsi ai suoi confini prima del 1967. Domanda: Il Regno Unito e la Francia non hanno avuto interruzioni dell' approvvigionamento petrolifero in quanto non hanno consentito a quale paese di utilizzare il loro aeroporto?" - text: "Nel 1962, il grafico Paul Rand ridisegna il logo ABC nella sua forma più conosciuta (e attuale) con le lettere minuscole \"abc\" racchiuse in un unico cerchio nero. Il nuovo logo esordisce in onda per le promozioni di ABC all' inizio della stagione 1963-64. Le lettere ricordano fortemente il carattere tipografico Bauhaus disegnato da Herbert Bayer negli anni Venti, ma condividono anche similitudini con diversi altri caratteri, come ITC Avant Garde e Horatio, e lo Chalet più simile. La semplicità del logo ha reso più facile la riprogettazione e la duplicazione, il che ha conferito un beneficio per ABC (soprattutto prima dell' avvento della computer grafica). Domanda: Di quale carattere tipografico ricordano le lettere dell' iconico logo ABC?" - text: "La fotorespirazione può verificarsi quando la concentrazione di ossigeno è troppo elevata. Rubisco non è in grado di distinguere molto bene tra ossigeno e anidride carbonica, quindi può accidentalmente aggiungere O2 invece di CO2 a RuBP. Questo processo riduce l' efficienza della fotosintesi: consuma ATP e ossigeno, rilascia CO2 e non produce zucchero. Può sprecare fino alla metà del carbonio fissato dal ciclo di Calvin. Diversi meccanismi si sono evoluti in diversi lignaggi che aumentano la concentrazione di anidride carbonica rispetto all' ossigeno all' interno del cloroplasto, aumentando l' efficienza della fotosintesi. Questi meccanismi sono chiamati meccanismi di concentrazione dell' anidride carbonica, o CCM. 
Tra questi figurano il metabolismo degli acidi crassulaceanici, la fissazione del carbonio C4 e i pirenoidi. I cloroplasti negli impianti C4 sono notevoli in quanto presentano un chiaro dimorfismo cloroplastico. Domanda: Che cosa può fare rubisco per errore?" metrics: - f1 - exact-match model-index: - name: it5-large-question-answering results: - task: type: question-answering name: "Question Answering" dataset: type: squad_it name: "SQuAD-IT" metrics: - type: f1 value: 0.780 name: "Test F1" - type: exact-match value: 0.691 name: "Test Exact Match" co2_eq_emissions: emissions: 51g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # IT5 Large for Question Answering ⁉️ 🇮🇹 This repository contains the checkpoint for the [IT5 Large](https://huggingface.co/gsarti/it5-large) model fine-tuned on extractive question answering on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipelines qa = pipeline("text2text-generation", model='it5/it5-large-question-answering') qa("In seguito all' evento di estinzione del Cretaceo-Paleogene, l' estinzione dei dinosauri e il clima umido possono aver permesso alla foresta pluviale tropicale di diffondersi in tutto il continente. Dal 66-34 Mya, la foresta pluviale si estendeva fino a sud fino a 45°. Le fluttuazioni climatiche degli ultimi 34 milioni di anni hanno permesso alle regioni della savana di espandersi fino ai tropici. Durante l' Oligocene, ad esempio, la foresta pluviale ha attraversato una banda relativamente stretta. Si espandeva di nuovo durante il Miocene medio, poi si ritrasse ad una formazione prevalentemente interna all' ultimo massimo glaciale. Tuttavia, la foresta pluviale è riuscita ancora a prosperare durante questi periodi glaciali, consentendo la sopravvivenza e l' evoluzione di un' ampia varietà di specie. Domanda: La foresta pluviale amazzonica è diventata per lo più una foresta interna intorno a quale evento globale?") >>> [{"generated_text": "ultimo massimo glaciale"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/it5-large-question-answering") model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-large-question-answering") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
it5/mt5-base-question-generation
9098f5e3f5b44e7b8504da97575291509f36e333
2022-03-09T07:54:16.000Z
[ "pytorch", "tf", "jax", "tensorboard", "mt5", "text2text-generation", "it", "dataset:squad_it", "arxiv:2203.03759", "transformers", "italian", "sequence-to-sequence", "question-generation", "squad_it", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible" ]
text2text-generation
false
it5
null
it5/mt5-base-question-generation
48
null
transformers
6,081
--- language: - it license: apache-2.0 datasets: - squad_it tags: - italian - sequence-to-sequence - question-generation - squad_it - text2text-generation widget: - text: "Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia" - text: "Il 14 aprile 2011, ABC ha annullato le lunghe opere di sapone All My Children e One Life to Live dopo 41 e 43 anni in onda, rispettivamente (in seguito al contraccolpo dei tifosi, ABC ha venduto i diritti ad entrambi gli spettacoli a Prospect Park, che alla fine ha rilanciato i saponi su Hulu per un' ulteriore stagione nel 2013 e con entrambe le società che si citano in giudizio per accuse di interferenza con il processo di rilancio degli spettacoli, mancato pagamento delle tasse di licenza. Il talk/lifestyle show che ha sostituito One Life to Live, The Revolution, non è riuscito a generare giudizi soddisfacenti ed è stato a sua volta annullato dopo soli sette mesi. La stagione 2011-12 ha visto l' ABC cadere al quarto posto nel 18-49 demografico nonostante rinnovando una manciata di nuovi spettacoli (compresi i drammi matricole Scandal, Revenge e Once Upon a Time) per la seconda stagione. Risposta: Hulu" - text: "L' American Broadcasting Company (ABC) (stlized nel suo logo come abc dal 1957) è una rete televisiva commerciale americana trasmissione televisiva che è di proprietà del Disney-ABC Television Group, una controllata della divisione Disney Media Networks di The Walt Disney Company. La rete fa parte delle grandi reti televisive Big Three. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan, con ulteriori uffici e stabilimenti di produzione a New York City, Los Angeles e Burbank, California. Risposta: Manhattan" - text: "La disobbedienza civile non rivoluzionaria è una semplice disobbedienza delle leggi sulla base del fatto che sono giudicate \"sbagliate\" da una coscienza individuale, o come parte di uno sforzo per rendere alcune leggi inefficaci, per causarne l' abrogazione, o per esercitare pressioni per ottenere i propri desideri politici su qualche altra questione. La disobbedienza civile rivoluzionaria è più che altro un tentativo attivo di rovesciare un governo (o di cambiare le tradizioni culturali, i costumi sociali, le credenze religiose, ecc. La rivoluzione non deve necessariamente essere politica, cioè \"rivoluzione culturale\", implica semplicemente un cambiamento radicale e diffuso in una sezione del tessuto sociale). Gli atti di Gandhi sono stati descritti come disobbedienza civile rivoluzionaria. È stato affermato che gli ungheresi sotto Ferenc Deák hanno diretto una disobbedienza civile rivoluzionaria contro il governo austriaco. Thoreau ha anche scritto di disobbedienza civile realizzando \"rivoluzione pacifica\". 
Howard Zinn, Harvey Wheeler e altri hanno identificato il diritto sposato nella Dichiarazione d' Indipendenza di \"alterare o abolire\" un governo ingiusto come principio di disobbedienza civile. Risposta: Ferenc Deák" metrics: - rouge - bertscore model-index: - name: mt5-base-question-generation results: - task: type: question-generation name: "Question generation" dataset: type: squad_it name: "SQuAD-IT" metrics: - type: rouge1 value: 0.346 name: "Test Rouge1" - type: rouge2 value: 0.174 name: "Test Rouge2" - type: rougeL value: 0.324 name: "Test RougeL" - type: bertscore value: 0.495 name: "Test BERTScore" args: - model_type: "dbmdz/bert-base-italian-xxl-uncased" - lang: "it" - num_layers: 10 - rescale_with_baseline: True - baseline_path: "bertscore_baseline_ita.tsv" co2_eq_emissions: emissions: "40g" source: "Google Cloud Platform Carbon Footprint" training_type: "fine-tuning" geographical_location: "Eemshaven, Netherlands, Europe" hardware_used: "1 TPU v3-8 VM" thumbnail: https://gsarti.com/publication/it5/featured.png --- # mT5 Base for Question Generation 💭 🇮🇹 This repository contains the checkpoint for the [mT5 Base](https://huggingface.co/google/mt5-base) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io). A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach. ## Using the model Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as: ```python from transformers import pipelines qg = pipeline("text2text-generation", model='it5/mt5-base-question-generation') qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una "grande pestilenza nell\' aria". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola "peste" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia") >>> [{"generated_text": "Per chi è stato redatto il referto medico?"}] ``` or loaded using autoclasses: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("it5/mt5-base-question-generation") model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-base-question-generation") ``` If you use this model in your research, please cite our work as: ```bibtex @article{sarti-nissim-2022-it5, title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation}, author={Sarti, Gabriele and Nissim, Malvina}, journal={ArXiv preprint 2203.03759}, url={https://arxiv.org/abs/2203.03759}, year={2022}, month={mar} } ```
jaesun/distilbert-base-uncased-finetuned-cola
8d60f5fdf0d336a8f5d3cf8cb1d705ac4c76c16f
2021-10-20T17:47:49.000Z
[ "pytorch", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
jaesun
null
jaesun/distilbert-base-uncased-finetuned-cola
48
null
transformers
6,082
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.51728018358102 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8815 - Matthews Correlation: 0.5173 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5272 | 1.0 | 535 | 0.5099 | 0.4093 | | 0.3563 | 2.0 | 1070 | 0.5114 | 0.5019 | | 0.2425 | 3.0 | 1605 | 0.6696 | 0.4898 | | 0.1726 | 4.0 | 2140 | 0.7715 | 0.5123 | | 0.132 | 5.0 | 2675 | 0.8815 | 0.5173 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1 - Datasets 1.14.0 - Tokenizers 0.10.3
lighteternal/gpt2-finetuned-greek
3a0d959c9494f904c4c0b8e0ab39e0a5dac2c66b
2021-05-23T08:33:11.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "el", "transformers", "causal-lm", "license:apache-2.0" ]
text-generation
false
lighteternal
null
lighteternal/gpt2-finetuned-greek
48
null
transformers
6,083
---
language:
- el
tags:
- pytorch
- causal-lm
widget:
- text: "Το αγαπημένο μου μέρος είναι"
license: apache-2.0
---

# Greek (el) GPT2 model

<img src="https://huggingface.co/lighteternal/gpt2-finetuned-greek-small/raw/main/GPT2el.png" width="600"/>

### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)

* language: el
* licence: apache-2.0
* dataset: ~23.4 GB of Greek corpora
* model: GPT2 (12-layer, 768-hidden, 12-heads, 117M parameters. OpenAI GPT-2 English model, finetuned for the Greek language)
* pre-processing: tokenization + BPE segmentation
* metrics: perplexity

### Model description

A text generation (autoregressive) model, using Huggingface transformers and fastai, based on the English GPT-2. Finetuned with gradual layer unfreezing. This is a more efficient and sustainable alternative compared to training from scratch, especially for low-resource languages.

Based on the work of Thomas Dehaene (ML6) for the creation of a Dutch GPT2: https://colab.research.google.com/drive/1Y31tjMkB8TqKKFlZ5OJ9fcMp3p8suvs4?usp=sharing

### How to use

```
from transformers import pipeline

model = "lighteternal/gpt2-finetuned-greek"

generator = pipeline(
    'text-generation',
    device=0,
    model=f'{model}',
    tokenizer=f'{model}')

text = "Μια φορά κι έναν καιρό"

print("\n".join([x.get("generated_text") for x in generator(
    text,
    max_length=len(text.split(" ")) + 15,
    do_sample=True,
    top_k=50,
    repetition_penalty=1.2,
    add_special_tokens=False,
    num_return_sequences=5,
    temperature=0.95,
    top_p=0.95)]))
```

## Training data

We used a 23.4GB sample from a consolidated Greek corpus from CC100, Wikimatrix, Tatoeba, Books, SETIMES and GlobalVoices containing long sequences. This is a better version of our GPT-2 small model (https://huggingface.co/lighteternal/gpt2-finetuned-greek-small)

## Metrics

| Metric          | Value |
| --------------- | ----- |
| Train Loss      | 3.67  |
| Validation Loss | 3.83  |
| Perplexity      | 39.12 |

### Acknowledgement

The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call)

Based on the work of Thomas Dehaene (ML6): https://blog.ml6.eu/dutch-gpt2-autoregressive-language-modelling-on-a-budget-cff3942dd020
mrm8488/deberta-v3-small-finetuned-cola
eed29ebc1fcbc35eea945e50510c93f1bb4895d4
2021-12-07T17:18:59.000Z
[ "pytorch", "tensorboard", "deberta-v2", "text-classification", "en", "dataset:glue", "arxiv:2006.03654", "arxiv:2111.09543", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
mrm8488
null
mrm8488/deberta-v3-small-finetuned-cola
48
2
transformers
6,084
--- language: - en license: mit tags: - generated_from_trainer datasets: - glue widget: - text: "They represented seriously to the dean Mary as a genuine linguist." metrics: - matthews_correlation model-index: - name: deberta-v3-small results: - task: name: Text Classification type: text-classification dataset: name: GLUE COLA type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.6333205721749096 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # DeBERTa-v3-small fine-tuned on CoLA This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.4051 - Matthews Correlation: 0.6333 ## Model description [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and enhanced mask decoder. With those two improvements, DeBERTa out perform RoBERTa on a majority of NLU tasks with 80GB training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates. In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD(Replaced Token Detection) objective introduced by ELECTRA for pre-training, as well as some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves the model performance in downstream tasks. You can find a simple introduction about the model from the appendix A11 in our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up. The DeBERTa V3 small model comes with 6 layers and a hidden size of 768. Its total parameter number is 143M since we use a vocabulary containing 128K tokens which introduce 98M parameters in the Embedding layer. This model was trained using the 160GB data as DeBERTa V2. ## Intended uses & limitations More information needed ## Training and evaluation data The Corpus of Linguistic Acceptability (CoLA) in its full form consists of 10657 sentences from 23 linguistics publications, expertly annotated for acceptability (grammaticality) by their original authors. The public version provided here contains 9594 sentences belonging to training and development sets, and excludes 1063 sentences belonging to a held out test set. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | No log | 1.0 | 535 | 0.4051 | 0.6333 | | 0.3371 | 2.0 | 1070 | 0.4455 | 0.6531 | | 0.3371 | 3.0 | 1605 | 0.5755 | 0.6499 | | 0.1305 | 4.0 | 2140 | 0.7188 | 0.6553 | | 0.1305 | 5.0 | 2675 | 0.8047 | 0.6700 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
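A hypothetical acceptability-check sketch (not from the original card), using the widget sentence; the fine-tune's label names are whatever is stored in `model.config.id2label` (often `LABEL_0`/`LABEL_1` for CoLA fine-tunes), so verify the mapping before interpreting scores.

```python
# Sketch only -- label semantics must be confirmed against model.config.id2label.
from transformers import pipeline

cola = pipeline("text-classification", model="mrm8488/deberta-v3-small-finetuned-cola")
print(cola("They represented seriously to the dean Mary as a genuine linguist."))
```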
nielsr/convnext-xlarge-224-22k
c75e01f6e883d3fe653d452a3f9cb18e0a8e5ef1
2022-02-22T11:11:07.000Z
[ "pytorch", "convnext", "image-classification", "transformers" ]
image-classification
false
nielsr
null
nielsr/convnext-xlarge-224-22k
48
null
transformers
6,085
Entry not found
prajjwal1/roberta-large-mnli
2b3512bd70fcc9a896cb85d36a3c21c443f2ae8f
2021-10-05T18:03:08.000Z
[ "pytorch", "roberta", "text-classification", "arxiv:2110.01518", "transformers" ]
text-classification
false
prajjwal1
null
prajjwal1/roberta-large-mnli
48
null
transformers
6,086
If you use the model, please consider citing the paper ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). Roberta-large trained on MNLI. ---------------------- | Task | Accuracy | |---------|----------| | MNLI | 90.15 | | MNLI-mm | 90.02 | You can also check out: - `prajjwal1/roberta-base-mnli` - `prajjwal1/roberta-large-mnli` - `prajjwal1/albert-base-v2-mnli` - `prajjwal1/albert-base-v1-mnli` - `prajjwal1/albert-large-v2-mnli` [@prajjwal_1](https://twitter.com/prajjwal_1)
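The card gives no inference snippet, so a minimal sketch (not from the original card) follows. It assumes the checkpoint loads with the standard `AutoModelForSequenceClassification` API; the order of the three MNLI classes should be confirmed against the checkpoint's `id2label` mapping before use.
```python
# Hedged usage sketch: MNLI-style premise/hypothesis classification.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "prajjwal1/roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."  # illustrative example, not from the card
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair the way MNLI models expect.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze().tolist()

# Check model.config.id2label for the class order before relying on these indices.
print(dict(zip(model.config.id2label.values(), probs)))
```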
seongju/klue-mrc-koelectra-base
92ce9600694f0837cdd9ebe02d261c0deaedab09
2021-08-19T13:05:26.000Z
[ "pytorch", "electra", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
false
seongju
null
seongju/klue-mrc-koelectra-base
48
null
transformers
6,087
Entry not found
yhavinga/gpt2-large-dutch
3c46b2c7dd9cdaf0be01df572bc0134888f4517b
2022-03-20T10:21:46.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "nl", "dataset:yhavinga/mc4_nl_cleaned", "transformers", "gpt2-large" ]
text-generation
false
yhavinga
null
yhavinga/gpt2-large-dutch
48
1
transformers
6,088
--- language: nl widget: - text: "In het jaar 2030 zullen we" - text: "Toen ik gisteren volledig in de ban was van" - text: "Studenten en leraren van de Bogazici Universiteit in de Turkse stad Istanbul" - text: "In Israël was een strenge lockdown" tags: - gpt2-large - gpt2 pipeline_tag: text-generation datasets: - yhavinga/mc4_nl_cleaned --- # GPT2-Large pre-trained on cleaned Dutch mC4 🇳🇱 A GPT2 large model (762M parameters) trained from scratch on Dutch, with perplexity 15.1 on cleaned Dutch mC4. ## How To Use You can use this GPT-2 model directly with a pipeline for text generation. ```python MODEL_DIR='yhavinga/gpt2-large-dutch' from transformers import pipeline, GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained(MODEL_DIR) model = GPT2LMHeadModel.from_pretrained(MODEL_DIR) generator = pipeline('text-generation', model, tokenizer=tokenizer) generated_text = generator('Het eiland West-', max_length=100, do_sample=True, top_k=40, top_p=0.95, repetition_penalty=2.0) ``` *"Het eiland West-" - "Terschelling wordt sinds jaar en dag bewoond door de mens. De mensen die in het huidige Terherne wonen doen er alles aan om hun dorp te behouden voor deze diersoort, namelijk; een natuurreservaat dat vooral bestaat uit hoge duinen met lage begroeing waar planten van vroeger worden afgewisseld (zoals wilde hyacinten)en waarop grassen groeien waarvan sommige soorten zeldzame vormen hebben ontwikkeld: duinlelie of blauwe bosbes zijn bijvoorbeeld bekend vanwege onder andere kleurmole"* ## Tokenizer * BPE tokenizer trained from scratch for Dutch on mC4 nl cleaned with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling). ## Dataset This model was trained on the `full` configuration (33B tokens) of [cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned), which is the original mC4, except * Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed * Sentences with fewer than 3 words are removed * Sentences with a word of more than 1000 characters are removed * Documents with fewer than 5 sentences are removed * Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies", "use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed. ## Models TL;DR: [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) is the best model. * The models with `a`/`b` in the step-column have been trained to step `a` of a total of `b` steps.
| | model | params | train seq len | ppl | loss | batch size | epochs | steps | optim | lr | duration | config | |-----------------------------------------------------------------------------------|---------|--------|---------------|------|------|------------|--------|-----------------|-----------|--------|----------|-----------| | [yhavinga/gpt-neo-125M-dutch](https://huggingface.co/yhavinga/gpt-neo-125M-dutch) | gpt neo | 125M | 512 | 20.9 | 3.04 | 128 | 1 | 190000/558608 | adam | 2.4e-3 | 1d 12h | full | | [yhavinga/gpt2-medium-dutch](https://huggingface.co/yhavinga/gpt2-medium-dutch) | gpt2 | 345M | 512 | 15.1 | 2.71 | 128 | 1 | 320000/520502 | adam | 8e-4 | 7d 2h | full | | [yhavinga/gpt2-large-dutch](https://huggingface.co/yhavinga/gpt2-large-dutch) | gpt2 | 762M | 512 | 15.1 | 2.72 | 32 | 1 | 1100000/2082009 | adafactor | 3.3e-5 | 8d 15h | large | | [yhavinga/gpt-neo-1.3B-dutch](https://huggingface.co/yhavinga/gpt-neo-1.3B-dutch) | gpt neo | 1.3B | 512 | 16.0 | 2.77 | 16 | 1 | 960000/3049896 | adafactor | 5e-4 | 7d 11h | full | ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was also instrumental in most, if not all, parts of the training. The following repositories were helpful in setting up the TPU-VM and training the models: * [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp) * [HuggingFace Flax MLM examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) * [gpt2-medium-persian](https://huggingface.co/flax-community/gpt2-medium-persian) * [gpt2-medium-indonesian](https://huggingface.co/flax-community/gpt2-medium-persian) Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
0x7194633/pyGPT-50M
602850406ef8a27044c92e30e5b2b9202226b3fa
2022-07-01T03:29:17.000Z
[ "pytorch", "gpt2", "text-generation", "en", "code", "transformers", "license:mpl-2.0" ]
text-generation
false
0x7194633
null
0x7194633/pyGPT-50M
48
1
transformers
6,089
--- license: mpl-2.0 language: - en - code --- ## PythonGPT A GPT-2-type neural network trained from scratch on 16 gigabytes of Python scripts. It has 50 million parameters and was made as a toy project.
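The card describes the model but shows no usage, so a minimal generation sketch (not from the original card) is added below. It assumes the checkpoint works with the standard GPT-2 tokenizer and `text-generation` pipeline; since the card calls the model a toy, output quality is not guaranteed, and the prompt is purely illustrative.
```python
# Hedged usage sketch: sampling Python-flavoured text from the checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="0x7194633/pyGPT-50M")

prompt = "def fibonacci(n):"  # illustrative prompt, not from the card
outputs = generator(prompt, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=1)
print(outputs[0]["generated_text"])
```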
ukr-models/uk-ner
3c903f55014e7c10376c6c22e81d96e0cee0a1e4
2022-04-07T05:54:54.000Z
[ "pytorch", "xlm-roberta", "token-classification", "uk", "transformers", "ukrainian", "license:mit", "autotrain_compatible" ]
token-classification
false
ukr-models
null
ukr-models/uk-ner
48
1
transformers
6,090
--- language: - uk tags: - ukrainian widget: - text: "Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера." license: mit --- ## Model Description Fine-tuning of [XLM-RoBERTa-Uk](https://huggingface.co/ukr-models/xlm-roberta-base-uk) model on [synthetic NER dataset](https://huggingface.co/datasets/ukr-models/Ukr-Synth) with B-PER, I-PER, B-LOC, I-LOC, B-ORG, I-ORG tags ## How to Use Huggingface pipeline way (returns tokens with labels): ```py from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-ner') model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-ner') ner = pipeline('ner', model=model, tokenizer=tokenizer) ner("Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера.") ``` If you wish to get predictions split by words, not by tokens, you may use the following approach (download script get_predictions.py from the repository, it uses [package tokenize_uk](https://pypi.org/project/tokenize_uk/) for splitting) ```py from transformers import AutoTokenizer, AutoModelForTokenClassification from get_predictions import get_word_predictions tokenizer = AutoTokenizer.from_pretrained('ukr-models/uk-ner') model = AutoModelForTokenClassification.from_pretrained('ukr-models/uk-ner') get_word_predictions(model, tokenizer, ["Могила Тараса Шевченка — місце поховання видатного українського поета Тараса Шевченка в місті Канів (Черкаська область) на Чернечій горі, над яким із 1939 року височіє бронзовий пам'ятник роботи скульптора Матвія Манізера."]) ```
ml6team/keyphrase-generation-t5-small-openkp
7af3cadf6123a9384aff7f35200d575f3c59ede0
2022-06-16T18:02:54.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:midas/openkp", "arxiv:1911.02671", "transformers", "keyphrase-generation", "license:mit", "model-index", "autotrain_compatible" ]
text2text-generation
false
ml6team
null
ml6team/keyphrase-generation-t5-small-openkp
48
null
transformers
6,091
--- language: en license: mit tags: - keyphrase-generation datasets: - midas/openkp widget: - text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text." example_title: "Example 1" - text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points inF1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition(NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks." example_title: "Example 2" model-index: - name: DeDeckerThomas/keyphrase-generation-t5-small-openkp results: - task: type: keyphrase-generation name: Keyphrase Generation dataset: type: midas/openkp name: openkp metrics: - type: F1@M (Present) value: 0.246 name: F1@M (Present) - type: F1@O (Present) value: 0.151 name: F1@O (Present) - type: F1@M (Absent) value: 0.002 name: F1@M (Absent) - type: F1@O (Absent) value: 7.56e-5 name: F1@O (Absent) --- # 🔑 Keyphrase Generation model: T5-small-OpenKP Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳. Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. 
Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text. ## 📓 Model Description This model uses [T5-small model](https://huggingface.co/t5-small) as its base model and fine-tunes it on the [OpenKP dataset](https://huggingface.co/datasets/midas/openkp). Keyphrase generation transformers are fine-tuned as a text-to-text generation problem where the keyphrases are generated. The result is a concatenated string with all keyphrases separated by a given delimiter (i.e. “;”). These models are capable of generating present and absent keyphrases. ## ✋ Intended Uses & Limitations ### 🛑 Limitations * Only works for English documents. * For a custom model, please consult the training notebook for more information (link incoming). * Sometimes the output doesn't make any sense. ### ❓ How To Use ```python # Model parameters from transformers import ( Text2TextGenerationPipeline, AutoModelForSeq2SeqLM, AutoTokenizer, ) class KeyphraseGenerationPipeline(Text2TextGenerationPipeline): def __init__(self, model, keyphrase_sep_token=";", *args, **kwargs): super().__init__( model=AutoModelForSeq2SeqLM.from_pretrained(model), tokenizer=AutoTokenizer.from_pretrained(model), *args, **kwargs ) self.keyphrase_sep_token = keyphrase_sep_token def postprocess(self, model_outputs): results = super().postprocess( model_outputs=model_outputs ) return [[keyphrase.strip() for keyphrase in result.get("generated_text").split(self.keyphrase_sep_token) if keyphrase != ""] for result in results] ``` ```python # Load pipeline model_name = "ml6team/keyphrase-generation-t5-small-openkp" generator = KeyphraseGenerationPipeline(model=model_name) ``` ```python text = """ Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time. Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text. """.replace("\n", " ") keyphrases = generator(text) print(keyphrases) ``` ``` # Output [['keyphrase extraction', 'text analysis', 'artificial intelligence']] ``` ## 📚 Training Dataset [OpenKP](https://github.com/microsoft/OpenKP) is a large-scale, open-domain keyphrase extraction dataset with 148,124 real-world web documents along with 1-3 most relevant human-annotated keyphrases. You can find more information in the [paper](https://arxiv.org/abs/1911.02671). ## 👷‍♂️ Training Procedure For more in detail information, you can take a look at the [training notebook](). 
### Training Parameters | Parameter | Value | | --------- | ------| | Learning Rate | 5e-5 | | Epochs | 50 | | Early Stopping Patience | 1 | ### Preprocessing The documents in the dataset are already preprocessed into list of words with the corresponding keyphrases. The only thing that must be done is tokenization and joining all keyphrases into one string with a certain seperator of choice( ```;``` ). ```python from datasets import load_dataset from transformers import AutoTokenizer # Tokenizer tokenizer = AutoTokenizer.from_pretrained("t5-small", add_prefix_space=True) # Dataset parameters dataset_full_name = "midas/inspec" dataset_subset = "raw" dataset_document_column = "document" keyphrase_sep_token = ";" def preprocess_keyphrases(text_ids, kp_list): kp_order_list = [] kp_set = set(kp_list) text = tokenizer.decode( text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True ) text = text.lower() for kp in kp_set: kp = kp.strip() kp_index = text.find(kp.lower()) kp_order_list.append((kp_index, kp)) kp_order_list.sort() present_kp, absent_kp = [], [] for kp_index, kp in kp_order_list: if kp_index < 0: absent_kp.append(kp) else: present_kp.append(kp) return present_kp, absent_kp def preprocess_fuction(samples): processed_samples = {"input_ids": [], "attention_mask": [], "labels": []} for i, sample in enumerate(samples[dataset_document_column]): input_text = " ".join(sample) inputs = tokenizer( input_text, padding="max_length", truncation=True, ) present_kp, absent_kp = preprocess_keyphrases( text_ids=inputs["input_ids"], kp_list=samples["extractive_keyphrases"][i] + samples["abstractive_keyphrases"][i], ) keyphrases = present_kp keyphrases += absent_kp target_text = f" {keyphrase_sep_token} ".join(keyphrases) with tokenizer.as_target_tokenizer(): targets = tokenizer( target_text, max_length=40, padding="max_length", truncation=True ) targets["input_ids"] = [ (t if t != tokenizer.pad_token_id else -100) for t in targets["input_ids"] ] for key in inputs.keys(): processed_samples[key].append(inputs[key]) processed_samples["labels"].append(targets["input_ids"]) return processed_samples # Load dataset dataset = load_dataset(dataset_full_name, dataset_subset) # Preprocess dataset tokenized_dataset = dataset.map(preprocess_fuction, batched=True) ``` ### Postprocessing For the post-processing, you will need to split the string based on the keyphrase separator. ```python def extract_keyphrases(examples): return [example.split(keyphrase_sep_token) for example in examples] ``` ## 📝 Evaluation Results Traditional evaluation methods are the precision, recall and F1-score @k,m where k is the number that stands for the first k predicted keyphrases and m for the average amount of predicted keyphrases. In keyphrase generation you also look at F1@O where O stands for the number of ground truth keyphrases. 
The model achieves the following results on the OpenKP test set: Extractive keyphrases | Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | P@O | R@O | F1@O | |:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|:----:|:----:|:----:| | OpenKP Test Set | 0.11 | 0.32 | 0.16 | 0.06 | 0.32 | 0.09 | 0.22 | 0.32 | 0.25 | 0.15 | 0.15 | 0.15 | Abstractive keyphrases | Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | P@O | R@O | F1@O | |:-----------------:|:-----:|:-----:|:-----:|:------:|:-----:|:-------:|:-----:|:-----:|:-----:|:--------:|:--------:|:---------:| | OpenKP Test Set | 0.001 | 0.003 | 0.001 | 0.0004 | 0.004 | 0.0007 | 0.001 | 0.04 | 0.002 | 7.56e-5 | 7.56e-5 | 7.56e-5 | For more information on the evaluation process, you can take a look at the keyphrase extraction evaluation notebook. ## 🚨 Issues Please feel free to start discussions in the Community Tab.
CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_66
f5301ba50903bc3b92ca71585189c27b53bb30f0
2022-05-11T00:48:16.000Z
[ "pytorch", "bert", "transformers" ]
null
false
CEBaB
null
CEBaB/bert-base-uncased.CEBaB.sa.5-class.exclusive.seed_66
48
null
transformers
6,092
Entry not found
danielhou13/longformer-finetuned-news-cogs402
46dc639baa796e9ad30abbfb37135e125582aad6
2022-05-31T03:01:23.000Z
[ "pytorch", "longformer", "text-classification", "transformers" ]
text-classification
false
danielhou13
null
danielhou13/longformer-finetuned-news-cogs402
48
null
transformers
6,093
Entry not found
ibm/roberta-large-vira-intents
f834137cdf775f25d3aa412b97ce579f63c36ffa
2022-06-01T12:06:27.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:ibm/vira-intents", "arxiv:2205.11966", "transformers", "intent detection", "license:other" ]
text-classification
false
ibm
null
ibm/roberta-large-vira-intents
48
null
transformers
6,094
--- language: - en tags: - intent detection license: "other" datasets: - ibm/vira-intents metrics: - accuracy widget: - text: "Should I be concerned about side effects of the vaccine if I'm breastfeeding?" example_title: "Breastfeeding" - text: "Does the vaccine prevent transmission?" example_title: "Transmission" - text: "Will the vaccine make me sterile or infertile?" example_title: "Infertility" --- ## Model Description This model is based on RoBERTa large (Liu, 2019), fine-tuned on a dataset of intent expressions available [here](https://research.ibm.com/haifa/dept/vst/debating_data.shtml) and also on the 🤗 datasets hub [here](https://huggingface.co/datasets/ibm/vira-intents). The model was created as part of the work described in [Benchmark Data and Evaluation Framework for Intent Discovery Around COVID-19 Vaccine Hesitancy](https://arxiv.org/abs/2205.11966). The model is released under the Community Data License Agreement - Sharing - Version 1.0 ([link](https://cdla.dev/sharing-1-0/)). If you use this model, please cite our paper. The official GitHub is [here](https://github.com/IBM/vira-intent-discovery). The script used for training the model is [trainer.py](https://github.com/IBM/vira-intent-discovery/blob/master/trainer.py). ## Training parameters 1. base_model = 'roberta-large' 1. learning_rate=5e-6 1. per_device_train_batch_size=16, 1. per_device_eval_batch_size=16, 1. num_train_epochs=15, 1. load_best_model_at_end=True, 1. save_total_limit=1, 1. save_strategy='epoch', 1. evaluation_strategy='epoch', 1. metric_for_best_model='accuracy', 1. seed=123 ## Data collator DataCollatorWithPadding
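The card documents training but not inference, so a sketch is added below. It is an assumption, not taken from the official repository: it runs the checkpoint as an ordinary text-classification pipeline on one of the widget questions, and the intent label names are whatever the checkpoint config provides.
```python
# Hedged usage sketch: COVID-19 vaccine intent detection with the fine-tuned checkpoint.
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="ibm/roberta-large-vira-intents",
)

# One of the widget examples from the card.
print(intent_classifier("Does the vaccine prevent transmission?"))
```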
KES/caribe-capitalise
7892f7a15512f997e26519d4f487abf2bd94f5be
2022-06-10T22:27:51.000Z
[ "pytorch", "t5", "text2text-generation", "en", "transformers", "sentence capitalization", "license:mit", "autotrain_compatible" ]
text2text-generation
false
KES
null
KES/caribe-capitalise
48
1
transformers
6,095
--- license: mit language: en tags: - sentence capitalization - text2text-generation --- This model utilises the pre-trained T5-base model. It was fine-tuned on a custom dataset for capitalisation of text that includes multiple sentences or questions. Interested in Caribbean Creole? Check out the library [Caribe](https://pypi.org/project/Caribe/) for more info and future updates. ___ # Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("KES/caribe-capitalise") model = AutoModelForSeq2SeqLM.from_pretrained("KES/caribe-capitalise") text = "john is a boy. he is 12 years old. his sister's name is Joy." inputs = tokenizer("text:"+text, truncation=True, return_tensors='pt') output = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True) capitalised_text=tokenizer.batch_decode(output, skip_special_tokens=True) print("".join(capitalised_text)) #Capitalised Output: John is a boy. He is 12 years old. His sister's name is Joy. ``` ___
Tomas23/twitter-roberta-base-mar2022-finetuned-sentiment
cdf42e26f4fc6fcc1e3e8b0ec10402b9f84a7172
2022-06-13T14:23:23.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
Tomas23
null
Tomas23/twitter-roberta-base-mar2022-finetuned-sentiment
48
null
transformers
6,096
Entry not found
emilys/BERTweet-CoNLL
eca14ffff820e794384f4408a8fa6267aa2e1534
2022-06-15T21:19:05.000Z
[ "pytorch", "roberta", "token-classification", "en", "dataset:conll2003", "transformers", "NER", "autotrain_compatible" ]
token-classification
false
emilys
null
emilys/BERTweet-CoNLL
48
null
transformers
6,097
--- language: - en tags: - NER datasets: - conll2003 --- bertweet-base (https://huggingface.co/vinai/bertweet-base) fine-tuned for English named entity recognition on CoNLL-2003, following https://github.com/huggingface/transformers/tree/main/examples/legacy/token-classification
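A usage sketch (not part of the original card) follows. It assumes the uploaded tokenizer handles BERTweet's tweet normalisation and that the label set is the usual CoNLL-2003 PER/LOC/ORG/MISC scheme; `aggregation_strategy="simple"` requires a reasonably recent transformers release.
```python
# Hedged usage sketch: NER over tweet-like text with the fine-tuned checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="emilys/BERTweet-CoNLL",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

print(ner("just landed in New York with the Hugging Face team"))
```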
Anupama/distilbert-base-uncased-finetuned-emotion
1430ab1624afbd5e88ff6561f793c72a0693f466
2022-07-03T13:53:44.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
Anupama
null
Anupama/distilbert-base-uncased-finetuned-emotion
48
null
transformers
6,098
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9219009840141562 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2226 - Accuracy: 0.922 - F1: 0.9219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8435 | 1.0 | 250 | 0.3324 | 0.897 | 0.8930 | | 0.2578 | 2.0 | 500 | 0.2226 | 0.922 | 0.9219 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
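The autogenerated card lacks a usage example, so a minimal sketch is added here under the assumption that the checkpoint exposes the standard text-classification head with the six emotion labels from the dataset.
```python
# Hedged usage sketch: emotion classification with the fine-tuned checkpoint.
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="Anupama/distilbert-base-uncased-finetuned-emotion",
)

# Illustrative input, not from the card.
print(emotion_classifier("I can't wait to see the results of this experiment!"))
```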
postpandas/distilbert-base-uncased-finetuned-emotion
48f718383a5dcdd42161648d6f2052b693d712da
2022-07-14T14:46:49.000Z
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:emotion", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
false
postpandas
null
postpandas/distilbert-base-uncased-finetuned-emotion
48
null
transformers
6,099
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9244103213623817 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2204 - Accuracy: 0.9245 - F1: 0.9244 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8209 | 1.0 | 250 | 0.3154 | 0.91 | 0.9081 | | 0.2531 | 2.0 | 500 | 0.2204 | 0.9245 | 0.9244 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3