Dataset columns (types and value ranges):

| column | type | range / values |
|:--|:--|:--|
| modelId | string | length 4 to 112 |
| sha | string | length 40 |
| lastModified | string | length 24 |
| tags | sequence | list of tag strings |
| pipeline_tag | string (categorical) | 29 values |
| private | bool | 1 class |
| author | string | length 2 to 38 |
| config | null | always null |
| id | string | length 4 to 112 |
| downloads | float64 | 0 to 36.8M |
| likes | float64 | 0 to 712 |
| library_name | string (categorical) | 17 values |
| `__index_level_0__` | int64 | 0 to 38.5k |
| readme | string | length 0 to 186k |
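To make the schema above concrete, the following sketch shows how rows with these columns would typically be loaded and filtered with the `datasets` library. The dataset ID used here is a placeholder for wherever this dump is hosted, not a real identifier.

```python
from datasets import load_dataset

# Placeholder ID: substitute the actual Hub dataset this preview comes from.
ds = load_dataset("some-user/model-cards-dump", split="train")

# Each row carries the columns listed above: modelId, sha, lastModified, tags,
# pipeline_tag, private, author, config, id, downloads, likes, library_name,
# __index_level_0__, and readme.
popular_asr = ds.filter(
    lambda row: row["pipeline_tag"] == "automatic-speech-recognition"
    and row["downloads"] > 1000
)

for row in popular_asr.select(range(min(5, len(popular_asr)))):
    print(row["modelId"], int(row["downloads"]), row["library_name"])
```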
razent/SciFive-base-Pubmed_PMC
e26bb9e7ebd7cbaa196be47e9e1f28348f8aa434
2022-03-20T17:46:59.000Z
[ "pytorch", "tf", "jax", "t5", "text2text-generation", "en", "dataset:pubmed", "dataset:pmc/open_access", "arxiv:2106.03598", "transformers", "token-classification", "text-classification", "question-answering", "text-generation", "autotrain_compatible" ]
text-classification
false
razent
null
razent/SciFive-base-Pubmed_PMC
735
1
transformers
2,000
---
language: 
- en
tags:
- token-classification
- text-classification
- question-answering
- text2text-generation
- text-generation
datasets:
- pubmed
- pmc/open_access
---

# SciFive Pubmed+PMC Base

## Introduction

Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)

Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_

## How to use

For more details, check out [our Github repo](https://github.com/justinphan3110/SciFive).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-base-Pubmed_PMC")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-base-Pubmed_PMC")
model.to("cuda")  # the inputs below are moved to the GPU, so the model must be too

sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ."
text = "ncbi_ner: " + sentence + " </s>"

encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")

outputs = model.generate(
    input_ids=input_ids, attention_mask=attention_masks,
    max_length=256,
    early_stopping=True
)

for output in outputs:
    line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(line)
```
NinaErlacher/distilroberta-base-climate-f-finetuned-squad_v2
cb79290c354e432eca24d2e4726d2dd53c4e575f
2022-06-17T12:07:56.000Z
[ "pytorch", "tensorboard", "roberta", "question-answering", "dataset:squad_v2", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
NinaErlacher
null
NinaErlacher/distilroberta-base-climate-f-finetuned-squad_v2
735
null
transformers
2,001
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilroberta-base-climate-f-finetuned-squad_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-climate-f-finetuned-squad_v2 This model is a fine-tuned version of [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.0795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1205 | 1.0 | 4123 | 1.0338 | | 0.8309 | 2.0 | 8246 | 1.0795 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
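Since the usage sections of this card are left as "More information needed", here is a minimal sketch of how an extractive QA checkpoint like this is typically queried through the `transformers` pipeline; the question and context below are illustrative and not part of the original card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for extractive question answering.
qa = pipeline(
    "question-answering",
    model="NinaErlacher/distilroberta-base-climate-f-finetuned-squad_v2",
)

context = (
    "ClimateBERT's distilroberta-base-climate-f was further fine-tuned on "
    "SQuAD v2 so that it can extract answer spans from a given passage, "
    "or abstain when no answer is present."
)
result = qa(question="What dataset was the model fine-tuned on?", context=context)
print(result["answer"], result["score"])
```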
trueto/medbert-base-chinese
668ca1e929abd0380c7ff831615b154122daf5c1
2021-05-20T08:08:47.000Z
[ "pytorch", "jax", "bert", "pretraining", "transformers" ]
null
false
trueto
null
trueto/medbert-base-chinese
734
2
transformers
2,002
# [medbert](https://github.com/trueto/medbert)

This project open-sources the models associated with the master's thesis "Exploration and Research on the Application of the BERT Model in Chinese Clinical Natural Language Processing".

## Evaluation benchmarks

We built a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER), a Chinese medical question-question matching dataset (CMedQQ), and a Chinese clinical text classification dataset (CCTC).

| **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud (医渡云) |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare (平安医疗) |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |

## Released models

MedBERT and MedAlbert were obtained by pretraining BERT-based and ALBERT-based models on a 650-million-character corpus of Chinese clinical natural language text.

## Performance

Performance of each model under the same experimental environment, with identical training parameters and scripts:

| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
| MedBERT-wwm | **82.60%** | 67.11% | 88.02% | 81.72% |
| MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
| - | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
| MedAlbert-wwm | **81.28%** | **64.12%** | **87.71%** | **80.46%** |

## Citation

```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
```
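The card above does not include loading code. Assuming the checkpoint follows the standard BERT layout used by `transformers` (which its `pytorch`/`jax`/`bert` tags suggest), a minimal sketch for extracting sentence representations looks like this; the example sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModel

# Load the pretrained Chinese clinical BERT checkpoint.
tokenizer = AutoTokenizer.from_pretrained("trueto/medbert-base-chinese")
model = AutoModel.from_pretrained("trueto/medbert-base-chinese")

# Encode a short clinical sentence ("The patient reports a headache for three days.")
inputs = tokenizer("患者主诉头痛三天。", return_tensors="pt")
outputs = model(**inputs)

# Use the [CLS] token embedding as a sentence-level representation.
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)
```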
IDEA-CCNL/Erlangshen-Ubert-110M-Chinese
d0257027b0a2190d9e74a019587ecea958f62772
2022-07-02T13:41:04.000Z
[ "pytorch", "bert", "fill-mask", "zh", "transformers", "NLU", "Sentiment", "Chinese", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
IDEA-CCNL
null
IDEA-CCNL/Erlangshen-Ubert-110M-Chinese
733
null
transformers
2,003
---
language: 
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
- Chinese
inference: false
widget:
- text: "今天心情不好"
---

# Erlangshen-Ubert-110M-Chinese, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert).

We collected 70+ datasets in the Chinese domain for fine-tuning, with a total of 1,065,069 samples. Our model is mainly based on [MacBERT](https://huggingface.co/hfl/chinese-macbert-base).

Ubert is a solution we proposed for the [2022 AIWIN Competition](http://ailab.aiwin.org.cn/competitions/68#results), where it achieved **<font color=#FF0000 >first place on both the A and B leaderboards</font>**, an improvement of 20 percentage points over the officially provided baseline. Ubert can complete not only common extraction tasks such as entity recognition and event extraction, but also classification tasks such as news classification and natural language inference.

**<font color=#FF0000 >More details in our [GitHub repository](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/dev/yangping/fengshen/examples/ubert)</font>**

## Usage

Install `fengshen` (either `pip install fengshen` or from source):

```bash
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
cd Fengshenbang-LM
pip install --editable ./
```

Run the code:

```python
import argparse
from fengshen import UbertPiplines

total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UbertPiplines.piplines_args(total_parser)
args = total_parser.parse_args()
args.pretrained_model_path = "IDEA-CCNL/Erlangshen-Ubert-110M-Chinese"

test_data = [
    {
        "task_type": "抽取任务",        # extraction task
        "subtask_type": "实体识别",     # entity recognition
        "text": "这也让很多业主据此认为,雅清苑是政府公务员挤对了国家的经适房政策。",
        "choices": [
            {"entity_type": "小区名字"},   # residential community name
            {"entity_type": "岗位职责"}    # job responsibilities
        ],
        "id": 0
    }
]

model = UbertPiplines(args)
result = model.predict(test_data)
for line in result:
    print(line)
```

If you find this resource useful, please cite the following in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
cointegrated/rut5-base-absum
c9b878be0a5030d7d08eeae6c832aac1221d9fd1
2021-11-12T10:52:26.000Z
[ "pytorch", "t5", "text2text-generation", "ru", "dataset:IlyaGusev/gazeta", "dataset:csebuetnlp/xlsum", "dataset:mlsum", "dataset:wiki_lingua", "transformers", "russian", "summarization", "license:mit", "autotrain_compatible" ]
summarization
false
cointegrated
null
cointegrated/rut5-base-absum
731
1
transformers
2,004
---
language: ["ru"]
tags:
- russian
- summarization
datasets:
- IlyaGusev/gazeta
- csebuetnlp/xlsum
- mlsum
- wiki_lingua
license: mit
widget:
- text: "Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."
---

This is a model for abstractive Russian summarization, based on [cointegrated/rut5-base-multitask](https://huggingface.co/cointegrated/rut5-base-multitask) and fine-tuned on 4 datasets.

It can be used as follows:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = 'cointegrated/rut5-base-absum'
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model.cuda()
model.eval()


def summarize(
    text, n_words=None, compression=None,
    max_length=1000, num_beams=3, do_sample=False, repetition_penalty=10.0,
    **kwargs
):
    """
    Summarize the text
    The following parameters are mutually exclusive:
    - n_words (int) is an approximate number of words to generate.
    - compression (float) is an approximate length ratio of summary and original text.
    """
    if n_words:
        text = '[{}] '.format(n_words) + text
    elif compression:
        text = '[{0:.1g}] '.format(compression) + text
    x = tokenizer(text, return_tensors='pt', padding=True).to(model.device)
    with torch.inference_mode():
        out = model.generate(
            **x,
            max_length=max_length, num_beams=num_beams,
            do_sample=do_sample, repetition_penalty=repetition_penalty,
            **kwargs
        )
    return tokenizer.decode(out[0], skip_special_tokens=True)


text = """Высота башни составляет 324 метра (1063 фута), примерно такая же высота, как у 81-этажного здания, и самое высокое сооружение в Париже. Его основание квадратно, размером 125 метров (410 футов) с любой стороны. Во время строительства Эйфелева башня превзошла монумент Вашингтона, став самым высоким искусственным сооружением в мире, и этот титул она удерживала в течение 41 года до завершения строительство здания Крайслер в Нью-Йорке в 1930 году. Это первое сооружение которое достигло высоты 300 метров. Из-за добавления вещательной антенны на вершине башни в 1957 году она сейчас выше здания Крайслер на 5,2 метра (17 футов). За исключением передатчиков, Эйфелева башня является второй самой высокой отдельно стоящей структурой во Франции после виадука Мийо."""

print(summarize(text))
# Эйфелева башня достигла высоты 300 метров.  ("The Eiffel Tower reached a height of 300 meters.")
print(summarize(text, n_words=10))
# Французская Эйфелева башня достигла высоты 300 метров.  ("The French Eiffel Tower reached a height of 300 meters.")
```
facebook/wav2vec2-large-xlsr-53-italian
e2760fe9ade861421db16f4548f6c0d48568ae0c
2021-07-06T02:53:33.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "it", "dataset:common_voice", "transformers", "speech", "audio", "license:apache-2.0" ]
automatic-speech-recognition
false
facebook
null
facebook/wav2vec2-large-xlsr-53-italian
731
2
transformers
2,005
--- language: it datasets: - common_voice tags: - speech - audio - automatic-speech-recognition license: apache-2.0 --- ## Evaluation on Common Voice IT Test ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys model_name = "facebook/wav2vec2-large-xlsr-53-italian" device = "cuda" chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605 model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ds = load_dataset("common_voice", "it", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` **Result**: 22.1 %
nvidia/groupvit-gcc-yfcc
fd613a6633497e83b16be6ddba17644151c44218
2022-06-29T07:21:09.000Z
[ "pytorch", "groupvit", "feature-extraction", "arxiv:2202.11094", "transformers", "vision" ]
feature-extraction
false
nvidia
null
nvidia/groupvit-gcc-yfcc
731
null
transformers
2,006
--- tags: - vision --- # Model Card: GroupViT This checkpoint is uploaded by Jiarui Xu. ## Model Details The GroupViT model was proposed in [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. Inspired by [CLIP](clip), GroupViT is a vision-language model that can perform zero-shot semantic segmentation on any given vocabulary categories. ### Model Date June 2022 ### Abstract Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision. ### Documents - [GroupViT Paper](https://arxiv.org/abs/2202.11094) ### Use with Transformers ```python from PIL import Image import requests from transformers import AutoProcessor, GroupViTModel model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc") processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities ``` ## Data The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/groupvit.html#). ### BibTeX entry and citation info ```bibtex @article{xu2022groupvit, author = {Xu, Jiarui and De Mello, Shalini and Liu, Sifei and Byeon, Wonmin and Breuel, Thomas and Kautz, Jan and Wang, Xiaolong}, title = {GroupViT: Semantic Segmentation Emerges from Text Supervision}, journal = {arXiv preprint arXiv:2202.11094}, year = {2022}, } ```
xlm-mlm-enfr-1024
31a5a94400075535209e7c1ebc8b01933543b2df
2022-07-22T08:08:19.000Z
[ "pytorch", "tf", "xlm", "fill-mask", "multilingual", "en", "fr", "arxiv:1901.07291", "arxiv:1910.09700", "transformers", "license:cc-by-nc-4.0", "autotrain_compatible" ]
fill-mask
false
null
null
xlm-mlm-enfr-1024
730
null
transformers
2,007
--- language: - multilingual - en - fr license: cc-by-nc-4.0 --- # xlm-mlm-enfr-1024 # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 6. [Environmental Impact](#environmental-impact) 7. [Technical Specifications](#technical-specifications) 8. [Citation](#citation) 9. [Model Card Authors](#model-card-authors) 10. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-enfr-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-French. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details. ## Model Description - **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291) - **Model type:** Language model - **Language(s) (NLP):** English-French - **License:** CC-BY-NC-4.0 - **Related Models:** [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-ende-1024](https://huggingface.co/xlm-mlm-ende-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024) - **Resources for more information:** - [Associated paper](https://arxiv.org/abs/1901.07291) - [GitHub Repo](https://github.com/facebookresearch/XLM) - [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) # Uses ## Direct Use The model is a language model. The model can be used for masked language modeling. ## Downstream Use To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs. ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. # Training The model developers write: > In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam op- timizer (Kingma and Ba, 2014), a linear warm- up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure. The model developers also write that: > If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data. 
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details. # Evaluation ## Testing Data, Factors & Metrics The model developers evaluated the model on the [WMT'14 English-French](https://huggingface.co/datasets/wmt14) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics. ## Results For xlm-mlm-enfr-1024 results, see Table 1 and Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications The model developers write: > We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details. # Citation **BibTeX:** ```bibtex @article{lample2019cross, title={Cross-lingual language model pretraining}, author={Lample, Guillaume and Conneau, Alexis}, journal={arXiv preprint arXiv:1901.07291}, year={2019} } ``` **APA:** - Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291. # Model Card Authors This model card was written by the team at Hugging Face. # How to Get Started with the Model More information needed. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
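The "How to Get Started with the Model" section above is left as "More information needed". Following the language-embedding pattern described in the linked multilingual-models documentation, a minimal sketch might look like this; it is an illustration rather than the model authors' own recipe.

```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-enfr-1024")

# XLM expects a `langs` tensor carrying the language ID of every token,
# looked up from the tokenizer (here: English).
input_ids = torch.tensor([tokenizer.encode("This is a sentence in English.")])
langs = torch.full_like(input_ids, tokenizer.lang2id["en"])

with torch.no_grad():
    outputs = model(input_ids, langs=langs)
print(outputs[0].shape)  # (batch size, sequence length, vocabulary size)
```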
allenai/hvila-row-layoutlm-finetuned-docbank
c01e12d0b0f90b91e155a89f53b7c7e2b65be943
2021-09-27T22:58:30.000Z
[ "pytorch", "hierarchical_model", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
allenai
null
allenai/hvila-row-layoutlm-finetuned-docbank
730
null
transformers
2,008
Entry not found
patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm
5569ff8b50dc9207760d321264a6dfd18ee4324f
2021-12-10T15:49:13.000Z
[ "pytorch", "tf", "jax", "wav2vec2", "automatic-speech-recognition", "es", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
false
patrickvonplaten
null
patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm
729
7
transformers
2,009
--- language: es datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 --- # Wav2Vec2-Large-XLSR-53-Spanish-With-LM This is a model copy of [Wav2Vec2-Large-XLSR-53-Spanish](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-spanish) that has language model support. This model card can be seen as a demo for the [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) integration with Transformers led by [this PR](https://github.com/huggingface/transformers/pull/14339). The PR explains in-detail how the integration works. In a nutshell: This PR adds a new Wav2Vec2WithLMProcessor class as drop-in replacement for Wav2Vec2Processor. The only change from the existing ASR pipeline will be: ## Changes ```diff import torch from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torchaudio.functional as F model_id = "patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm" sample = next(iter(load_dataset("common_voice", "es", split="test", streaming=True))) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy() model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id) input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits -prediction_ids = torch.argmax(logits, dim=-1) -transcription = processor.batch_decode(prediction_ids) +transcription = processor.batch_decode(logits.numpy()).text # => 'bien y qué regalo vas a abrir primero' ``` **Improvement** This model has been compared on 512 speech samples from the Spanish Common Voice Test set and gives a nice *20 %* performance boost: The results can be reproduced by running *from this model repository*: | Model | WER | CER | | ------------- | ------------- | ------------- | | patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm | **8.44%** | **2.93%** | | jonatasgrosman/wav2vec2-large-xlsr-53-spanish | **10.20%** | **3.24%** | ``` bash run_ngram_wav2vec2.py 1 512 ``` ``` bash run_ngram_wav2vec2.py 0 512 ``` with `run_ngram_wav2vec2.py` being https://huggingface.co/patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm/blob/main/run_ngram_wav2vec2.py
tanapatentlm/patentdeberta_base_spec_1024_pwi
74f06791080deb3b9311f9be366f49af102ddf0a
2022-06-17T06:08:37.000Z
[ "pytorch", "deberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
tanapatentlm
null
tanapatentlm/patentdeberta_base_spec_1024_pwi
729
null
transformers
2,010
Entry not found
Elron/bleurt-large-512
00397b0917e464c5ca1a45db156d0b836cd65e97
2021-12-15T01:57:26.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
false
Elron
null
Elron/bleurt-large-512
728
1
transformers
2,011
## BLEURT

PyTorch version of the original BLEURT models from the ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research.

The model conversion code originated from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224).

## Usage Example

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-large-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-large-512")
model.eval()

references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]

with torch.no_grad():
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()

print(scores)  # tensor([0.9877, 0.0475])
```
jcblaise/electra-tagalog-small-uncased-discriminator
2ca8fe4e593cd03e416e4c91460c92e6fb95bbd1
2021-11-12T03:24:06.000Z
[ "pytorch", "electra", "pretraining", "tl", "transformers", "tagalog", "filipino", "license:gpl-3.0" ]
null
false
jcblaise
null
jcblaise/electra-tagalog-small-uncased-discriminator
728
null
transformers
2,012
--- language: tl tags: - electra - tagalog - filipino license: gpl-3.0 inference: false --- **Deprecation Notice** This model is deprecated. New Filipino Transformer models trained with a much larger corpora are available. Use [`jcblaise/roberta-tagalog-base`](https://huggingface.co/jcblaise/roberta-tagalog-base) or [`jcblaise/roberta-tagalog-large`](https://huggingface.co/jcblaise/roberta-tagalog-large) instead for better performance. --- # ELECTRA Tagalog Small Uncased Discriminator Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This is the discriminator model, which is the main Transformer used for finetuning to downstream tasks. For generation, mask-filling, and retraining, refer to the Generator models. ## Citations All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work: ``` @inproceedings{cruz2021exploiting, title={Exploiting News Article Structure for Automatic Corpus Generation of Entailment Datasets}, author={Cruz, Jan Christian Blaise and Resabal, Jose Kristian and Lin, James and Velasco, Dan John and Cheng, Charibeth}, booktitle={Pacific Rim International Conference on Artificial Intelligence}, pages={86--99}, year={2021}, organization={Springer} } ``` ## Data and Other Resources Data used to train this model as well as other benchmark datasets in Filipino can be found in my website at https://blaisecruz.com ## Contact If you have questions, concerns, or if you just want to chat about NLP and low-resource languages in general, you may reach me through my work email at [email protected]
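The card states that this discriminator checkpoint is meant to be fine-tuned on downstream tasks but shows no code. A minimal fine-tuning setup sketch is given below; the binary classification head and the Tagalog example sentence are illustrative assumptions, not part of the original card.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "jcblaise/electra-tagalog-small-uncased-discriminator"

# The ELECTRA discriminator body is loaded and a fresh classification head is
# attached; num_labels=2 is an illustrative choice for a binary task.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("Magandang araw sa inyong lahat!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 2): untrained head, fine-tune before use
```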
asafaya/bert-mini-arabic
52baeaf85fd27116a23bbf1ab49ecfa5a1a9179b
2021-05-19T11:48:07.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "dataset:oscar", "dataset:wikipedia", "transformers", "autotrain_compatible" ]
fill-mask
false
asafaya
null
asafaya/bert-mini-arabic
725
null
transformers
2,013
---
language: ar
datasets:
- oscar
- wikipedia
---

# Arabic BERT Mini Model

Pretrained BERT Mini language model for Arabic

_If you use this model in your work, please cite this paper:_

```
@inproceedings{safaya-etal-2020-kuisail,
    title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
    author = "Safaya, Ali and Abdullatif, Moutasem and Yuret, Deniz",
    booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
    month = dec,
    year = "2020",
    address = "Barcelona (online)",
    publisher = "International Committee for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
    pages = "2054--2059",
}
```

## Pretraining Corpus

The `arabic-bert-mini` model was pretrained on ~8.2 billion words:

- Arabic version of [OSCAR](https://traces1.inria.fr/oscar/) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)

and other Arabic resources which sum up to ~95GB of text.

__Notes on training data:__

- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Although non-Arabic characters were lowercased as a preprocessing step, there is no cased or uncased version of the model, since Arabic characters do not have upper or lower case.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic; they contain some dialectal Arabic too.

## Pretraining details

- This model was trained using Google BERT's GitHub [repository](https://github.com/google-research/bert) on a single TPU v3-8 provided for free by [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 3M training steps with a batch size of 128, instead of 1M steps with a batch size of 256.

## Load Pretrained Model

You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then use it directly by initializing it like this:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-mini-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-mini-arabic")
```

## Results

For further details on the model's performance or any other queries, please refer to [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT)

## Acknowledgement

Thanks to Google for providing a free TPU for the training process and to Hugging Face for hosting this model on their servers 😊
cmarkea/distilcamembert-base-nli
df95b0550f822191f0b92bada34238e34d74ff16
2022-05-24T15:55:17.000Z
[ "pytorch", "tf", "camembert", "text-classification", "fr", "dataset:flue", "transformers", "zero-shot-classification", "sentence-similarity", "nli", "license:mit" ]
zero-shot-classification
false
cmarkea
null
cmarkea/distilcamembert-base-nli
725
6
transformers
2,014
--- language: fr license: mit tags: - zero-shot-classification - sentence-similarity - nli pipeline_tag: zero-shot-classification widget: - text: "Selon certains physiciens, un univers parallèle, miroir du nôtre ou relevant de ce que l'on appelle la théorie des branes, autoriserait des neutrons à sortir de notre Univers pour y entrer à nouveau. L'idée a été testée une nouvelle fois avec le réacteur nucléaire de l'Institut Laue-Langevin à Grenoble, plus précisément en utilisant le détecteur de l'expérience Stereo initialement conçu pour chasser des particules de matière noire potentielles, les neutrinos stériles." candidate_labels: "politique, science, sport, santé" hypothesis_template: "Ce texte parle de {}." datasets: - flue --- DistilCamemBERT-NLI =================== We present DistilCamemBERT-NLI which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for the Natural Language Inference (NLI) task for the french language, also known as recognizing textual entailment (RTE). This model is constructed on the XNLI dataset which consists to determine whether a premise entails, contradicts or neither entails nor contradicts a hypothesis. This modelization is close to [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) based on [CamemBERT](https://huggingface.co/camembert-base) model. The problem of the modelizations based on CamemBERT is at the scaling moment, for the production phase for example. Indeed, inference cost can be a technological issue especially as in a context of cross-encoding like for this task. To counteract this effect, we propose this modelization which divides the inference time by 2 with the same consumption power thanks to DistilCamemBERT. Dataset ------- The dataset XNLI from [FLUE](https://huggingface.co/datasets/flue) is composed of 392,702 premises with their hypothesis for the train and 5,010 couples for the test. The goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B?) and is a classification task (given two sentences, predict one of three labels). The sentence A is called *premise* and sentence B is called *hypothesis*, then the goal of modelization is determined as follows: $$P(premise=c\in\{contradiction, entailment, neutral\}\vert hypothesis)$$ Evaluation results ------------------ | **class** | **precision (%)** | **f1-score (%)** | **support** | | :----------------: | :---------------: | :--------------: | :---------: | | **global** | 77.70 | 77.45 | 5,010 | | **contradiction** | 78.00 | 79.54 | 1,670 | | **entailment** | 82.90 | 78.87 | 1,670 | | **neutral** | 72.18 | 74.04 | 1,670 | Benchmark --------- We compare the [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) model to 2 other modelizations working on french language. The first one [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) is based on well named [CamemBERT](https://huggingface.co/camembert-base), the french RoBERTa model and the second one [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) based on [mDeBERTav3](https://huggingface.co/microsoft/mdeberta-v3-base) a multilingual model. 
To compare the performances the metrics of accuracy and [MCC (Matthews Correlation Coefficient)](https://en.wikipedia.org/wiki/Phi_coefficient) was used and for the mean inference time measure, an **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used: | **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **51.35** | 77.45 | 66.24 | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 105.0 | 81.72 | 72.67 | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 299.18 | **83.43** | **75.15** | Zero-shot classification ------------------------ The main advantage of such modelization is to create a zero-shot classifier allowing text classification without training. This task can be summarized by: $$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$ For this part, we use 2 datasets, the first one: [allocine](https://huggingface.co/datasets/allocine) used to train the sentiment analysis models. The dataset is composed of 2 classes: "positif" and "négatif" appreciation of movies reviews. Here we use "Ce commentaire est {}." as the hypothesis template and "positif" and "négatif" as candidate labels. | **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **195.54** | 80.59 | 63.71 | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 378.39 | **86.37** | **73.74** | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 520.58 | 84.97 | 70.05 | The second one: [mlsum](https://huggingface.co/datasets/mlsum) used to train the summarization models. We use the articles summary part to predict their topics. In this aim, we aggregate sub-topics and select a few of them. In this case, the hypothesis template used is "C'est un article traitant de {}." and the candidate labels are: "économie", "politique", "sport" and "science". 
| **model** | **time (ms)** | **accuracy (%)** | **MCC (x100)** | | :--------------: | :-----------: | :--------------: | :------------: | | [cmarkea/distilcamembert-base-nli](https://huggingface.co/cmarkea/distilcamembert-base-nli) | **217.77** | **79.30** | **70.55** | | [BaptisteDoyen/camembert-base-xnli](https://huggingface.co/BaptisteDoyen/camembert-base-xnli) | 448.27 | 70.7 | 64.10 | | [MoritzLaurer/mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli) | 591.34 | 64.45 | 58.67 | How to use DistilCamemBERT-NLI ------------------------------ ```python from transformers import pipeline classifier = pipeline( task='zero-shot-classification', model="cmarkea/distilcamembert-base-nli", tokenizer="cmarkea/distilcamembert-base-nli" ) result = classifier ( sequences="Le style très cinéphile de Quentin Tarantino " "se reconnaît entre autres par sa narration postmoderne " "et non linéaire, ses dialogues travaillés souvent " "émaillés de références à la culture populaire, et ses " "scènes hautement esthétiques mais d'une violence " "extrême, inspirées de films d'exploitation, d'arts " "martiaux ou de western spaghetti.", candidate_labels="cinéma, technologie, littérature, politique", hypothesis_template="Ce texte parle de {}." ) result {"labels": ["cinéma", "littérature", "technologie", "politique"], "scores": [0.7164115309715271, 0.12878799438476562, 0.1092301607131958, 0.0455702543258667]} ``` Citation -------- ```bibtex @inproceedings{delestre:hal-03674695, TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}}, AUTHOR = {Delestre, Cyrile and Amar, Abibatou}, URL = {https://hal.archives-ouvertes.fr/hal-03674695}, BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}}, ADDRESS = {Vannes, France}, YEAR = {2022}, MONTH = Jul, KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation}, PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf}, HAL_ID = {hal-03674695}, HAL_VERSION = {v1}, } ```
jonatasgrosman/wav2vec2-xls-r-1b-german
aa70cbb9841c017ce0cb02f00e3006ca8c33d4f1
2022-07-27T23:39:22.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-xls-r-1b-german
724
2
transformers
2,015
--- language: - de license: apache-2.0 tags: - automatic-speech-recognition - de - hf-asr-leaderboard - mozilla-foundation/common_voice_8_0 - robust-speech-event datasets: - mozilla-foundation/common_voice_8_0 model-index: - name: XLS-R Wav2Vec2 German by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8 type: mozilla-foundation/common_voice_8_0 args: de metrics: - name: Test WER type: wer value: 10.95 - name: Test CER type: cer value: 2.72 - name: Test WER (+LM) type: wer value: 8.13 - name: Test CER (+LM) type: cer value: 2.18 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: de metrics: - name: Dev WER type: wer value: 22.68 - name: Dev CER type: cer value: 9.17 - name: Dev WER (+LM) type: wer value: 17.07 - name: Dev CER (+LM) type: cer value: 8.45 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: de metrics: - name: Test WER type: wer value: 19.67 --- # Fine-tuned XLS-R 1B model for speech recognition in German Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on German using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) ## Usage Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-german") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "de" MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-german" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) ``` ## Evaluation Commands 1. 
To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-german --dataset mozilla-foundation/common_voice_8_0 --config de --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-german --dataset speech-recognition-community-v2/dev_data --config de --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr-1b-german, title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {G}erman}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-german}}, year={2022} } ```
bioformers/bioformer-cased-v1.0-ncbi-disease
ae933b056e69818beabffb6bd797921c5d0cbe42
2021-10-19T07:40:17.000Z
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
false
bioformers
null
bioformers/bioformer-cased-v1.0-ncbi-disease
723
null
transformers
2,016
[bioformer-cased-v1.0](https://huggingface.co/bioformers/bioformer-cased-v1.0) fine-tuned on the [NCBI Disease](https://doi.org/10.1016/j.jbi.2013.12.006) dataset for 10 epochs. This fine-tuned model can be used for disease NER.
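Since the card says the model can be used for disease NER but gives no example, here is a minimal sketch using the token-classification pipeline; the input sentence is illustrative.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="bioformers/bioformer-cased-v1.0-ncbi-disease",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

text = "The patient was diagnosed with type 2 diabetes and later developed cystic fibrosis."
for entity in ner(text):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```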
qarib/bert-base-qarib
b4d8b380e40be2e149cf56a8f8a5efa7f246253d
2021-05-20T03:42:19.000Z
[ "pytorch", "jax", "bert", "fill-mask", "ar", "dataset:arabic_billion_words", "dataset:open_subtitles", "dataset:twitter", "arxiv:2102.10684", "transformers", "tf", "QARiB", "qarib", "autotrain_compatible" ]
fill-mask
false
qarib
null
qarib/bert-base-qarib
723
2
transformers
2,017
--- language: ar tags: - pytorch - tf - QARiB - qarib datasets: - arabic_billion_words - open_subtitles - twitter metrics: - f1 widget: - text: " شو عندكم يا [MASK] ." --- # QARiB: QCRI Arabic and Dialectal BERT ## About QARiB QCRI Arabic and Dialectal BERT (QARiB) model, was trained on a collection of ~ 420 Million tweets and ~ 180 Million sentences of text. For the tweets, the data was collected using twitter API and using language filter. `lang:ar`. For the text data, it was a combination from [Arabic GigaWord](url), [Abulkhair Arabic Corpus]() and [OPUS](http://opus.nlpl.eu/). QARiB: Is the Arabic name for "Boat". ## Model and Parameters: - Data size: 14B tokens - Vocabulary: 64k - Iterations: 10M - Number of Layers: 12 ## Training QARiB See details in [Training QARiB](https://github.com/qcri/QARIB/Training_QARiB.md) ## Using QARiB You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. For more details, see [Using QARiB](https://github.com/qcri/QARIB/Using_QARiB.md) ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>>from transformers import pipeline >>>fill_mask = pipeline("fill-mask", model="./models/data60gb_86k") >>> fill_mask("شو عندكم يا [MASK]") [{'sequence': '[CLS] شو عندكم يا عرب [SEP]', 'score': 0.0990147516131401, 'token': 2355, 'token_str': 'عرب'}, {'sequence': '[CLS] شو عندكم يا جماعة [SEP]', 'score': 0.051633741706609726, 'token': 2308, 'token_str': 'جماعة'}, {'sequence': '[CLS] شو عندكم يا شباب [SEP]', 'score': 0.046871256083250046, 'token': 939, 'token_str': 'شباب'}, {'sequence': '[CLS] شو عندكم يا رفاق [SEP]', 'score': 0.03598872944712639, 'token': 7664, 'token_str': 'رفاق'}, {'sequence': '[CLS] شو عندكم يا ناس [SEP]', 'score': 0.031996358186006546, 'token': 271, 'token_str': 'ناس'} ] >>> fill_mask("وقام المدير [MASK]") [ {'sequence': '[CLS] وقام المدير بالعمل [SEP]', 'score': 0.0678194984793663, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقام المدير بذلك [SEP]', 'score': 0.05191086605191231, 'token': 984, 'token_str': 'بذلك'}, {'sequence': '[CLS] وقام المدير بالاتصال [SEP]', 'score': 0.045264165848493576, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقام المدير بعمله [SEP]', 'score': 0.03732728958129883, 'token': 40486, 'token_str': 'بعمله'}, {'sequence': '[CLS] وقام المدير بالامر [SEP]', 'score': 0.0246378555893898, 'token': 29124, 'token_str': 'بالامر'} ] >>> fill_mask("وقامت المديرة [MASK]") [{'sequence': '[CLS] وقامت المديرة بذلك [SEP]', 'score': 0.23992691934108734, 'token': 984, 'token_str': 'بذلك'}, {'sequence': '[CLS] وقامت المديرة بالامر [SEP]', 'score': 0.108805812895298, 'token': 29124, 'token_str': 'بالامر'}, {'sequence': '[CLS] وقامت المديرة بالعمل [SEP]', 'score': 0.06639821827411652, 'token': 4230, 'token_str': 'بالعمل'}, {'sequence': '[CLS] وقامت المديرة بالاتصال [SEP]', 'score': 0.05613093823194504, 'token': 26096, 'token_str': 'بالاتصال'}, {'sequence': '[CLS] وقامت المديرة المديرة [SEP]', 'score': 0.021778125315904617, 'token': 41635, 'token_str': 'المديرة'}] >>> fill_mask("قللي وشفيييك يرحم [MASK]") [{'sequence': '[CLS] قللي وشفيييك يرحم والديك [SEP]', 'score': 0.4152909517288208, 'token': 9650, 'token_str': 'والديك'}, {'sequence': '[CLS] قللي وشفيييك يرحملي [SEP]', 'score': 0.07663793861865997, 'token': 294, 'token_str': '##لي'}, {'sequence': '[CLS] قللي وشفيييك يرحم حالك [SEP]', 
'score': 0.0453166700899601, 'token': 2663, 'token_str': 'حالك'}, {'sequence': '[CLS] قللي وشفيييك يرحم امك [SEP]', 'score': 0.04390475153923035, 'token': 1942, 'token_str': 'امك'}, {'sequence': '[CLS] قللي وشفيييك يرحمونك [SEP]', 'score': 0.027349254116415977, 'token': 3283, 'token_str': '##ونك'}] ``` ## Evaluations: |**Experiment** |**mBERT**|**AraBERT0.1**|**AraBERT1.0**|**ArabicBERT**|**QARiB**| |---------------|---------|--------------|--------------|--------------|---------| |Dialect Identification | 6.06% | 59.92% | 59.85% | 61.70% | **65.21%** | |Emotion Detection | 27.90% | 43.89% | 42.37% | 41.65% | **44.35%** | |Named-Entity Recognition (NER) | 49.38% | 64.97% | **66.63%** | 64.04% | 61.62% | |Offensive Language Detection | 83.14% | 88.07% | 88.97% | 88.19% | **91.94%** | |Sentiment Analysis | 86.61% | 90.80% | **93.58%** | 83.27% | 93.31% | ## Model Weights and Vocab Download From Huggingface site: https://huggingface.co/qarib/bert-base-qarib ## Contacts Ahmed Abdelali, Sabit Hassan, Hamdy Mubarak, Kareem Darwish and Younes Samih ## Reference ``` @article{abdelali2021pretraining, title={Pre-Training BERT on Arabic Tweets: Practical Considerations}, author={Ahmed Abdelali and Sabit Hassan and Hamdy Mubarak and Kareem Darwish and Younes Samih}, year={2021}, eprint={2102.10684}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram
6abd91b5124ece5310d86439128eeee2dc5b6370
2022-06-15T09:31:26.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "it", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "audio", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "speech", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
radiogroup-crits
null
radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram
722
1
transformers
2,018
--- language: - it license: apache-2.0 datasets: - mozilla-foundation/common_voice_8_0 metrics: - wer - cer tags: - audio - automatic-speech-recognition - hf-asr-leaderboard - it - mozilla-foundation/common_voice_8_0 - speech - wav2vec2 model-index: - name: XLS-R Wav2Vec2 Italian by radiogroup crits results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8.0 italian type: mozilla-foundation/common_voice_8_0 args: it metrics: - name: Test WER type: wer value: 9.043 - name: Test CER type: cer value: 2.208 - name: Test WER (+LM) type: wer value: 6.247 - name: Test CER (+LM) type: cer value: 1.673 --- # XLS-R-1B-ITALIAN-DOC4LM-5GRAM ## Language model information Our language model was generated using a dataset of Italian wikipedia articles and manual transcriptions of radio newspapers and television programs. ## Download CommonVoice8.0 dataset for italian language ```python from datasets import load_dataset dataset = load_dataset("mozilla-foundation/common_voice_8_0", "it", use_auth_token=True) ``` ## Evaluation Commands To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`: ```bash python eval.py --model_id radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram --dataset mozilla-foundation/common_voice_8_0 --config it --split test --log_outputs --greedy mv log_mozilla-foundation_common_voice_8_0_it_test_predictions.txt log_mozilla-foundation_common_voice_8_0_it_test_predictions_greedy.txt mv mozilla-foundation_common_voice_8_0_it_test_eval_results.txt mozilla-foundation_common_voice_8_0_it_test_eval_results_greedy.txt ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{crits2022wav2vec2-xls-r-1b-italian-doc4lm-5gram, title={XLS-R Wav2Vec2 Italian by radiogroup crits}, author={Teraoni Prioletti Raffaele, Casagranda Paolo and Russo Francesco}, publisher={Hugging Face}, journal={Hugging Face Hub}, howpublished={\url{https://huggingface.co/radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram}}, year={2022} } ```
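The card provides evaluation commands but no inference snippet. A minimal transcription sketch with the ASR pipeline is shown below; the audio path is a placeholder, and the 5-gram language model is only applied automatically if the repository ships a decoder-equipped processor.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="radiogroup-crits/wav2vec2-xls-r-1b-italian-doc4lm-5gram",
)

# Placeholder path to a 16 kHz mono Italian recording.
result = asr("/path/to/italian_sample.wav", chunk_length_s=30)
print(result["text"])
```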
aneuraz/awesome-align-with-co
777756717e1fa9556e304d4d5db173ee386b9c16
2022-04-29T16:16:12.000Z
[ "pytorch", "bert", "fill-mask", "de", "fr", "en", "ro", "zh", "arxiv:2101.08231", "transformers", "sentence alignment", "license:bsd-3-clause", "autotrain_compatible" ]
fill-mask
false
aneuraz
null
aneuraz/awesome-align-with-co
721
1
transformers
2,019
--- language: - de - fr - en - ro - zh thumbnail: tags: - sentence alignment license: bsd-3-clause --- # AWESOME: Aligning Word Embedding Spaces of Multilingual Encoders This model comes from the following GitHub repository: [https://github.com/neulab/awesome-align](https://github.com/neulab/awesome-align) It corresponds to this paper: [https://arxiv.org/abs/2101.08231](https://arxiv.org/abs/2101.08231) Please cite the original paper if you decide to use the model: ``` @inproceedings{dou2021word, title={Word Alignment by Fine-tuning Embeddings on Parallel Corpora}, author={Dou, Zi-Yi and Neubig, Graham}, booktitle={Conference of the European Chapter of the Association for Computational Linguistics (EACL)}, year={2021} } ``` `awesome-align` is a tool that can extract word alignments from multilingual BERT (mBERT) [Demo](https://colab.research.google.com/drive/1205ubqebM0OsZa1nRgbGJBtitgHqIVv6?usp=sharing) and allows you to fine-tune mBERT on parallel corpora for better alignment quality (see our paper for more details). ## Usage (copied from this [DEMO](https://colab.research.google.com/drive/1205ubqebM0OsZa1nRgbGJBtitgHqIVv6?usp=sharing) ) ```python from transformers import AutoModel, AutoTokenizer import itertools import torch # load model model = AutoModel.from_pretrained("aneuraz/awesome-align-with-co") tokenizer = AutoTokenizer.from_pretrained("aneuraz/awesome-align-with-co") # model parameters align_layer = 8 threshold = 1e-3 # define inputs src = 'awesome-align is awesome !' tgt = '牛对齐 是 牛 !' # pre-processing sent_src, sent_tgt = src.strip().split(), tgt.strip().split() token_src, token_tgt = [tokenizer.tokenize(word) for word in sent_src], [tokenizer.tokenize(word) for word in sent_tgt] wid_src, wid_tgt = [tokenizer.convert_tokens_to_ids(x) for x in token_src], [tokenizer.convert_tokens_to_ids(x) for x in token_tgt] ids_src, ids_tgt = tokenizer.prepare_for_model(list(itertools.chain(*wid_src)), return_tensors='pt', model_max_length=tokenizer.model_max_length, truncation=True)['input_ids'], tokenizer.prepare_for_model(list(itertools.chain(*wid_tgt)), return_tensors='pt', truncation=True, model_max_length=tokenizer.model_max_length)['input_ids'] sub2word_map_src = [] for i, word_list in enumerate(token_src): sub2word_map_src += [i for x in word_list] sub2word_map_tgt = [] for i, word_list in enumerate(token_tgt): sub2word_map_tgt += [i for x in word_list] # alignment align_layer = 8 threshold = 1e-3 model.eval() with torch.no_grad(): out_src = model(ids_src.unsqueeze(0), output_hidden_states=True)[2][align_layer][0, 1:-1] out_tgt = model(ids_tgt.unsqueeze(0), output_hidden_states=True)[2][align_layer][0, 1:-1] dot_prod = torch.matmul(out_src, out_tgt.transpose(-1, -2)) softmax_srctgt = torch.nn.Softmax(dim=-1)(dot_prod) softmax_tgtsrc = torch.nn.Softmax(dim=-2)(dot_prod) softmax_inter = (softmax_srctgt > threshold)*(softmax_tgtsrc > threshold) align_subwords = torch.nonzero(softmax_inter, as_tuple=False) align_words = set() for i, j in align_subwords: align_words.add( (sub2word_map_src[i], sub2word_map_tgt[j]) ) print(align_words) ```
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
4d1932e5966d609bac3ca0683ae9765da39a1764
2021-10-18T09:44:25.000Z
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
false
CAMeL-Lab
null
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
720
1
transformers
2,020
--- language: - ar license: apache-2.0 widget: - text: 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع' --- # CAMeLBERT-DA POS-MSA Model ## Model description **CAMeLBERT-DA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model. For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). ## Intended uses You can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. #### How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa') >>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع' >>> pos(text) [{'entity': 'noun', 'score': 0.9999913, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9992475, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.999919, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99993193, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.99999106, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99998987, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.9999933, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999899, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990565, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99997944, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99938935, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. ## Citation ```bibtex @inproceedings{inoue-etal-2021-interplay, title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models", author = "Inoue, Go and Alhafni, Bashar and Baimukan, Nurpeiis and Bouamor, Houda and Habash, Nizar", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Online)", publisher = "Association for Computational Linguistics", abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. 
We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.", } ```
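Note that the pipeline output shown above is at the WordPiece level (for example `إما` followed by `##رات`). The sketch below is a hedged, illustrative way to merge sub-tokens back into whole words while keeping the tag of the first piece; it is not part of the original model card, and the sample predictions are truncated placeholders taken from the output format shown above.

```python
# Hedged post-processing sketch: merge WordPiece pieces ("##...") back into words,
# keeping the POS tag of the first piece. `predictions` is a list of dicts in the
# format returned by the token-classification pipeline shown in the card.
def merge_subwords(predictions):
    words = []
    for p in predictions:
        token, tag = p["word"], p["entity"]
        if token.startswith("##") and words:
            # Append the piece (without "##") to the previous word, keep its tag.
            prev_word, prev_tag = words[-1]
            words[-1] = (prev_word + token[2:], prev_tag)
        else:
            words.append((token, tag))
    return words

# Truncated placeholder mimicking the card's output format:
sample = [
    {"word": "إما", "entity": "noun"},
    {"word": "##رات", "entity": "noun"},
    {"word": "دولة", "entity": "noun"},
]
print(merge_subwords(sample))  # expected: [('إمارات', 'noun'), ('دولة', 'noun')]
```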
NbAiLab/nb-bert-base-mnli
086be91a73fbf74acb762280730ce369b3b51758
2021-11-17T15:07:03.000Z
[ "pytorch", "jax", "bert", "text-classification", "no", "dataset:mnli", "dataset:multi_nli", "dataset:xnli", "arxiv:1909.00161", "transformers", "nb-bert", "zero-shot-classification", "tensorflow", "norwegian", "license:cc-by-4.0" ]
zero-shot-classification
false
NbAiLab
null
NbAiLab/nb-bert-base-mnli
720
null
transformers
2,021
--- language: no license: cc-by-4.0 thumbnail: https://raw.githubusercontent.com/NBAiLab/notram/master/images/nblogo_2.png pipeline_tag: zero-shot-classification tags: - nb-bert - zero-shot-classification - pytorch - tensorflow - norwegian - bert datasets: - mnli - multi_nli - xnli widget: - example_title: Nyhetsartikkel om FHI text: Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september. candidate_labels: helse, politikk, sport, religion --- **Release 1.0** (March 11, 2021) # NB-Bert base model finetuned on Norwegian machine translated MNLI ## Description The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible. [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out if a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport"). When the model is finetuned on the 400k-example MNLI task, it is in many cases able to solve such classification tasks. There is no MNLI dataset of this size in Norwegian, so we trained the model on a machine-translated version of the original MNLI set. ## Testing the model For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb) ## Hugging Face zero-shot-classification pipeline The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one. ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="NbAiLab/nb-bert-base-mnli") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.' candidate_labels = ['politikk', 'helse', 'sport', 'religion'] hypothesis_template = 'Dette eksempelet er {}.' classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True) # {'labels': ['helse', 'politikk', 'sport', 'religion'], # 'scores': [0.4210019111633301, 0.0674605593085289, 0.000840459018945694, 0.0007541406666859984], # 'sequence': 'Folkehelseinstituttets mest optimistiske anslag er at alle over 18 år er ferdigvaksinert innen midten av september.'} ``` ## More information For more information on the model, see https://github.com/NBAiLab/notram Here you will also find a Colab notebook explaining in more detail how to use the zero-shot-classification pipeline.
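For readers who want to see what the zero-shot pipeline does under the hood, here is a hedged sketch that scores a single premise/hypothesis pair with the NLI head directly. The label-to-index mapping is not documented in the card, so it is read from `model.config.label2id` instead of being hard-coded; the premise and hypothesis reuse the card's example.

```python
# Hedged sketch of the underlying NLI scoring; not taken from the original card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "NbAiLab/nb-bert-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september."
hypothesis = "Dette eksempelet er helse."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
# Label names/ids come from the checkpoint config rather than being assumed.
for label, idx in model.config.label2id.items():
    print(f"{label}: {probs[idx].item():.3f}")
```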
bashar-talafha/multi-dialect-bert-base-arabic
f84ad96a07fa1aa6ba176e6e1cea85c4105b663f
2021-05-19T12:08:22.000Z
[ "pytorch", "jax", "bert", "fill-mask", "ar", "dataset:nadi", "arxiv:2007.05612", "transformers", "autotrain_compatible" ]
fill-mask
false
bashar-talafha
null
bashar-talafha/multi-dialect-bert-base-arabic
720
3
transformers
2,022
--- language: ar thumbnail: https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png datasets: - nadi --- # Multi-dialect-Arabic-BERT This is a repository of Multi-dialect Arabic BERT model. By [Mawdoo3-AI](https://ai.mawdoo3.com/). <p align="center"> <br> <img src="https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png" alt="Background reference: http://www.qfi.org/wp-content/uploads/2018/02/Qfi_Infographic_Mother-Language_Final.pdf" width="500"/> <br> <p> ### About our Multi-dialect-Arabic-BERT model Instead of training the Multi-dialect Arabic BERT model from scratch, we initialized the weights of the model using [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT) and trained it on 10M arabic tweets from the unlabled data of [The Nuanced Arabic Dialect Identification (NADI) shared task](https://sites.google.com/view/nadi-shared-task). ### To cite this work ``` @misc{talafha2020multidialect, title={Multi-Dialect Arabic BERT for Country-Level Dialect Identification}, author={Bashar Talafha and Mohammad Ali and Muhy Eddin Za'ter and Haitham Seelawi and Ibraheem Tuffaha and Mostafa Samir and Wael Farhan and Hussein T. Al-Natsheh}, year={2020}, eprint={2007.05612}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Usage The model weights can be loaded using `transformers` library by HuggingFace. ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic") model = AutoModel.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic") ``` Example using `pipeline`: ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="bashar-talafha/multi-dialect-bert-base-arabic ", tokenizer="bashar-talafha/multi-dialect-bert-base-arabic " ) fill_mask(" سافر الرحالة من مطار [MASK] ") ``` ``` [{'sequence': '[CLS] سافر الرحالة من مطار الكويت [SEP]', 'score': 0.08296813815832138, 'token': 3226}, {'sequence': '[CLS] سافر الرحالة من مطار دبي [SEP]', 'score': 0.05123933032155037, 'token': 4747}, {'sequence': '[CLS] سافر الرحالة من مطار مسقط [SEP]', 'score': 0.046838656067848206, 'token': 13205}, {'sequence': '[CLS] سافر الرحالة من مطار القاهرة [SEP]', 'score': 0.03234650194644928, 'token': 4003}, {'sequence': '[CLS] سافر الرحالة من مطار الرياض [SEP]', 'score': 0.02606341242790222, 'token': 2200}] ``` ### Repository Please check the [original repository](https://github.com/mawdoo3/Multi-dialect-Arabic-BERT) for more information.
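The card loads `AutoModel` but never runs a forward pass, so here is a hedged sketch that extracts a mean-pooled sentence embedding for one of the example sentences. This is purely illustrative; it is not necessarily how the NADI dialect-identification system used the encoder.

```python
# Hedged sketch: compute a mean-pooled sentence embedding from the encoder output.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "bashar-talafha/multi-dialect-bert-base-arabic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("سافر الرحالة من مطار الكويت", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state       # (1, seq_len, 768)

mask = inputs["attention_mask"].unsqueeze(-1)        # (1, seq_len, 1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (1, 768)
print(embedding.shape)
```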
laboro-ai/distilbert-base-japanese
9072f3bbae926fa5483f12909f5d107c7941ced1
2020-12-18T03:09:19.000Z
[ "pytorch", "distilbert", "ja", "transformers", "license:cc-by-nc-4.0" ]
null
false
laboro-ai
null
laboro-ai/distilbert-base-japanese
720
1
transformers
2,023
--- language: ja tags: - distilbert license: cc-by-nc-4.0 ---
imthanhlv/gpt2news
142f41d2b8c3b9fdfaf3f57eefa0035862edd5c7
2022-01-01T18:14:53.000Z
[ "pytorch", "jax", "gpt2", "text-generation", "vi", "transformers", "gpt" ]
text-generation
false
imthanhlv
null
imthanhlv/gpt2news
718
1
transformers
2,024
--- language: vi tags: - gpt widget: - text: "Hôm qua những nhà khoa học Mỹ đã phát hiện ra loài cá lợn" --- ### GPT 2 News **Update 02 Jan 2022**: Fixed mismatch tokenizer and model.wte size ### BibTex ``` @article{thanh21gpt2news, author = {Thanh V. Le}, title = {Pretrained GPT-2 on Vietnamese news}, journal = {https://huggingface.co/imthanhlv/gpt2news}, year = {2021}, } ```
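The card has no usage snippet, so here is a hedged generation sketch; the prompt is the widget example from the card, and the sampling settings are illustrative defaults rather than the author's recommended values.

```python
# Hedged usage sketch; sampling parameters are illustrative, not tuned.
from transformers import pipeline

generator = pipeline("text-generation", model="imthanhlv/gpt2news")
output = generator(
    "Hôm qua những nhà khoa học Mỹ đã phát hiện ra loài cá lợn",
    max_length=60,
    do_sample=True,
    top_p=0.9,
)
print(output[0]["generated_text"])
```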
howey/roberta-large-qnli
aaa0c066fc55c5768a4c30d7cb72dc8595c839a2
2021-06-03T14:12:46.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
howey
null
howey/roberta-large-qnli
717
null
transformers
2,025
Entry not found
howey/roberta-large-qqp
a4c6d6e56f6099795457ca305901fb8a79b0a180
2021-06-04T06:22:59.000Z
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
false
howey
null
howey/roberta-large-qqp
717
null
transformers
2,026
Entry not found
sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned
63cf16f26e9be7a0ef793e10fe54900c75111046
2022-06-15T22:19:18.000Z
[ "pytorch", "tf", "distilbert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned
715
1
sentence-transformers
2,027
--- pipeline_tag: sentence-similarity license: apache-2.0 tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned') model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
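Since the card describes the embeddings as usable for semantic search, here is a short, hedged sketch that ranks German passages against an English query with cosine similarity; the query and passages are illustrative and not from the original card.

```python
# Hedged semantic-search sketch; example texts are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer(
    "sentence-transformers/msmarco-distilbert-multilingual-en-de-v2-tmp-lng-aligned"
)

query = "How many people live in Berlin?"
passages = [
    "Berlin hat rund 3,7 Millionen Einwohner.",
    "Der Rhein ist ein Fluss in Deutschland.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_emb = model.encode(passages, convert_to_tensor=True)

# util.cos_sim is available in recent sentence-transformers releases
# (older versions expose the same function as util.pytorch_cos_sim).
scores = util.cos_sim(query_emb, passage_emb)[0]
for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```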
Helsinki-NLP/opus-mt-sla-en
ad888a10d15c0cbe1e45b94c18e260fdc19035f7
2020-08-21T14:42:49.000Z
[ "pytorch", "marian", "text2text-generation", "be", "hr", "mk", "cs", "ru", "pl", "bg", "uk", "sl", "sla", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-sla-en
714
0
transformers
2,028
--- language: - be - hr - mk - cs - ru - pl - bg - uk - sl - sla - en tags: - translation license: apache-2.0 --- ### sla-eng * source group: Slavic languages * target group: English * OPUS readme: [sla-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md) * model: transformer * source language(s): bel bel_Latn bos_Latn bul bul_Latn ces csb_Latn dsb hrv hsb mkd orv_Cyrl pol rue rus slv srp_Cyrl srp_Latn ukr * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-ceseng.ces.eng | 26.7 | 0.542 | | newstest2009-ceseng.ces.eng | 25.2 | 0.534 | | newstest2010-ceseng.ces.eng | 25.9 | 0.545 | | newstest2011-ceseng.ces.eng | 26.8 | 0.544 | | newstest2012-ceseng.ces.eng | 25.6 | 0.536 | | newstest2012-ruseng.rus.eng | 32.5 | 0.588 | | newstest2013-ceseng.ces.eng | 28.8 | 0.556 | | newstest2013-ruseng.rus.eng | 26.4 | 0.532 | | newstest2014-csen-ceseng.ces.eng | 31.4 | 0.591 | | newstest2014-ruen-ruseng.rus.eng | 29.6 | 0.576 | | newstest2015-encs-ceseng.ces.eng | 28.2 | 0.545 | | newstest2015-enru-ruseng.rus.eng | 28.1 | 0.551 | | newstest2016-encs-ceseng.ces.eng | 30.0 | 0.567 | | newstest2016-enru-ruseng.rus.eng | 27.4 | 0.548 | | newstest2017-encs-ceseng.ces.eng | 26.5 | 0.537 | | newstest2017-enru-ruseng.rus.eng | 31.0 | 0.574 | | newstest2018-encs-ceseng.ces.eng | 27.9 | 0.548 | | newstest2018-enru-ruseng.rus.eng | 26.8 | 0.545 | | newstest2019-ruen-ruseng.rus.eng | 29.1 | 0.562 | | Tatoeba-test.bel-eng.bel.eng | 42.5 | 0.609 | | Tatoeba-test.bul-eng.bul.eng | 55.4 | 0.697 | | Tatoeba-test.ces-eng.ces.eng | 53.1 | 0.688 | | Tatoeba-test.csb-eng.csb.eng | 23.1 | 0.446 | | Tatoeba-test.dsb-eng.dsb.eng | 31.1 | 0.467 | | Tatoeba-test.hbs-eng.hbs.eng | 56.1 | 0.702 | | Tatoeba-test.hsb-eng.hsb.eng | 46.2 | 0.597 | | Tatoeba-test.mkd-eng.mkd.eng | 54.5 | 0.680 | | Tatoeba-test.multi.eng | 53.2 | 0.683 | | Tatoeba-test.orv-eng.orv.eng | 12.1 | 0.292 | | Tatoeba-test.pol-eng.pol.eng | 51.1 | 0.671 | | Tatoeba-test.rue-eng.rue.eng | 19.6 | 0.389 | | Tatoeba-test.rus-eng.rus.eng | 54.1 | 0.686 | | Tatoeba-test.slv-eng.slv.eng | 43.4 | 0.610 | | Tatoeba-test.ukr-eng.ukr.eng | 53.8 | 0.685 | ### System Info: - hf_name: sla-eng - source_languages: sla - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en'] - src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt - 
src_alpha3: sla - tgt_alpha3: eng - short_pair: sla-en - chrF2_score: 0.6829999999999999 - bleu: 53.2 - brevity_penalty: 0.9740000000000001 - ref_len: 70897.0 - src_name: Slavic languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: sla - tgt_alpha2: en - prefer_old: False - long_pair: sla-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
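The card lists benchmark scores but no inference snippet, so here is a hedged usage sketch with the Marian classes from transformers; the Czech example sentence is illustrative. Since the target side is English only, no target-language token is needed.

```python
# Hedged usage sketch for the Slavic-to-English model; example sentence is illustrative.
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-sla-en"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

src = ["Praha je hlavní město České republiky."]  # Czech source sentence
batch = tokenizer(src, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```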
deepmind/vision-perceiver-learned
e3d8bac358a9994ce2f0887fad97855ab6e14691
2021-12-13T09:25:29.000Z
[ "pytorch", "perceiver", "image-classification", "dataset:imagenet", "arxiv:2107.14795", "transformers", "license:apache-2.0" ]
image-classification
false
deepmind
null
deepmind/vision-perceiver-learned
714
3
transformers
2,029
--- license: apache-2.0 tags: datasets: - imagenet --- # Perceiver IO for vision (learned position embeddings) Perceiver IO model pre-trained on ImageNet (14 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver). Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs. To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For image classification, the output is a tensor containing the logits, of shape (batch_size, num_labels). <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/> <small> Perceiver IO architecture.</small> As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model directly on raw pixel values, rather than on patches as is done in ViT. This particular model only adds learned 1D position embeddings to the pixel values, hence it is given no privileged information about the 2D structure of images. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by replacing the classification decoder. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other fine-tuned versions on a task that may interest you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import PerceiverFeatureExtractor, PerceiverForImageClassificationLearned import requests from PIL import Image feature_extractor = PerceiverFeatureExtractor.from_pretrained("deepmind/vision-perceiver-learned") model = PerceiverForImageClassificationLearned.from_pretrained("deepmind/vision-perceiver-learned") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # prepare input encoding = feature_extractor(image, return_tensors="pt") inputs = encoding.pixel_values # forward pass outputs = model(inputs) logits = outputs.logits print("Predicted class:", model.config.id2label[logits.argmax(-1).item()]) >>> should print Predicted class: tabby, tabby cat ``` ## Training data This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million images and 1k classes. ## Training procedure ### Preprocessing Images are center cropped and resized to a resolution of 224x224 and normalized across the RGB channels. 
Note that data augmentation was used during pre-training, as explained in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). ### Pretraining Hyperparameter details can be found in Appendix H of the [paper](https://arxiv.org/abs/2107.14795). ## Evaluation results This model is able to achieve a top-1 accuracy of 72.7 on ImageNet-1k, despite having no privileged information about the 2D structure of images. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
Helsinki-NLP/opus-mt-en-id
ed4f67aeeb72d0f4d672d0bf78b89da24ab7f68b
2021-09-09T21:36:08.000Z
[ "pytorch", "marian", "text2text-generation", "en", "id", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-id
713
null
transformers
2,030
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-id * source languages: en * target languages: id * OPUS readme: [en-id](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-id/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-id/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.en.id | 38.3 | 0.636 |
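As with the other OPUS-MT cards, there is no usage snippet; the following is a hedged one-liner with the translation pipeline, using an illustrative English sentence.

```python
# Hedged usage sketch; the example sentence is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-id")
print(translator("I would like a cup of coffee, please.")[0]["translation_text"])
```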
nfliu/scibert_basevocab_uncased
dea4d1eba23b45c9632dc06523eec6461216bdca
2021-05-20T01:39:31.000Z
[ "pytorch", "jax", "bert", "transformers" ]
null
false
nfliu
null
nfliu/scibert_basevocab_uncased
711
null
transformers
2,031
Entry not found
superb/hubert-base-superb-er
bac0e14e92f7f9fd56671c5060e572e883cad667
2021-11-04T16:03:24.000Z
[ "pytorch", "hubert", "audio-classification", "en", "dataset:superb", "arxiv:2105.01051", "transformers", "speech", "audio", "license:apache-2.0" ]
audio-classification
false
superb
null
superb/hubert-base-superb-er
709
1
transformers
2,032
--- language: en datasets: - superb tags: - speech - audio - hubert - audio-classification license: apache-2.0 widget: - example_title: IEMOCAP clip "happy" src: https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro03_F013.wav - example_title: IEMOCAP clip "neutral" src: https://cdn-media.huggingface.co/speech_samples/IEMOCAP_Ses01F_impro04_F000.wav --- # Hubert-Base for Emotion Recognition ## Model description This is a ported version of [S3PRL's Hubert for the SUPERB Emotion Recognition task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/emotion). The base model is [hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960), which is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051) ## Task and dataset description Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset [IEMOCAP](https://sail.usc.edu/iemocap/) is adopted, and we follow the conventional evaluation protocol: we drop the unbalanced emotion classes to leave the final four classes with a similar amount of data points and cross-validate on five folds of the standard splits. For the original model's training and evaluation instructions refer to the [S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition). ## Usage examples You can use the model via the Audio Classification pipeline: ```python from datasets import load_dataset from transformers import pipeline dataset = load_dataset("anton-l/superb_demo", "er", split="session1") classifier = pipeline("audio-classification", model="superb/hubert-base-superb-er") labels = classifier(dataset[0]["file"], top_k=5) ``` Or use the model directly: ```python import torch import librosa from datasets import load_dataset from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor def map_to_array(example): speech, _ = librosa.load(example["file"], sr=16000, mono=True) example["speech"] = speech return example # load a demo dataset and read audio files dataset = load_dataset("anton-l/superb_demo", "er", split="session1") dataset = dataset.map(map_to_array) model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-er") feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-base-superb-er") # compute attention masks and normalize the waveform if needed inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt") logits = model(**inputs).logits predicted_ids = torch.argmax(logits, dim=-1) labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()] ``` ## Eval results The evaluation metric is accuracy. | | **s3prl** | **transformers** | |--------|-----------|------------------| |**session1**| `0.6492` | `0.6359` | ### BibTeX entry and citation info ```bibtex @article{yang2021superb, title={SUPERB: Speech processing Universal PERformance Benchmark}, author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others}, journal={arXiv preprint arXiv:2105.01051}, year={2021} } ```
IDEA-CCNL/Wenzhong-GPT2-110M
2beda4458b97e07b4810da6367aa56d7f7d8744e
2022-05-30T07:16:29.000Z
[ "pytorch", "gpt2", "text-generation", "zh", "transformers", "generate", "license:apache-2.0" ]
text-generation
false
IDEA-CCNL
null
IDEA-CCNL/Wenzhong-GPT2-110M
709
2
transformers
2,033
--- language: - zh inference: parameters: temperature: 0.7 top_p: 0.6 repetition_penalty: 1.1 max_new_tokens: 128 num_return_sequences: 3 do_sample: true license: apache-2.0 tags: - generate - gpt2 widget: - 北京是中国的 - 西湖的景色 --- # Wenzhong-GPT2-110M model (Chinese), part of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). Wenzhong-GPT2-110M is one of the smaller models in the Wenzhong series; it is the base-sized version of GPT-2. ## Usage ### Load the model ```python from transformers import GPT2Tokenizer, GPT2LMHeadModel hf_model_path = 'IDEA-CCNL/Wenzhong-GPT2-110M' tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path) model = GPT2LMHeadModel.from_pretrained(hf_model_path) ``` ### Generation ```python question = "北京是中国的" inputs = tokenizer(question, return_tensors='pt') generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_length=150, # max_new_tokens=80, do_sample=True, top_p = 0.6, # num_beams=5, eos_token_id=50256, pad_token_id=0, num_return_sequences = 5) for idx, sentence in enumerate(generation_output.sequences): print('next sentence %d:\n'%idx, tokenizer.decode(sentence).split('<|endoftext|>')[0]) print('*'*40) ``` ## Citation If you find this resource useful, please cite the following repository in your paper. ``` @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
rohanrajpal/bert-base-codemixed-uncased-sentiment
78474f5ba8a100d6a40d89dff8ef12be0918c74a
2021-05-20T04:32:54.000Z
[ "pytorch", "tf", "jax", "bert", "text-classification", "hi", "en", "dataset:SAIL 2017", "transformers", "codemix" ]
text-classification
false
rohanrajpal
null
rohanrajpal/bert-base-codemixed-uncased-sentiment
708
null
transformers
2,034
--- language: - hi - en tags: - hi - en - codemix datasets: - SAIL 2017 --- # bert-base-codemixed-uncased-sentiment ## Model description I took the bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the SAIL 2017 dataset. ## Intended uses & limitations #### How to use ```python # Sample code coming soon; a hedged usage sketch is appended below this card. ``` #### Limitations and bias More information needed. ## Training data I fine-tuned this [pretrained model](https://huggingface.co/bert-base-multilingual-cased) on the SAIL 2017 dataset ([download link](http://amitavadas.com/SAIL/Data/SAIL_2017.zip)). ## Training procedure No preprocessing. ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{khanuja-etal-2020-gluecos, title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}", author = "Khanuja, Simran and Dandapat, Sandipan and Srinivasan, Anirudh and Sitaram, Sunayana and Choudhury, Monojit", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.329", pages = "3575--3585" } ```
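As referenced in the card's usage section, here is a hedged sketch for running the classifier. The label mapping is not documented in the card, so it is printed from the config rather than assumed, and the code-mixed example sentence is a made-up placeholder.

```python
# Hedged usage sketch; inspect model.config.id2label to interpret the outputs.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "rohanrajpal/bert-base-codemixed-uncased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
print(model.config.id2label)  # label meanings are not documented in the card

classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("yeh movie toh bahut acchi thi!"))  # illustrative Hinglish example
```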
mrm8488/CodeBERTaPy
ef04d7213fb852b414cc248fcb5e6a54ffbc5521
2021-05-20T18:01:23.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "code", "arxiv:1909.09436", "transformers", "autotrain_compatible" ]
fill-mask
false
mrm8488
null
mrm8488/CodeBERTaPy
707
2
transformers
2,035
--- language: code thumbnail: --- # CodeBERTaPy CodeBERTaPy is a RoBERTa-like model trained on the [CodeSearchNet](https://github.blog/2019-09-26-introducing-the-codesearchnet-challenge/) dataset from GitHub for `python` by [Manuel Romero](https://twitter.com/mrm8488) The **tokenizer** is a Byte-level BPE tokenizer trained on the corpus using Hugging Face `tokenizers`. Because it is trained on a corpus of code (vs. natural language), it encodes the corpus efficiently (the sequences are between 33% to 50% shorter, compared to the same corpus tokenized by gpt2/roberta). The (small) **model** is a 6-layer, 84M parameters, RoBERTa-like Transformer model – that’s the same number of layers & heads as DistilBERT – initialized from the default initialization settings and trained from scratch on the full `python` corpus for 4 epochs. ## Quick start: masked language modeling prediction ```python PYTHON_CODE = """ fruits = ['apples', 'bananas', 'oranges'] for idx, <mask> in enumerate(fruits): print("index is %d and value is %s" % (idx, val)) """.lstrip() ``` ### Does the model know how to complete simple Python code? ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="mrm8488/CodeBERTaPy", tokenizer="mrm8488/CodeBERTaPy" ) fill_mask(PYTHON_CODE) ## Top 5 predictions: 'val' # prob 0.980728805065155 'value' 'idx' ',val' '_' ``` ### Yes! That was easy 🎉 Let's try with another Flask like example ```python PYTHON_CODE2 = """ @app.route('/<name>') def hello_name(name): return "Hello {}!".format(<mask>) if __name__ == '__main__': app.run() """.lstrip() fill_mask(PYTHON_CODE2) ## Top 5 predictions: 'name' # prob 0.9961813688278198 ' name' 'url' 'description' 'self' ``` ### Yeah! It works 🎉 Let's try with another Tensorflow/Keras like example ```python PYTHON_CODE3=""" model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.<mask>(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) """.lstrip() fill_mask(PYTHON_CODE3) ## Top 5 predictions: 'Dense' # prob 0.4482928514480591 'relu' 'Flatten' 'Activation' 'Conv' ``` > Great! 🎉 ## This work is heavily inspired on [CodeBERTa](https://github.com/huggingface/transformers/blob/master/model_cards/huggingface/CodeBERTa-small-v1/README.md) by huggingface team <br> ## CodeSearchNet citation <details> ```bibtex @article{husain_codesearchnet_2019, title = {{CodeSearchNet} {Challenge}: {Evaluating} the {State} of {Semantic} {Code} {Search}}, shorttitle = {{CodeSearchNet} {Challenge}}, url = {http://arxiv.org/abs/1909.09436}, urldate = {2020-03-12}, journal = {arXiv:1909.09436 [cs, stat]}, author = {Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc}, month = sep, year = {2019}, note = {arXiv: 1909.09436}, } ``` </details> > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
redrussianarmy/gpt2-turkish-cased
d4f4bd09a081f08a61fd58efd6e5fd8c5ae60ebf
2021-05-23T12:12:42.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "tr", "transformers", "turkish", "gpt2-tr", "gpt2-turkish" ]
text-generation
false
redrussianarmy
null
redrussianarmy/gpt2-turkish-cased
707
2
transformers
2,036
--- language: "tr" tags: - turkish - tr - gpt2-tr - gpt2-turkish --- # 🇹🇷 Turkish GPT-2 Model In this repository I release GPT-2 model, that was trained on various texts for Turkish. The model is meant to be an entry point for fine-tuning on other texts. ## Training corpora I used a Turkish corpora that is taken from oscar-corpus. It was possible to create byte-level BPE with Tokenizers library of Huggingface. With the Tokenizers library, I created a 52K byte-level BPE vocab based on the training corpora. After creating the vocab, I could train the GPT-2 for Turkish on two 2080TI over the complete training corpus (five epochs). Logs during training: https://tensorboard.dev/experiment/3AWKv8bBTaqcqZP5frtGkw/#scalars ## Model weights Both PyTorch and Tensorflow compatible weights are available. | Model | Downloads | --------------------------------- | --------------------------------------------------------------------------------------------------------------- | `redrussianarmy/gpt2-turkish-cased` | [`config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/config.json) • [`merges.txt`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/merges.txt) • [`pytorch_model.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/pytorch_model.bin) • [`special_tokens_map.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/special_tokens_map.json) • [`tf_model.h5`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/tf_model.h5) • [`tokenizer_config.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/tokenizer_config.json) • [`traning_args.bin`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/training_args.bin) • [`vocab.json`](https://huggingface.co/redrussianarmy/gpt2-turkish-cased/resolve/main/vocab.json) ## Using the model The model itself can be used in this way: ``` python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("redrussianarmy/gpt2-turkish-cased") model = AutoModelWithLMHead.from_pretrained("redrussianarmy/gpt2-turkish-cased") ``` Here's an example that shows how to use the great Transformers Pipelines for generating text: ``` python from transformers import pipeline pipe = pipeline('text-generation', model="redrussianarmy/gpt2-turkish-cased", tokenizer="redrussianarmy/gpt2-turkish-cased", config={'max_length':800}) text = pipe("Akşamüstü yolda ilerlerken, ")[0]["generated_text"] print(text) ``` ### How to clone the model repo? ``` git lfs install git clone https://huggingface.co/redrussianarmy/gpt2-turkish-cased ``` ## Contact (Bugs, Feedback, Contribution and more) For questions about the GPT2-Turkish model, just open an issue [here](https://github.com/redrussianarmy/gpt2-turkish/issues) 🤗
tals/roberta_python
f42b279d6af7dc85ab2cceb2f7f54b624326b547
2022-06-07T01:48:03.000Z
[ "pytorch", "roberta", "fill-mask", "python", "dataset:code_search_net", "dataset:Fraser/python-lines", "arxiv:2106.05784", "transformers", "code", "masked-lm", "autotrain_compatible" ]
fill-mask
false
tals
null
tals/roberta_python
706
null
transformers
2,037
--- language: python datasets: - code_search_net - Fraser/python-lines tags: - python - code - masked-lm widget: - text: "assert 6 == sum([i for i in range(<mask>)])" --- # roberta_python # Details This is a RoBERTa-base model trained on the Python part of [CodeSearchNet](https://github.com/github/CodeSearchNet); it reached a dev perplexity of 3.296. This model was used for the Programming Puzzles enumerative solver baseline detailed in the [Programming Puzzles paper](https://arxiv.org/abs/2106.05784). See also the [Python Programming Puzzles (P3) Repository](https://github.com/microsoft/PythonProgrammingPuzzles) for more details. # Usage You can either load the model and further fine-tune it for a target task (as done for the puzzle solver), or you can experiment with mask-filling directly with this model as in the following example: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline tokenizer = AutoTokenizer.from_pretrained("tals/roberta_python") model = AutoModelWithLMHead.from_pretrained("tals/roberta_python") demo = pipeline("fill-mask", model=model, tokenizer=tokenizer) code = """sum= 0 for i in range(<mask>): sum += i assert sum == 6 """ demo(code) ``` # BibTeX entry and citation info ```bibtex @inproceedings{ schuster2021programming, title={Programming Puzzles}, author={Tal Schuster and Ashwin Kalyan and Alex Polozov and Adam Tauman Kalai}, booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)}, year={2021}, url={https://openreview.net/forum?id=fe_hCc4RBrg} } ```
vinvino02/glpn-kitti
1284de728759c071a2077a77f86a6d90e960ec91
2022-04-14T11:52:40.000Z
[ "pytorch", "glpn", "arxiv:2201.07436", "transformers", "vision", "depth-estimation", "license:apache-2.0" ]
null
false
vinvino02
null
vinvino02/glpn-kitti
706
null
transformers
2,038
--- license: apache-2.0 tags: - vision - depth-estimation widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # GLPN fine-tuned on KITTI Global-Local Path Networks (GLPN) model trained on KITTI for monocular depth estimation. It was introduced in the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Kim et al. and first released in [this repository](https://github.com/vinvino02/GLPDepth). Disclaimer: The team releasing GLPN did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GLPN uses SegFormer as backbone and adds a lightweight head on top for depth estimation. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg) ## Intended uses & limitations You can use the raw model for monocular depth estimation. See the [model hub](https://huggingface.co/models?search=glpn) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import GLPNFeatureExtractor, GLPNForDepthEstimation import torch import numpy as np from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = GLPNFeatureExtractor.from_pretrained("vinvino02/glpn-kitti") model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti") # prepare image for the model inputs = feature_extractor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) predicted_depth = outputs.predicted_depth # interpolate to original size prediction = torch.nn.functional.interpolate( predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False, ) # visualize the prediction output = prediction.squeeze().cpu().numpy() formatted = (output * 255 / np.max(output)).astype("uint8") depth = Image.fromarray(formatted) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/glpn). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-07436, author = {Doyeon Kim and Woonghyun Ga and Pyunghwan Ahn and Donggyu Joo and Sehwan Chun and Junmo Kim}, title = {Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth}, journal = {CoRR}, volume = {abs/2201.07436}, year = {2022}, url = {https://arxiv.org/abs/2201.07436}, eprinttype = {arXiv}, eprint = {2201.07436}, timestamp = {Fri, 21 Jan 2022 13:57:15 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-07436.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
nickmuchi/segformer-b4-finetuned-segments-sidewalk
a8ca92dc8795137a2c54e00ada2c4dbcdfa79be0
2022-03-21T07:32:43.000Z
[ "pytorch", "tensorboard", "segformer", "dataset:segments/sidewalk-semantic", "transformers", "vision", "image-segmentation", "generated_from_trainer", "license:apache-2.0", "model-index" ]
image-segmentation
false
nickmuchi
null
nickmuchi/segformer-b4-finetuned-segments-sidewalk
706
1
transformers
2,039
--- license: apache-2.0 tags: - vision - image-segmentation - generated_from_trainer widget: - src: https://drive.google.com/uc?id=1-ae6Vtvs-fO1j0D2kxEDX4rKxRipda2j example_title: Sidewalk with traffic - src: https://drive.google.com/uc?id=1-dwxxF6LzbEvATr_mwvrAjot-DdBLAM4 example_title: Sidewalk with buildings datasets: - segments/sidewalk-semantic model-index: - name: segformer-b4-finetuned-segments-sidewalk results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b4-finetuned-segments-sidewalk This model is a fine-tuned version of [nvidia/mit-b4](https://huggingface.co/nvidia/mit-b4) on the segments/sidewalk-semantic dataset. It achieves the following results on the evaluation set: - Loss: 0.6463 - Mean Accuracy: 0.5168 - Mean Iou: 0.4317 - Overall Accuracy: 0.8895 - Per Category Accuracy: [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0] - Per Category Iou: [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 0.23512657506778772, 0.3742610137901782, 0.0] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Accuracy | Mean Iou | Overall Accuracy | Per Category Accuracy | Per Category Iou | 
|:-------------:|:-----:|:-----:|:---------------:|:-------------:|:--------:|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 1.0086 | 0.25 | 100 | 0.9195 | 0.2302 | 0.1742 | 0.7405 | [nan, 0.754391784765388, 0.8738098328493714, 0.0, 0.6095047025690915, 0.04406067496837279, nan, 0.11344860810198232, 0.03344878303363856, 0.0, 0.9451322667227594, 0.0, 0.0, 0.0, 0.0, 8.118464635968046e-06, 0.0, 0.0, 0.8406900175689528, 0.0, 0.33313290995723815, 0.007980320315659196, 0.0, nan, 0.0, 0.01001465431517245, 0.0, 0.0, 0.9094842682836028, 0.9104621468677264, 0.9500069670140131, 0.0, 0.0, 0.030522857924993155, 0.0] | [nan, 0.5181348731869903, 0.7666613623083653, 0.0, 0.3145052392920833, 0.040279298027504136, nan, 0.09896279300890763, 0.0332534621335044, 0.0, 0.707185048053476, 0.0, 0.0, 0.0, 0.0, 8.11839872703508e-06, 0.0, 0.0, 0.6129636976206597, 0.0, 0.21304181051016494, 0.007979819175153202, 0.0, nan, 0.0, 0.009972716399085856, 0.0, 0.0, 0.8032595523715207, 0.5644424403160349, 0.8548000615746258, 0.0, 0.0, 0.02810796628175876, 0.0] | | 0.6465 | 0.5 | 200 | 0.7250 | 0.2963 | 0.2416 | 0.7963 | [nan, 0.8965158332325365, 0.9203420775747997, 0.0005677570093457944, 0.42947876549598557, 0.20108992228390948, nan, 0.6149826174335852, 0.6106893770460692, 0.0, 0.9320756176369465, 0.0, 0.0, 0.0, 0.0, 0.23413652010131844, 0.0, 0.0, 0.9437607244807804, 0.0, 0.2033741348512844, 0.2597617238717267, 0.0, nan, 0.0, 0.21746480347516617, 0.0, 0.0, 0.8793454644762622, 0.8380851985041863, 0.9445753860505853, 0.0, 0.0, 0.35629926758549024, 0.0] | [nan, 0.6645359168510458, 0.8064416600263559, 0.000566105647428005, 0.4116417722563792, 0.17504073239500048, nan, 0.34611894249410324, 0.4768988514264542, 0.0, 0.7872815412923856, 0.0, 0.0, 0.0, 0.0, 0.22760454893418883, 0.0, 0.0, 0.6497218142931416, 0.0, 0.16433182458127107, 0.24025960226620707, 0.0, nan, 0.0, 0.1865917623179034, 0.0, 0.0, 0.8237045305017561, 0.6485287252686867, 0.8916263487480074, 0.0, 0.0, 0.23161660227979464, 0.0] | | 0.6777 | 1.0 | 400 | 0.6645 | 0.3343 | 0.2755 | 0.8205 | [nan, 0.8955600256602996, 0.9528284776336102, 0.20619042056074766, 0.4578573681184769, 0.34171859852352976, nan, 0.5150824142204389, 0.8000759121317076, 0.0, 0.9308408861203066, 0.0, 0.0, 0.0, 0.0, 0.8202247191011236, 0.0, 0.0, 0.931415684238172, 0.0, 
0.22729327499111263, 0.2807173404242283, 0.0, nan, 0.0, 0.3332993143873973, 0.0, 0.0, 0.904612735522824, 0.9085503237620377, 0.9531456202767545, 0.0, 0.0, 0.2395403274915222, 0.0] | [nan, 0.7091852218081763, 0.8215012473174504, 0.20316384883142716, 0.449169741519482, 0.2820828827399737, nan, 0.4034439348068946, 0.5801054036574794, 0.0, 0.8406284073872154, 0.0, 0.0, 0.0, 0.0, 0.5491287380561565, 0.0, 0.0, 0.6833033543785748, 0.0, 0.196701947180513, 0.26816266986235426, 0.0, nan, 0.0, 0.2624543573765898, 0.0, 0.0, 0.8319417451247856, 0.6328739755697549, 0.9148380247362377, 0.0, 0.0, 0.18610354253000033, 0.0] | | 0.4931 | 1.25 | 500 | 0.6513 | 0.3693 | 0.2930 | 0.8232 | [nan, 0.8195930838546497, 0.9565826472101743, 0.3660338785046729, 0.502483997738174, 0.5101274819814215, nan, 0.6120499018406388, 0.8168524932390757, 0.0, 0.9680832750475287, 0.0, 0.0, 0.0, 0.0, 0.7678687406637656, 0.0, 0.0, 0.9132467503439181, 0.07463699730127982, 0.3080053777834345, 0.3700341269744017, 0.0, nan, 0.0, 0.3144554351808238, 0.0, 0.0, 0.8719933435243034, 0.9280312013943278, 0.9461371807749148, 0.0, 0.3623930581804142, 0.40862556355693114, 0.0] | [nan, 0.7255301419742964, 0.8322765227346863, 0.3328323011716717, 0.4866977152337555, 0.31646114214929966, nan, 0.4116248877039441, 0.584768070212383, 0.0, 0.7940437031847611, 0.0, 0.0, 0.0, 0.0, 0.5384221282312557, 0.0, 0.0, 0.7148576049798162, 0.06922710729587371, 0.23689839512021127, 0.330131038978254, 0.0, nan, 0.0, 0.25964434649208096, 0.0, 0.0, 0.8276496500163791, 0.5924934568973941, 0.9145898275185997, 0.0, 0.10460157785142388, 0.3046522912622977, 0.0] | | 0.1718 | 2.0 | 800 | 0.5338 | 0.3766 | 0.3117 | 0.8521 | [nan, 0.9149980619048741, 0.9439616375983239, 0.49970093457943926, 0.7343188057936092, 0.4654595153245685, nan, 0.4401632944315461, 0.7951368790624852, 0.0, 0.9516775700030986, 0.0, 0.0, 0.0, 0.0, 0.7842599207637851, 0.0, 0.0, 0.9120325078402151, 0.0, 0.5436783980174178, 0.289193941696178, 0.0, nan, 0.0, 0.4040691893023499, 0.04438191043850125, 0.0, 0.9289921718405059, 0.9105179916825697, 0.9579859465374478, 0.0, 0.00014225040134934668, 0.5310102962619485, 0.0] | [nan, 0.7682867926029272, 0.863978713337328, 0.3619354489331745, 0.619807980106986, 0.4001297195410576, nan, 0.37693255173950874, 0.6055069405805374, 0.0, 0.8443884543167844, 0.0, 0.0, 0.0, 0.0, 0.5757144134211389, 0.0, 0.0, 0.7512958252799772, 0.0, 0.35684944134400076, 0.2822025918120264, 0.0, nan, 0.0, 0.3086991377431782, 0.04423000485801351, 0.0, 0.8578322873273115, 0.6920597473565505, 0.9258143343645202, 0.0, 0.00013209541062801931, 0.3399454223242722, 0.0] | | 1.7925 | 2.25 | 900 | 0.5745 | 0.3877 | 0.3157 | 0.8463 | [nan, 0.9373443718928436, 0.8936817705653165, 0.5237184579439252, 0.785620810686892, 0.5932309765570626, nan, 0.5731998228133042, 0.7751909664563268, 0.0, 0.9330254836699918, 0.0, 0.0, 0.0, 0.0, 0.8874780801454829, 0.0, 0.0, 0.9253989025665076, 0.0, 0.49743326413606553, 0.3720606075459213, 0.0, nan, 0.0, 0.362670748940179, 0.2263189382021227, 0.0, 0.9355852115710428, 0.9121195658169062, 0.9653801272784691, 0.0, 0.09587677050945966, 0.21074794549629322, 0.0] | [nan, 0.7666762008063966, 0.8459820722288737, 0.35589376130270695, 0.6602856629180212, 0.391087786259542, nan, 0.4283483218139711, 0.618615992154992, 0.0, 0.8563419873974479, 0.0, 0.0, 0.0, 0.0, 0.4695442264821982, 0.0, 0.0, 0.7387838557909564, 0.0, 0.3568544684209477, 0.3548962568907604, 0.0, nan, 0.0, 0.28509334019028026, 0.21794051124482566, 0.0, 0.8588025306782998, 0.6960344960020876, 0.927551192360457, 0.0, 
0.09183812508516147, 0.18221393560509547, 0.0] | | 0.4287 | 2.5 | 1000 | 0.5140 | 0.4156 | 0.3337 | 0.8596 | [nan, 0.9114284539509796, 0.9599424299786812, 0.3729602803738318, 0.6955020648206622, 0.6337076451002155, nan, 0.648796319756489, 0.9076149357119134, 0.0, 0.9333320442069727, 0.0, 0.0, 0.0, 0.0, 0.837638825745275, 0.0, 0.0, 0.8487128760410935, 0.14962168247818672, 0.7450834097721757, 0.4416333770387344, 0.0, nan, 0.005162707675408485, 0.4304364892447794, 0.29855310097272386, 0.0, 0.9243997842101966, 0.9100753698167738, 0.9780073694330464, 0.0, 0.3377837387469772, 0.3283183517042185, 0.0] | [nan, 0.8056652041667661, 0.868478873207236, 0.36872340720413566, 0.648560287656455, 0.4227995307199668, nan, 0.5211383920382058, 0.5417303836612635, 0.0, 0.8614512323591124, 0.0, 0.0, 0.0, 0.0, 0.4902451772308277, 0.0, 0.0, 0.7414797203702529, 0.1034994187677877, 0.37103542329614997, 0.38941938864817555, 0.0, nan, 0.004775330844065127, 0.3339817219387496, 0.27392303157209946, 0.0, 0.8695462814099766, 0.7123344518279238, 0.9249476057387171, 0.0, 0.15441354067963511, 0.2686663032210652, 0.0] | | 0.2477 | 2.75 | 1100 | 0.5852 | 0.3976 | 0.3245 | 0.8501 | [nan, 0.9240898770490549, 0.9130342916084687, 0.5360268691588785, 0.6767027987344469, 0.5151102302165186, nan, 0.6523417772790812, 0.8782321962328604, 0.0, 0.9459085723287141, 0.01212233473285585, 0.0, 0.0, 0.0, 0.8298613366240176, 0.0, 0.0, 0.8996769125664682, 0.0046441166244474245, 0.58637589184745, 0.4359797566385237, 0.0, nan, 0.0, 0.4451038886272047, 0.26994748620682013, 0.0, 0.9522730369995648, 0.9058973503358962, 0.9744264856283144, 0.024141075054913176, 0.024040317828039587, 0.315675681715336, 0.0] | [nan, 0.7635041179698989, 0.8504428879888529, 0.32134395517814934, 0.5814428391874907, 0.4398125968608028, nan, 0.5183108660060791, 0.5876442483214019, 0.0, 0.8637126471579993, 0.010904378413403684, 0.0, 0.0, 0.0, 0.5582717546245474, 0.0, 0.0, 0.7543635882159604, 0.004548919124920941, 0.3707771520336274, 0.37139606254827867, 0.0, nan, 0.0, 0.32640450731902027, 0.25674365674787153, 0.0, 0.8589069009951039, 0.7216899081490464, 0.9303705560523882, 0.023933704665274814, 0.02273469779955799, 0.24717820737291407, 0.0] | | 0.2092 | 3.5 | 1400 | 0.5305 | 0.4215 | 0.3450 | 0.8615 | [nan, 0.8854690236777607, 0.9752597083363964, 0.4837301401869159, 0.7543174059151941, 0.32120495047431574, nan, 0.6121067808383275, 0.8640129050623903, 0.006110443680351299, 0.9472197081638014, 0.22567300568041493, 0.0, 0.0, 0.0, 0.849337533285705, 0.0, 0.0, 0.9323370763681338, 0.09924833192602527, 0.4992824257958052, 0.5897763059541461, 0.0, nan, 0.005025401620211451, 0.5194038833935207, 0.26516141898030177, 0.0, 0.9098213390526053, 0.9140251839431679, 0.9696367307434691, 0.0, 0.46129773009002417, 0.39953043905763785, 0.0] | [nan, 0.8279523588823188, 0.8503094621684615, 0.4166789099025304, 0.6531647345358885, 0.2970569371138754, nan, 0.4891076127233826, 0.6267720763107083, 0.0060749588138385505, 0.8628731375345856, 0.1638621555382868, 0.0, 0.0, 0.0, 0.5868382377688277, 0.0, 0.0, 0.766351782387915, 0.08906272053962098, 0.3548571571167739, 0.42844759670807536, 0.0, nan, 0.004661470273574813, 0.3559905085937402, 0.24649831094998764, 0.0, 0.8706735405566627, 0.7172875061476175, 0.937101627261161, 0.0, 0.18277266944717308, 0.30403604315996224, 0.0] | | 0.1763 | 3.75 | 1500 | 0.5284 | 0.4184 | 0.3549 | 0.8725 | [nan, 0.9155522786024052, 0.9647682266779387, 0.44949532710280377, 0.7917047766525447, 0.5148885009996292, nan, 0.6544609508444807, 0.8639037813730607, 0.006400430838062886, 
0.9591118988406824, 0.21581460442907713, 0.0, 0.0, 0.0, 0.8629440800155874, 0.0, 0.0, 0.9189088001847752, 0.0, 0.553022223587637, 0.46456492702831864, 0.0, nan, 0.09048469037484554, 0.4453708065107029, 0.3956482240588509, 0.0, 0.9463804808607508, 0.8827003794689641, 0.9646183286805874, 0.0, 0.10191225182385336, 0.42574316887992536, 0.0] | [nan, 0.8411073731152799, 0.8690976727110442, 0.4122661523625844, 0.6761261173524866, 0.4325420396336731, nan, 0.5235010874548043, 0.6267662599177323, 0.006377182482354398, 0.8589461626478264, 0.21441570391575504, 0.0, 0.0, 0.0, 0.5785872529434498, 0.0, 0.0, 0.7644870697544361, 0.0, 0.3931242258826368, 0.4137160566746283, 0.0, nan, 0.07477420233286435, 0.3486446014515762, 0.35308773803167826, 0.0, 0.8775350307334798, 0.7615382190401359, 0.9362335277343948, 0.0, 0.08161239401780339, 0.3123361865981938, 0.0] | | 0.227 | 4.0 | 1600 | 0.5923 | 0.4426 | 0.3538 | 0.8544 | [nan, 0.9577374173182539, 0.9166854278467985, 0.1959217289719626, 0.7810987315371373, 0.5809225413617377, nan, 0.5835888579214346, 0.8662428239312995, 0.024607481668668958, 0.960621119945819, 0.44992590763151397, 0.0, 0.0, 0.0, 0.890757939858414, 0.0, 0.0, 0.8824976680624833, 0.23107998476795974, 0.6677916708726317, 0.5485129952087443, 0.0, nan, 0.13447755045997528, 0.4840215627780395, 0.4094524827723738, 0.0, 0.9258667409261705, 0.8784809934585728, 0.9680485743444954, 0.0, 0.5403279887825397, 0.2843078375615234, 0.0] | [nan, 0.732742632898181, 0.85248637631468, 0.1937195271972472, 0.6916132972252533, 0.4613544304478555, nan, 0.5019837033874182, 0.6339381818434339, 0.024391746227286727, 0.8507334888775837, 0.3399262956570416, 0.0, 0.0, 0.0, 0.5118086361876507, 0.0, 0.0, 0.7596215991272331, 0.14059847786558677, 0.3924964359231432, 0.4511581321221818, 0.0, nan, 0.11381225741975969, 0.3543174804464886, 0.36413975210357263, 0.0, 0.8783724167054704, 0.7445500851078998, 0.9377100490542223, 0.0, 0.1494074611014649, 0.24185599444907813, 0.0] | | | 0.3219 | 4.75 | 1900 | 0.5306 | 0.4360 | 0.3684 | 0.8771 | [nan, 0.9383015101174155, 0.9581139041020363, 0.4607803738317757, 0.811509517207101, 0.6291153866526402, nan, 0.6505845609717001, 0.814323670351568, 0.021541903144289325, 0.9406027168809682, 0.41314727916357946, 0.0, 0.0, 0.0, 0.8354955510813795, 0.0, 0.0, 0.9418887586641801, 0.05121773539297008, 0.6343575406735104, 0.518250578994449, 0.0, nan, 0.027131676506933957, 0.4585466059559324, 0.39812988854667525, 0.0, 0.9202410996786, 0.895342680330491, 0.9736189575948254, 0.00016059513448547392, 0.336889593367067, 0.32415208076113006, 0.0] | [nan, 0.8286943759948178, 0.8911330146359255, 0.44085585238189445, 0.7563455702043241, 0.44281982228819555, nan, 0.5389345827619121, 0.6390151642075557, 0.02125355077350663, 0.8721853143259732, 0.34406869718732325, 0.0, 0.0, 0.0, 0.6106328062420269, 0.0, 0.0, 0.7642481786905918, 0.04822404265103627, 0.40217085841005906, 0.4365575304022451, 0.0, nan, 0.02300777793302594, 0.35943746679548483, 0.36207556675062974, 0.0, 0.8758467465629671, 0.7286601531442717, 0.9422882468777368, 0.00016028416831905857, 0.18664925297515172, 0.274341743647937, 0.0] | | | 0.3758 | 5.25 | 2100 | 0.5413 | 0.4400 | 0.3618 | 0.8749 | [nan, 0.9446099997724584, 0.9535776804748952, 0.5333586448598131, 0.7118822151738956, 0.5725146926401914, nan, 0.637704053404208, 0.8958248327560848, 0.02011268072413936, 0.9449676672959805, 0.4536305260558163, 0.0, 0.0, 0.0, 0.8527716438267194, 0.0, 0.0, 0.9263943868758329, 0.13527541846719315, 0.6231382204452325, 0.5343291629394538, 0.0, nan, 0.07845667993958534, 
0.48360548490082167, 0.39496133478097095, 0.0, 0.9342636737434504, 0.9081380373512183, 0.9754223113378334, 0.0, 0.0686053364221992, 0.4949887428280921, 0.0] | [nan, 0.8421459412186475, 0.884886678991681, 0.3243137842681656, 0.6975183850797184, 0.4470212561315764, nan, 0.5491953906967838, 0.5880944000946866, 0.01971493543409405, 0.8720965863289499, 0.2829941580535405, 0.0, 0.0, 0.0, 0.5648458841496203, 0.0, 0.0, 0.7876641278543601, 0.11773309221380866, 0.4507472099997672, 0.4306682617343027, 0.0, nan, 0.053795025325274436, 0.35687388479928317, 0.3506028598965402, 0.0, 0.8763044901374653, 0.7342806685419377, 0.9417441335611155, 0.0, 0.05263732322996086, 0.3527909231538019, 0.0] | | 0.1962 | 6.0 | 2400 | 0.5252 | 0.4591 | 0.3755 | 0.8678 | [nan, 0.8788767058796604, 0.9301585587737999, 0.5368457943925233, 0.8328600223823257, 0.6594750437607246, nan, 0.7274099889861577, 0.8314845566257058, 0.20671941671154564, 0.9452567774639331, 0.5536552235119783, 0.0, 0.0, 0.0, 0.8969685653049295, 0.0, 0.0, 0.9273548947094251, 0.04859351976026093, 0.6165535079211122, 0.5024186037962429, 0.0, nan, 0.07840175751750653, 0.49256293504998166, 0.4105160532671556, 0.0, 0.928572042963352, 0.9119196275909236, 0.976082967184019, 0.09759262712918065, 0.23430673250828102, 0.4679128700481014, 0.0] | [nan, 0.8020983983063393, 0.8683865888896747, 0.4544978013913642, 0.6680523786513721, 0.4517445785165809, nan, 0.5857034011566181, 0.6746845091894639, 0.18334129404416358, 0.8638403093611754, 0.3497406295097313, 0.0, 0.0, 0.0, 0.5136113874503752, 0.0, 0.0, 0.7818072530904586, 0.04626054062573883, 0.40338464571865573, 0.41853055526845995, 0.0, nan, 0.05885020509966401, 0.3764221220090192, 0.37385233165849424, 0.0, 0.8760216287329546, 0.7184759765101966, 0.9447723343539753, 0.07888984275215143, 0.17396158662623154, 0.3506487661563549, 0.0] | | 0.2721 | 6.25 | 2500 | 0.5120 | 0.4726 | 0.3905 | 0.8834 | [nan, 0.9352277032235452, 0.9553332100455781, 0.5201098130841122, 0.8315588432600179, 0.6507746356557826, nan, 0.7171028251625792, 0.8676946434502064, 0.12399022329011143, 0.9414992885437384, 0.5631225817074175, 0.0, 0.0, 0.0, 0.8815434824965902, 0.0, 0.0, 0.9265160801760165, 0.12371893574396928, 0.6983379489227609, 0.496123187961817, 0.0, nan, 0.1353837704242757, 0.5335426806929398, 0.5267111298220735, 0.0, 0.9267000099723489, 0.9157963608485102, 0.9708294620227798, 0.0039371710389987154, 0.44802779979272084, 0.43061615557802646, 0.0] | [nan, 0.847290915944923, 0.8918843187400161, 0.4215259288995603, 0.7694117638497967, 0.498788432969163, nan, 0.5567520477680967, 0.6726198795136411, 0.11618337797445752, 0.8753637372987935, 0.42321077786886513, 0.0, 0.0, 0.0, 0.581673157378788, 0.0, 0.0, 0.7933263418076343, 0.10532064834390416, 0.437053368284101, 0.4288208971032145, 0.0, nan, 0.09955372468245795, 0.3973712316699539, 0.442531089433316, 0.0, 0.880946087123613, 0.7345359613309864, 0.9452321649786941, 0.003849095209395844, 0.23329171252010497, 0.3386007935784502, 0.0] | | 0.2409 | 6.5 | 2600 | 0.5224 | 0.4636 | 0.3840 | 0.8786 | [nan, 0.8731382676849351, 0.9738163801183563, 0.5331343457943926, 0.8196854363098576, 0.6540081867354192, nan, 0.6300072908533401, 0.8875978554822792, 0.13449190107295247, 0.955765201040042, 0.6083600889108421, 0.0, 0.03281733746130031, 0.0, 0.8703400012989544, 0.0, 0.0, 0.9262836625295774, 0.08389211741916257, 0.6663345782989761, 0.5452994228436286, 0.0, nan, 0.13288480021968968, 0.47811535039514313, 0.4147924929649243, 0.0, 0.9382028859601423, 0.8756597961457425, 0.965266610679491, 0.010467176426706453, 
0.4342701538336483, 0.3917412023665201, 0.0] | [nan, 0.8209592404927408, 0.8860938595226477, 0.41218836114746504, 0.7196016259460952, 0.4954368536125842, nan, 0.545313357840212, 0.6491223200313668, 0.12371625097650668, 0.8633659080664855, 0.4708871648638746, 0.0, 0.03281733746130031, 0.0, 0.5802203868677137, 0.0, 0.0, 0.7907500494259085, 0.06952381605757291, 0.447113968783744, 0.44327869995554786, 0.0, nan, 0.08728984775236309, 0.38119151688382136, 0.37855655092920265, 0.0, 0.8832564638909316, 0.7526222693644393, 0.9416404778849121, 0.009589327157183334, 0.18190330268981955, 0.32252322488728213, 0.0] | | | 0.1524 | 10.5 | 4200 | 0.5353 | 0.5128 | 0.4237 | 0.8872 | [nan, 0.9268517790355991, 0.9602839791773874, 0.537267523364486, 0.8456677302072528, 0.6567083558655384, nan, 0.7076703913792123, 0.8633391848934858, 0.3143875056961763, 0.9515964493686976, 0.6206264921379765, 0.0, 0.7490196078431373, 0.08954470929499306, 0.8721747743066831, 0.0, 0.005131830440133009, 0.9147190737070242, 0.11450520703985165, 0.6915674424660561, 0.5259122991900205, 0.0019833510251969382, nan, 0.2044761773994233, 0.5593918459203433, 0.4851432496510159, 0.0, 0.9463960710558084, 0.8834918590669917, 0.9670624325154579, 0.012832069294210286, 0.5599179011969355, 0.44183701402816805, 0.0] | [nan, 0.8497898154944094, 0.8911284588944798, 0.4558941463477496, 0.7715538102169041, 0.5041805687956784, nan, 0.5916295134976238, 0.6664176289411136, 0.25352865518566153, 0.8836310493548173, 0.5013133395398324, 0.0, 0.6053882725832013, 0.05452311472892029, 0.5946321429362145, 0.0, 0.005111887747118043, 0.802846410488875, 0.09434940383618455, 0.47282749487636766, 0.44441582446257716, 0.001977936260307555, nan, 0.14078808047194072, 0.4107132907440319, 0.42875046507529324, 0.0, 0.8865359213150946, 0.7513094837462199, 0.9478585417349973, 0.011508324602586469, 0.19474424489161243, 0.34180230893483227, 0.0] | | 0.052 | 10.75 | 4300 | 0.5611 | 0.5030 | 0.4222 | 0.8855 | [nan, 0.932148839850802, 0.9568949634271852, 0.5225233644859814, 0.8511642191077112, 0.6031687568751455, nan, 0.7201923889006668, 0.8793424111590834, 0.1743029951530718, 0.9511564170902311, 0.5728369144644768, 0.018116900290928325, 0.7155830753353973, 0.08790515827973262, 0.8945492628434111, 0.0, 0.0, 0.9018928482213427, 0.19409261742744086, 0.6978142148450815, 0.5187192887865012, 0.004106374657802112, nan, 0.18591239873678428, 0.5679096666143298, 0.48372515565797347, 0.0, 0.9465148790940053, 0.8887757437702006, 0.9729464658947179, 0.03061668531642422, 0.3269727082444268, 0.4968253657882534, 0.0] | [nan, 0.8544673632153686, 0.8915093314898118, 0.4824501321862451, 0.7281104549174552, 0.4796578889108752, nan, 0.5955885392390377, 0.6806501724220245, 0.15806082007550856, 0.8869557339277052, 0.5018390970394144, 0.017487873372478938, 0.5719234576047509, 0.08299595141700405, 0.5743453150410742, 0.0, 0.0, 0.7988127196821454, 0.14769412965284384, 0.4636640495670947, 0.44194705232908676, 0.004079706927175844, nan, 0.14373978216098007, 0.4138202592132837, 0.4263783910470499, 0.0, 0.8825003483580057, 0.7459231292221788, 0.9497549296351595, 0.022555788364877087, 0.19864442770898405, 0.36609089056617755, 0.0] | | 0.0897 | 11.0 | 4400 | 0.5797 | 0.4966 | 0.4137 | 0.8864 | [nan, 0.9266090680496935, 0.9675701132103213, 0.5286179906542056, 0.8135055236213754, 0.6141498963415911, nan, 0.7310209435363914, 0.8153911847037054, 0.24547412900285845, 0.9446611067589995, 0.6598542850086441, 0.0, 0.5599071207430341, 0.13658721150208097, 0.8912937585243879, 0.0, 0.004870002356452753, 
0.9252981123672058, 0.10847033891289591, 0.6586394910124014, 0.4795176884335903, 0.01181630258673669, nan, 0.18618701084717837, 0.5559088292248914, 0.4992355587068755, 0.0, 0.9406880436912528, 0.9118086274033954, 0.9573602602596679, 0.003960483235940155, 0.3327033672702148, 0.4804871031358067, 0.0] | [nan, 0.8565575968459415, 0.8928102104157912, 0.43275555700074025, 0.7654702047573079, 0.47074416606474334, nan, 0.6054622841435586, 0.6863363711152467, 0.21403286978508218, 0.8828456438079144, 0.4322928605137194, 0.0, 0.4530688935281837, 0.09709521247982786, 0.5749041704195555, 0.0, 0.004865289040020926, 0.7951008940737603, 0.09395592969976839, 0.4548604901862724, 0.41665801557197046, 0.011736958934517204, nan, 0.1216732767438939, 0.41094472698150475, 0.430227229329769, 0.0, 0.8867287999971621, 0.7466484878252573, 0.9415279772911855, 0.0036285882442284325, 0.19204917359734425, 0.36246293958863207, 0.0] | | 0.0936 | 11.25 | 4500 | 0.5731 | 0.5011 | 0.4193 | 0.8864 | [nan, 0.9324196276009762, 0.9569564158641476, 0.5246004672897197, 0.8364710008894733, 0.6578250088383729, nan, 0.7038215792022807, 0.8665369834416663, 0.21309913418120055, 0.9410960435297098, 0.49318761834197744, 0.028167151547209734, 0.5808565531475748, 0.11010215664018161, 0.8849288822497889, 0.0, 0.0565548660749352, 0.9216694582309478, 0.11269226311693903, 0.6871508134702065, 0.5262584704743466, 0.01969383764456115, nan, 0.2076616778799945, 0.571397916993772, 0.476856262879174, 0.0, 0.9377623285515337, 0.907275545210859, 0.973954665451519, 0.050830950308757096, 0.38818102379646, 0.4678081196891568, 0.0] | [nan, 0.858380886499719, 0.8914561596816896, 0.45129869803574746, 0.786844102694609, 0.48464472942061587, nan, 0.6094618696875397, 0.6854209198991233, 0.18657623184200503, 0.8857526637100221, 0.394797106941035, 0.023946037099494097, 0.49684424239749303, 0.062077792789589706, 0.5615273263032089, 0.0, 0.055464256368118324, 0.7962485307269822, 0.09311408578835408, 0.4733745462314789, 0.44196131097098196, 0.019312422955759485, nan, 0.14722087024238295, 0.4185961804636968, 0.4181839379748557, 0.0, 0.8886792481667263, 0.7473472827679579, 0.9501856968302422, 0.031198480139267574, 0.2030701847638892, 0.3556589318498682, 0.0] | | 0.033 | 14.25 | 5700 | 0.5935 | 0.5181 | 0.4292 | 0.8880 | [nan, 0.9232290780535377, 0.9550432923803572, 0.5331775700934579, 0.8469649770868216, 0.6796985960845084, nan, 0.7591958688611619, 0.8564643924657209, 0.21028211607771655, 0.9524029393967549, 0.6051700008232486, 0.0, 0.6860681114551084, 0.21654685332324378, 0.8960592972657011, 0.0, 0.03558243657214673, 0.9155229117646998, 0.140697693670425, 0.711005584058588, 0.5227324249145294, 0.037180848092072186, nan, 0.2080186736235068, 0.5726225990474695, 0.5346435930956549, 0.0, 0.9410130186192625, 0.9154633602859255, 0.9760592954761752, 0.01645064030834266, 0.4608913003718832, 0.4701447510293469, 0.0] | [nan, 0.8573293198744064, 0.8916240779976521, 0.48186665258934697, 0.7676170029872194, 0.4823511054134466, nan, 0.6260715377125842, 0.6901341142647419, 0.1894206549118388, 0.8862935130575381, 0.49201833941300493, 0.0, 0.5435813573180703, 0.1092586700604518, 0.5822497006272321, 0.0, 0.035439538946984116, 0.8016860332567224, 0.11209233305853257, 0.4701563285996208, 0.45173968006036097, 0.03573442156415282, nan, 0.1250185671139278, 0.43006031638093856, 0.44816121842496287, 0.0, 0.8878007481353359, 0.7386750898148962, 0.9519721480330992, 0.013876810802543318, 0.25855582662623405, 0.3720678838361397, 0.0] | | 0.0548 | 14.5 | 5800 | 0.5902 | 0.5151 | 0.4174 | 0.8882 | 
[nan, 0.9249082282350853, 0.9577153821767257, 0.5438259345794393, 0.8625692959476665, 0.6265525664540941, nan, 0.7491911978889274, 0.8432461925321441, 0.249306102158333, 0.951930364538209, 0.6013830575450728, 0.0, 0.7704850361197111, 0.20002522386177324, 0.8704780151977658, 0.0, 0.0013615060351373288, 0.9208633435979287, 0.11193893938641368, 0.6970564096712325, 0.4979168453686571, 0.03908039555282418, nan, 0.18904297679527668, 0.5623985973726906, 0.5131506060136048, 0.0, 0.9399214361687687, 0.9123994793332818, 0.9756660223299524, 0.04515831571967342, 0.4303481070535878, 0.49404040291178064, 0.0] | [0.0, 0.8607762479438139, 0.8922939816555095, 0.45337232891467816, 0.7416336434657338, 0.4957900790517687, nan, 0.6227225352163122, 0.6905205002583658, 0.2142437565638406, 0.8883435707029895, 0.4944664432937354, 0.0, 0.5822804554671658, 0.1227364185110664, 0.6143083859952676, 0.0, 0.0013572770933389015, 0.7986526753983755, 0.09318127002721979, 0.47663610300281495, 0.44101175423554057, 0.037423427761281866, nan, 0.14246983588236511, 0.42780903014161104, 0.4432599000899573, 0.0, 0.8868797486244817, 0.7354235169834137, 0.9525392249964284, 0.03855126495647117, 0.2526545610728006, 0.37165059315614124, 0.0] | | 0.1047 | 14.75 | 5900 | 0.5997 | 0.5159 | 0.4159 | 0.8881 | [nan, 0.9210892560336101, 0.9617335675034919, 0.5317464953271028, 0.8683264925417152, 0.6381114337134347, nan, 0.7416693813461018, 0.862755610380984, 0.2719665271966527, 0.9489817238040484, 0.570408331275212, 0.0005289605924358636, 0.6938596491228071, 0.22575356287047546, 0.8948821198934858, 0.0, 0.011022962322938758, 0.9258684979714679, 0.17593834335005545, 0.6548460763101033, 0.4725421838812847, 0.04097994301357618, nan, 0.22218865851984074, 0.5752629926205056, 0.5366821032106535, 0.0, 0.936931478673554, 0.9021336855923136, 0.9725860103434604, 0.020141738157403954, 0.43632262391026033, 0.4934216774582814, 0.0] | [0.0, 0.8607109591035689, 0.8928295853674818, 0.4670190706507743, 0.7523185639791471, 0.4845338501499847, nan, 0.6282224979925543, 0.6928170564904808, 0.23142272983643541, 0.8873278318309525, 0.46953884728763595, 0.0005215803885773895, 0.5542412002308136, 0.10845198424719782, 0.5869154300379641, 0.0, 0.010907018316536697, 0.793456051943224, 0.12649239962384984, 0.4589822701689517, 0.42143872921678477, 0.03893105461493551, nan, 0.13440869146302972, 0.4245448084603441, 0.46174816509389, 0.0, 0.8878226827336242, 0.7447736277446672, 0.951929183073613, 0.018382891806658124, 0.25878028202964926, 0.37484668044597425, 0.0] | | 0.1363 | 15.0 | 6000 | 0.6052 | 0.5193 | 0.4155 | 0.8887 | [nan, 0.9281772418265013, 0.9663767872895684, 0.5342161214953272, 0.8447924129735698, 0.6015187219527939, nan, 0.7291077408868643, 0.8812164919106135, 0.23211400637971746, 0.9479408328730995, 0.633386844488351, 0.0030415234065062154, 0.789422084623323, 0.21314163198385672, 0.8954179385594596, 0.0, 0.0066242505171104655, 0.9164480291997693, 0.1360949684597427, 0.6964961019847766, 0.4960711090960334, 0.03860550868763618, nan, 0.19802279280516272, 0.5609541005914063, 0.5661075535662848, 0.0, 0.9376398917610389, 0.9059173441584945, 0.9782134208899593, 0.041454266650089104, 0.43892377410636263, 0.49969692229478707, 0.0] | [0.0, 0.8633930449091305, 0.8952460293484353, 0.42706756384454103, 0.7593774610091322, 0.47377891058119026, nan, 0.6217821374684249, 0.6898326802726141, 0.20124995510218743, 0.8868864734587292, 0.4952526552944963, 0.0028388052332757345, 0.6066698390038862, 0.10356026717323365, 0.5863739068024136, 0.0, 0.00656256484747873, 0.7990222508044155, 
0.11130896362146828, 0.4768559231889487, 0.4358850122678166, 0.03689958080794596, nan, 0.14020726799012267, 0.42208907144066693, 0.46374312526092243, 0.0, 0.889531203939725, 0.7432560391610733, 0.952160090573041, 0.03558025789239662, 0.21245893254116582, 0.3712419453581397, 0.0] | | 0.0804 | 15.25 | 6100 | 0.6205 | 0.5110 | 0.4268 | 0.8877 | [nan, 0.9338093608996594, 0.9656453309931633, 0.5360116822429907, 0.8032054069910557, 0.6059132718486427, nan, 0.7301936126609202, 0.8766143189258433, 0.22587928248891834, 0.9574923159422327, 0.619350456902939, 0.0011901613329806928, 0.7703818369453045, 0.07655442048177576, 0.8504335260115607, 0.0, 0.020239310868483754, 0.9198111518664089, 0.12485306048113379, 0.7319227623900414, 0.495000428884777, 0.03547684228169171, nan, 0.1875600713991487, 0.5538912440466844, 0.5455451906671689, 0.0, 0.9362906678973961, 0.9101525873385327, 0.9729007364591106, 0.02293143105806291, 0.4597532971610884, 0.48345782331547454, 0.0] | [nan, 0.856464729269542, 0.8942823604125036, 0.4347924144963024, 0.7282825257603309, 0.4836585626064097, nan, 0.6163747573889081, 0.6892970262677814, 0.20072891932188414, 0.888225522138808, 0.5066929332727181, 0.0011893749174045195, 0.6024777046931117, 0.05147557666214383, 0.6220782459974346, 0.0, 0.020031615227137266, 0.7981944383082095, 0.09975989363883506, 0.476298280003313, 0.4345003764655265, 0.03419217618393775, nan, 0.1330243066375818, 0.42041703246719714, 0.45861972618049734, 0.0, 0.8892991369897043, 0.7440154875361404, 0.9524152608652374, 0.021443727473549588, 0.22949422815524131, 0.36944182958821886, 0.0] | | 0.0627 | 15.5 | 6200 | 0.6244 | 0.5088 | 0.4226 | 0.8864 | [nan, 0.9363099227676078, 0.9557843398515034, 0.5258376168224299, 0.8250218829308421, 0.6537759869721766, nan, 0.7370216777925434, 0.8573990605873701, 0.24421061352997225, 0.944441326435564, 0.6453651107269285, 0.0, 0.574406604747162, 0.202547610039097, 0.9001834773007729, 0.0, 0.08682219254837274, 0.9295308868150898, 0.08372655176410206, 0.6741101275248591, 0.4846229490117269, 0.03799094921503995, nan, 0.18766991624330634, 0.5747971947453813, 0.5357957944650019, 0.0, 0.9393777953152539, 0.9065412893119918, 0.9711350422513085, 0.01408833768494343, 0.423479444817005, 0.43092900998340755, 0.0] | [nan, 0.8597774723874926, 0.8905873458192073, 0.4468008441348313, 0.7358981742624778, 0.4808541172889169, nan, 0.6284059730270303, 0.6908370828825592, 0.2063894967177243, 0.8877064612239235, 0.5085303752716421, 0.0, 0.4786515887689728, 0.07696731524968849, 0.5910784632525015, 0.0, 0.08625308882819613, 0.7927730663764808, 0.07191564097641445, 0.4573643410852713, 0.43199170940310977, 0.036449399656946824, nan, 0.12474672799956191, 0.42888997799442735, 0.45055805027110624, 0.0, 0.8884059722861457, 0.7421115189770542, 0.9513756980737487, 0.012830765528906378, 0.21910649885920366, 0.3464300992446894, 0.0] | | 0.0906 | 15.75 | 6300 | 0.6277 | 0.5077 | 0.4232 | 0.8874 | [nan, 0.9291486180310576, 0.9587963707454238, 0.5362032710280373, 0.8561640657502444, 0.6342631999714216, nan, 0.7070024940578683, 0.8671632585282536, 0.2429056713202701, 0.9448969225566771, 0.5583271589692929, 0.0010579211848717272, 0.6710010319917441, 0.23294236347584815, 0.9067513151912711, 0.0, 0.020684418610740187, 0.9250756288677204, 0.07677279425156046, 0.6503387447644879, 0.5319197495312902, 0.03860550868763618, nan, 0.18569270904846905, 0.5416470403517035, 0.5072344951363807, 0.0, 0.9414354322663816, 0.9037269864207472, 0.9731874869200364, 0.013277591280202247, 0.39988619967892053, 0.4915501377118052, 0.0] | 
[nan, 0.8573471144295101, 0.892101583588469, 0.4449642809016976, 0.7400242676373722, 0.48442379031764893, nan, 0.6140014998720169, 0.6924650683478314, 0.21178574008524165, 0.8871035802257583, 0.4782118177972077, 0.00099601593625498, 0.5315565729234794, 0.08438028233359221, 0.5871221081515825, 0.0, 0.020441960358122443, 0.7966462351239197, 0.06850549580427845, 0.4652701824381677, 0.4532145005879428, 0.03686906413403052, nan, 0.1488673139158576, 0.4142177021859072, 0.4423489401170992, 0.0, 0.888882064716084, 0.7468477974750474, 0.9515378343546987, 0.012387656809223801, 0.2237051521076804, 0.3671609871108074, 0.0] | | 0.0798 | 16.0 | 6400 | 0.6190 | 0.5286 | 0.4172 | 0.8869 | [nan, 0.926680657145317, 0.9583277241233551, 0.5414509345794393, 0.8395448350384849, 0.6163055970613488, nan, 0.729106879083869, 0.8763296484319401, 0.26653962467376446, 0.94462856417892, 0.6354449658351856, 0.0, 0.7736326109391125, 0.21591625677891285, 0.8849045268558811, 0.34363411619283063, 0.10316026497002069, 0.9218656576332847, 0.10944717627775294, 0.7009902670312324, 0.5122599776979916, 0.038968657466897594, nan, 0.1919538651654538, 0.5525226356832574, 0.538875717356141, 0.0, 0.9457572762531493, 0.901183634297817, 0.9780756945897774, 0.023115338389489825, 0.3853969802271942, 0.4585034944719744, 0.0] | [0.0, 0.8564334135192141, 0.8938306198574103, 0.41026489890361634, 0.7353951913707414, 0.47809949912634986, nan, 0.6215698951590981, 0.6951678039270297, 0.23431724238396126, 0.8861469346690092, 0.5033256170323759, 0.0, 0.5823655078656049, 0.06725329981143935, 0.60684460181721, 0.013995167136528394, 0.10232968859569384, 0.80017144909153, 0.09089721553798556, 0.48491411153457703, 0.44620918590626235, 0.03736540418921091, nan, 0.14435885256397019, 0.42539846918525115, 0.4624629192971781, 0.0, 0.8873440144497453, 0.7475156108906514, 0.9524719380738451, 0.01972869725160058, 0.22189851053623036, 0.35861227450389216, 0.0] | | 0.0901 | 16.25 | 6500 | 0.5917 | 0.5200 | 0.4299 | 0.8896 | [nan, 0.9258199912150333, 0.9603701848856869, 0.5186892523364486, 0.8721793039773063, 0.647948819969426, nan, 0.7465402918754385, 0.8815201404374436, 0.21442478975931065, 0.9491194402298921, 0.6424219972009549, 0.00039672044432689763, 0.7311661506707946, 0.1943498549627948, 0.8921543157758005, 0.15327564894932014, 0.07967428586390177, 0.9293905669893677, 0.12015927416016821, 0.6698895330720515, 0.5201315450880439, 0.040560925191351474, nan, 0.17654812577234655, 0.5835060449050087, 0.5231215794021847, 0.0, 0.9400508616673928, 0.8957790972168599, 0.9722137189382809, 0.011464420406979153, 0.38557987360035767, 0.46186248931546336, 0.0] | [nan, 0.866351138156412, 0.8939541036386832, 0.46360912979965524, 0.7507890322152613, 0.48660598648618647, nan, 0.6225598103833513, 0.6911588008377322, 0.19347001326929186, 0.887840691207522, 0.5082802755206722, 0.00036527456471447707, 0.5638678869876641, 0.0832837918175431, 0.6045529063562446, 0.006450606044842116, 0.07925304719241588, 0.7975401296695107, 0.09911841629051973, 0.4713279486495917, 0.45141671341630396, 0.03856573705179283, nan, 0.12819285757013818, 0.4279405668488608, 0.45535903716704923, 0.0, 0.8891564381205536, 0.7534260714863522, 0.9520390401591446, 0.010587073054631307, 0.21693992819738858, 0.3621346900827125, 0.0] | | 0.0653 | 16.5 | 6600 | 0.6069 | 0.5188 | 0.4270 | 0.8875 | [nan, 0.9290124922971863, 0.9589720557965155, 0.5377873831775701, 0.8408719669628694, 0.6464453726960179, nan, 0.7621001449552638, 0.8857807088295299, 0.2068851236588094, 0.9480908117204224, 0.6177862846793447, 0.0, 
0.7590299277605779, 0.18791777021061926, 0.9075956355134117, 0.0, 0.058230565810488834, 0.9227427600247443, 0.14023410983625556, 0.6694696680432973, 0.503836987023172, 0.03972288954690206, nan, 0.19629273650968007, 0.5403046004082274, 0.5528350801001529, 0.0, 0.9376581699207615, 0.901014031526811, 0.9752275577414824, 0.015813440258609972, 0.5130362332093723, 0.44827147941026946, 0.0] | [nan, 0.8616804147441266, 0.8938918495590652, 0.4436595217282778, 0.7588707802865634, 0.4758728817247983, nan, 0.628730181301102, 0.688001179245283, 0.18745190773792766, 0.8877420745200684, 0.49290617097441625, 0.0, 0.5890833366705378, 0.07141145458902469, 0.5823605098793022, 0.0, 0.05773773981671383, 0.7947286013642479, 0.11004573329175761, 0.45664170004530313, 0.44804481905654414, 0.037985842126352344, nan, 0.1362925675933341, 0.4181863845162963, 0.46249953657361065, 0.0, 0.888743313770925, 0.7487091113564399, 0.952506386954324, 0.013629087889199198, 0.23068137169799252, 0.34552559761867596, 0.0] | | 0.0946 | 16.75 | 6700 | 0.6065 | 0.5143 | 0.4299 | 0.8883 | [nan, 0.9366806425081413, 0.9542471674446813, 0.5289754672897197, 0.8420186089455377, 0.6348452391657562, nan, 0.7554582292706217, 0.8872989514636808, 0.24603338994987364, 0.95065695923075, 0.5426442743064132, 0.0, 0.6714138286893705, 0.17089166351368396, 0.8694632071182697, 0.0, 0.019113450108658656, 0.9217120922782911, 0.13903375883706684, 0.6740194249750934, 0.5118203708015244, 0.03178948544611431, nan, 0.20950157901963476, 0.5704453865075627, 0.5623407413972658, 0.0, 0.9411122045154043, 0.9100815747962009, 0.9743145830094165, 0.0857785237680799, 0.4308967871730781, 0.48645508025274165, 0.0] | [nan, 0.8651947384722789, 0.8930717543250574, 0.4526545293143849, 0.7524401466986995, 0.4887861010723328, nan, 0.6214073859834178, 0.6850152009083916, 0.21553648224427951, 0.8870252213407757, 0.45774305555555556, 0.0, 0.5674414547991802, 0.07292395457725634, 0.6296601151175575, 0.0, 0.018957592126106943, 0.7990749594007368, 0.11146433406780111, 0.4733450112755498, 0.44892412444043184, 0.03086520206129645, nan, 0.14343460931037075, 0.423674789416196, 0.4623610858079796, 0.0, 0.8878002154581935, 0.7401265142858424, 0.9527410923966566, 0.060905676756307404, 0.2440383021821195, 0.37124052036090577, 0.0] | | 0.0849 | 17.0 | 6800 | 0.6239 | 0.5140 | 0.4277 | 0.8874 | [nan, 0.9305970330977147, 0.9554562297838712, 0.5320046728971962, 0.8489963736857462, 0.6542095907740937, nan, 0.7229605001215142, 0.8664610713099588, 0.28969717055387545, 0.9528962660454964, 0.4980859471474438, 0.0, 0.7176470588235294, 0.20759238239374447, 0.8862034811976359, 0.0, 0.031864477783887096, 0.9191836449171626, 0.12003509991887283, 0.6955934653201726, 0.5165258494982048, 0.04092407397061288, nan, 0.19217355485376905, 0.5895090804417229, 0.503489840686003, 0.0, 0.9408365537389992, 0.904218558679801, 0.9778653391859837, 0.011972108251481619, 0.48105021439167633, 0.4599672061542931, 0.0] | [nan, 0.8636437394553574, 0.8929500733790351, 0.4345244853931126, 0.7599993804727837, 0.46696218452852767, nan, 0.6206510046358703, 0.6983976442693793, 0.2497009515987931, 0.8874926753329814, 0.43156730923551545, 0.0, 0.5706314364255529, 0.11078207026517702, 0.6145475017593244, 0.0, 0.03131271548397056, 0.8003820861050736, 0.10237293400828867, 0.4670301606353909, 0.4459244664251144, 0.038865601952565394, nan, 0.13528195016335132, 0.4290314962729347, 0.43912572952498746, 0.0, 0.8877216097613865, 0.738180307717246, 0.9528556585267144, 0.010467599586006663, 0.24685847767824554, 0.3594826033565289, 0.0] | | 
0.0623 | 17.25 | 6900 | 0.6172 | 0.5119 | 0.4289 | 0.8887 | [nan, 0.9328785695913208, 0.9578098581195325, 0.5317383177570093, 0.8561058685577084, 0.6304827168234579, nan, 0.7396010541574238, 0.8636618114532428, 0.2868801524503915, 0.9518605630620964, 0.4947929529925084, 0.0009256810367627612, 0.7112487100103199, 0.18766553159288688, 0.8812836916282393, 0.0, 0.01743775037310502, 0.9291997485832975, 0.11260120200665574, 0.6826961479212292, 0.49109604568235565, 0.042125258394323704, nan, 0.18536317451599615, 0.5637959909980635, 0.5345549622210897, 0.0, 0.9375897612200349, 0.9104269853176398, 0.9785152351649676, 0.016857308632765553, 0.471885224247597, 0.4792468588859031, 0.0] | [nan, 0.8649230898296971, 0.8934913832615394, 0.4476893494179728, 0.7525214888224941, 0.47904609433387446, nan, 0.6239313691633799, 0.6925921698436251, 0.24592492631130367, 0.887597908356459, 0.43200359389038634, 0.000914435009797518, 0.5808680994521702, 0.10441372535260683, 0.6200052546206393, 0.0, 0.01701975415910659, 0.7967171468468032, 0.09773096322694678, 0.46324810420871126, 0.4373241271317872, 0.03999681722939819, nan, 0.13242564545240523, 0.42549338304851775, 0.45084188297733174, 0.0, 0.888754441570771, 0.7411121674604253, 0.9532170914369867, 0.015176070871411481, 0.2681904277926638, 0.37097400203468917, 0.0] | | 0.087 | 17.5 | 7000 | 0.5958 | 0.5165 | 0.4323 | 0.8903 | [nan, 0.9358029442279695, 0.9581817889436154, 0.5173516355140186, 0.8565989717971686, 0.667348278703771, nan, 0.7453587599689061, 0.8783982540209707, 0.2597456398359501, 0.9499820544177967, 0.5674240553223018, 0.0, 0.7777605779153767, 0.14150586454786226, 0.8944761966616873, 0.0, 0.04935459377372817, 0.9190064859631538, 0.13516780079140384, 0.6902990697136872, 0.5223050718688348, 0.039750824068383706, nan, 0.1931621584511877, 0.5658763803841524, 0.501960958099754, 0.0, 0.9402762475045608, 0.9019702878007346, 0.9759436269037568, 0.012736230262339924, 0.4254506289499888, 0.5057514930417828, 0.0] | [nan, 0.8672982432946728, 0.8947683772895187, 0.45221659685446863, 0.7622893195763734, 0.4902560352855047, nan, 0.6223052874324095, 0.6932109212359029, 0.22966612333107453, 0.8909383965244376, 0.46376665320952765, 0.0, 0.5938460326215428, 0.08434187777193114, 0.602773750581284, 0.0, 0.048440150074523305, 0.8000458716174862, 0.11235893201211121, 0.479082966550413, 0.45730325325150806, 0.03797907547774101, nan, 0.13441877352901832, 0.42968388297967464, 0.43185024209844064, 0.0, 0.8885136898541194, 0.7448990572757507, 0.9530770665482792, 0.011476439106252173, 0.27282086031874275, 0.3826734258440253, 0.0] | | 0.0493 | 17.75 | 7100 | 0.6044 | 0.5187 | 0.4325 | 0.8897 | [nan, 0.9240685866116948, 0.9622943353488201, 0.5353317757009346, 0.853514520592762, 0.6373741840672775, nan, 0.7478235165354141, 0.8836883806993405, 0.21751108165209826, 0.9509281473980792, 0.5420474191158311, 0.0, 0.7930340557275541, 0.22083490982469417, 0.8908310060401377, 0.0, 0.0858534286387558, 0.9207060529378274, 0.1411447209390884, 0.681761326480902, 0.5542661781464825, 0.03930387172467736, nan, 0.1931621584511877, 0.5752080389386088, 0.49312002836187985, 0.0, 0.9390712329452002, 0.9078367511279274, 0.9729394719810368, 0.022296821252434828, 0.4083602593021602, 0.5050154471862657, 0.0] | [nan, 0.8665364871726114, 0.892965816013915, 0.4547348114599635, 0.7642413653965189, 0.4857421136997843, nan, 0.6253954022706847, 0.6870444418213474, 0.19578268327242895, 0.8874360309454634, 0.462182366980205, 0.0, 0.6077345881608605, 0.08939146416173167, 0.6003337345442609, 0.0, 0.0839241381075478, 
0.8010272384750775, 0.11626241894020498, 0.4793339806464354, 0.46760060321222136, 0.03759519038076152, nan, 0.13732648718299134, 0.4276941756073643, 0.42612058896739236, 0.0, 0.8882284916106664, 0.7388891943971531, 0.9525770980335972, 0.01913195000088903, 0.25993428881875097, 0.3840528604415517, 0.0] | | 0.0609 | 18.0 | 7200 | 0.6040 | 0.5216 | 0.4331 | 0.8892 | [nan, 0.9227158454479248, 0.9619075870212453, 0.5316542056074767, 0.8629644863429278, 0.6514016366079864, nan, 0.7428586694795917, 0.8715519286425962, 0.2045030862918928, 0.9466966687245525, 0.5841977442990038, 0.005950806664903465, 0.7702786377708978, 0.22789759112120064, 0.8969036175878418, 0.0, 0.10873720315241013, 0.9154051507310187, 0.16112021722213943, 0.6850397847716271, 0.5074181749114659, 0.04494664506397005, nan, 0.19590827955512838, 0.5833045480713874, 0.5258912942323458, 0.0, 0.940934664449275, 0.8882331527914135, 0.9774381724580755, 0.014391396245182146, 0.43477819098132453, 0.5255548975681157, 0.0] | [nan, 0.8627327541149343, 0.8943888286230383, 0.44826842363954605, 0.7637335274754071, 0.48244240753868006, nan, 0.625331534198079, 0.6944541055496749, 0.18654700047236655, 0.8893611006867107, 0.4845014167207183, 0.005280450598451068, 0.5995903120857935, 0.10169968482665466, 0.5777541863213714, 0.0, 0.10625831542319107, 0.8006913747953047, 0.12712606139777924, 0.4783386384345389, 0.44333322627096416, 0.042293134265587215, nan, 0.148674558186062, 0.4270657907089471, 0.4375414792419438, 0.0, 0.8881646826265218, 0.746841100561318, 0.9521439225045568, 0.01294715575036877, 0.24666520631333802, 0.38409386690619945, 0.0] | | 0.0594 | 18.25 | 7300 | 0.6184 | 0.5184 | 0.4328 | 0.8884 | [nan, 0.9404973526006469, 0.9537239028155554, 0.5275303738317757, 0.8254461719223712, 0.6778219046293364, nan, 0.7472383523016173, 0.8659581534373962, 0.2943783918140768, 0.9543757743601257, 0.5650160533465053, 0.0, 0.7537667698658411, 0.19283642325640055, 0.8840439696044684, 0.0, 0.053517660304244236, 0.9223867864255677, 0.14299077799301313, 0.6933990487935829, 0.5170742093202789, 0.040644728755796417, nan, 0.19868186187010847, 0.5769927251792537, 0.5184906162061554, 0.005237711522965351, 0.936523983230326, 0.8965774712364731, 0.9780089834131267, 0.013717932777984998, 0.4056981446483367, 0.5054707620798113, 0.0] | [nan, 0.8646951423015076, 0.8916557550473645, 0.4456280068092665, 0.7798208455321158, 0.4668012972723517, nan, 0.6275296552822227, 0.693191442493572, 0.24416726797924612, 0.8882015249296725, 0.4734908589168679, 0.0, 0.6010533245556287, 0.10449699289229086, 0.6037870806764625, 0.0, 0.0522041170761608, 0.8024731726060429, 0.12131790023739622, 0.47577199080928667, 0.44858497899759875, 0.038707102952913006, nan, 0.1414826837710464, 0.42720162129381883, 0.43218883327484625, 0.005164878823996822, 0.8886286814206171, 0.7396195316490108, 0.952706951959097, 0.011655776057680246, 0.24503522596165647, 0.3835704565398948, 0.0] | | 0.0616 | 18.5 | 7400 | 0.6177 | 0.5082 | 0.4272 | 0.8887 | [nan, 0.9388723599691342, 0.9564944313754319, 0.5251226635514019, 0.8417103211148066, 0.6482573931295971, nan, 0.7321895483979944, 0.8855861839920293, 0.2417250093210158, 0.9506753528629689, 0.5459990121017535, 0.0, 0.656656346749226, 0.11275066212637155, 0.8765912190686498, 0.0, 0.07320713219699945, 0.9230813488667519, 0.11395056209539893, 0.703570900866502, 0.5234722511549255, 0.043466115425442764, nan, 0.1751201427982974, 0.5677919087245512, 0.4888879041013937, 0.00040290088638195, 0.9391572478144832, 0.8977247029883181, 0.9766107386702634, 0.018289713622611795, 
0.4217114755430917, 0.4846827041793997, 0.0] | [nan, 0.8641564182971058, 0.8921133993393542, 0.4501424016407233, 0.7647378890792713, 0.4769587373086239, nan, 0.6209624017506187, 0.6859163987138264, 0.20884410959394406, 0.8903311694707657, 0.45434149683164926, 0.0, 0.5354933726067747, 0.07164035579774021, 0.6122940826221327, 0.0, 0.06951938138690669, 0.8003213370838211, 0.09716584900998836, 0.4828652554046836, 0.45382137270368395, 0.04121417598135297, nan, 0.13381035314854062, 0.43221966358833797, 0.42342013855571975, 0.00040160642570281126, 0.8881950211846364, 0.7398417591158966, 0.9530845970447974, 0.014810386777414213, 0.2365547272188405, 0.37402163767775426, 0.0] | | 0.0611 | 18.75 | 7500 | 0.6099 | 0.5177 | 0.4324 | 0.8902 | [nan, 0.9345079533755389, 0.9638643589649342, 0.5356553738317757, 0.8422997643013702, 0.6257334001805861, nan, 0.7471220088972541, 0.8814537173221996, 0.2763370479307345, 0.9466207360377004, 0.6049436074750967, 0.0, 0.7059855521155831, 0.14970361962416445, 0.8782149119958433, 0.0, 0.0958028958186055, 0.9234898906602255, 0.14089637245649764, 0.6854742792438918, 0.5173606430820885, 0.04232080004469523, nan, 0.19343677056158176, 0.5813811692050034, 0.5071015488245331, 0.00040290088638195, 0.9400356746670351, 0.8951641148114238, 0.9764509546423178, 0.03372756848605413, 0.4723729399093662, 0.4701335776577261, 0.0] | [nan, 0.8647971283970989, 0.8977857991553266, 0.4345779290016539, 0.7684148484664771, 0.4855945598832977, nan, 0.6259089780170273, 0.686933822387541, 0.2366516479228013, 0.8888089337936385, 0.48289741736216074, 0.0, 0.5985650538104821, 0.061681563084597796, 0.6094675222969052, 0.0, 0.09345866005976859, 0.7993214394154491, 0.11438556403104944, 0.4762232900770807, 0.45242021144786737, 0.04009209272785011, nan, 0.14212501513256123, 0.43339055459103054, 0.4277836968915307, 0.00040032025620496394, 0.8873505568836287, 0.7422385564869821, 0.9528040989243474, 0.029041136219678652, 0.23652292476444373, 0.3661642120469451, 0.0] | | 0.0526 | 19.0 | 7600 | 0.6228 | 0.5108 | 0.4297 | 0.8909 | [nan, 0.9405315503656566, 0.9623814025398809, 0.5330642523364486, 0.8317861268903274, 0.6622725273804787, nan, 0.7263120519701678, 0.8674004839398396, 0.27552922656282364, 0.9455175897361646, 0.5819338108174859, 0.0, 0.6111971104231166, 0.16710808424769832, 0.8864145612781711, 0.0, 0.0827900400596968, 0.930233313789279, 0.11843739134753886, 0.6995346374019279, 0.5042107294717365, 0.042153192915805354, nan, 0.18371550185363175, 0.5630920605013869, 0.5005871795439941, 0.0056406124093473006, 0.9407823912509976, 0.8985265242187241, 0.9751204970628252, 0.012990074184591156, 0.42681216850576115, 0.4687243361620586, 0.0] | [nan, 0.8642299686902748, 0.8983701844671692, 0.4505770666371748, 0.7744797343632894, 0.49247659714013137, nan, 0.623426329007179, 0.696151825084343, 0.23867367627796818, 0.8898312419634539, 0.48430193720774883, 0.0, 0.5244863620262132, 0.07708866651151966, 0.5993412927130506, 0.0, 0.08080962968642183, 0.7977044198782267, 0.10166926045153175, 0.47672785170429793, 0.4451483954200063, 0.04006265597621197, nan, 0.1264172335600907, 0.43160647951283304, 0.42598284151975113, 0.00554016620498615, 0.8878311660408268, 0.74270285241124, 0.9536917187049466, 0.011887351052557973, 0.24007269734586106, 0.3689853153957455, 0.0] | | 0.054 | 19.25 | 7700 | 0.6199 | 0.5112 | 0.4157 | 0.8897 | [nan, 0.9383711032345364, 0.9577791893332354, 0.532998831775701, 0.8352225138198671, 0.6740592830016223, nan, 0.7513879337239024, 0.8669212886084358, 0.21351340154935997, 0.9451751851979368, 
0.5077796986910348, 0.0, 0.7028895768833849, 0.18400807163576743, 0.8914236539585634, 0.0, 0.1072709658838007, 0.9291372462420467, 0.11183132171062435, 0.6577470949582549, 0.5160479493180732, 0.04262807978099335, nan, 0.1900590416037347, 0.5664154498351389, 0.5106689415257805, 0.0012087026591458502, 0.9410463493811095, 0.8949234994980861, 0.9775344732695309, 0.011246839902192383, 0.42160986811355644, 0.47790186427705494, 0.0] | [0.0, 0.8647432445871411, 0.896112476860621, 0.45036567465468447, 0.76789556797279, 0.4910576591298745, nan, 0.6249728507663073, 0.6958387758910245, 0.19385049365303245, 0.8887827463711233, 0.4413911550021468, 0.0, 0.5792159197210647, 0.08409221902017291, 0.5936591009850886, 0.0, 0.10176353700943865, 0.7979000623472865, 0.09749989173896098, 0.46787846117983983, 0.45133395403669296, 0.04032236755185625, nan, 0.1322593590552084, 0.4340972401884397, 0.4265909006774516, 0.0011904761904761906, 0.8880726081330668, 0.743872268803543, 0.953516990645358, 0.009541850530053972, 0.23069652626428858, 0.3703797514940341, 0.0] | | 0.0671 | 19.5 | 7800 | 0.6217 | 0.5094 | 0.4146 | 0.8892 | [nan, 0.9331891438463118, 0.9574927175990591, 0.5350619158878505, 0.834028291700058, 0.6744756411977813, nan, 0.7431025597272566, 0.8738719931679082, 0.2327354074319566, 0.9446516741270925, 0.5379723388490986, 0.0, 0.669969040247678, 0.18249463992937318, 0.8913668247061116, 0.0, 0.09954703741523316, 0.9238793920053711, 0.0888259739399659, 0.6886532573187448, 0.5368212898403323, 0.03941560981060394, nan, 0.18061238500617877, 0.5652404877793479, 0.5268662338525626, 0.0060435132957292505, 0.9420171078199074, 0.9042006331836784, 0.9732816357580515, 0.009485473911061379, 0.3114064500396269, 0.49469125180868956, 0.0] | [0.0, 0.8617017485872825, 0.8957626230741332, 0.4508312580591182, 0.7683050299189929, 0.4878950714613818, nan, 0.624948812708509, 0.6911476098809349, 0.20973251451290761, 0.8882723484572987, 0.46124933827421916, 0.0, 0.5501928047798635, 0.07156988821841923, 0.5965012359764214, 0.0, 0.09680704791974334, 0.7988314631673791, 0.07901907356948229, 0.4711932405689982, 0.46080549284533756, 0.03769502030348365, nan, 0.13494050061551088, 0.43071416464770335, 0.43780380026513477, 0.005912495072920773, 0.8877312783085815, 0.7390862578001592, 0.9533931934816451, 0.008087813065948142, 0.20454363437358178, 0.3783462459982845, 0.0] | | 0.0512 | 19.75 | 7900 | 0.6300 | 0.5080 | 0.4263 | 0.8887 | [nan, 0.9391756156362827, 0.957153465687716, 0.531875, 0.8363349452907067, 0.6442373192444947, nan, 0.7406369413577534, 0.8858234094036154, 0.26463399478023114, 0.9530349257345309, 0.5036634559973656, 0.0, 0.6101651186790505, 0.1925841846386682, 0.8746996168084692, 0.0, 0.0674207315476658, 0.9178750280173988, 0.11324690806139175, 0.6909895794473874, 0.5175153479480927, 0.042963294038773116, nan, 0.2016476726623644, 0.5813497671010625, 0.5020052735370366, 0.008058017727639, 0.9412167663408764, 0.897734355178538, 0.9747767193057303, 0.01633407932363546, 0.3496514865166941, 0.49998742995692663, 0.0] | [nan, 0.8625082043880324, 0.8957494129402008, 0.43782876705742063, 0.7496431303023787, 0.48514174134060595, nan, 0.6274006504670441, 0.6871961161760971, 0.2302687309626372, 0.8882991958037961, 0.4373045513839996, 0.0, 0.5170981283890153, 0.08045310853530031, 0.6189258899694966, 0.0, 0.06474078543772313, 0.7999986290910134, 0.09763826734899257, 0.47261393142851427, 0.4453505921742053, 0.040873817370043586, nan, 0.1437999373335422, 0.43193558986563074, 0.42771380026430056, 0.007840062720501764, 0.887320160440498, 
0.7455157136812743, 0.9534156947680599, 0.013436060460141392, 0.21404224616226705, 0.3788044726196485, 0.0] | | 0.0535 | 20.0 | 8000 | 0.6326 | 0.5129 | 0.4292 | 0.8889 | [nan, 0.9375849538350132, 0.9591767441005661, 0.5300221962616822, 0.8259597228240738, 0.6596635135950806, nan, 0.7492101575548236, 0.8658110736822129, 0.2693152160404325, 0.9484445354169388, 0.5863176092862435, 0.0, 0.6744066047471621, 0.20784462101147685, 0.883142820029876, 0.0, 0.07781530646977194, 0.9271092315337143, 0.10147518998658918, 0.678314629589805, 0.497267391277709, 0.043242639253589586, nan, 0.18442949334065634, 0.576354215732454, 0.5145022268507234, 0.007252215954875101, 0.939646591781763, 0.9018448093278766, 0.9767371671098836, 0.012725869285921506, 0.41707817675628445, 0.45857891473041446, 0.0] | [nan, 0.8619435562270654, 0.8965635233177199, 0.4407369269775891, 0.7663725441548623, 0.48239880840583743, nan, 0.6305089171096815, 0.6940516487277982, 0.23291892085557667, 0.8902205646366161, 0.48581173260572985, 0.0, 0.5452649144764289, 0.09688988182726792, 0.6044686963431372, 0.0, 0.07672845562038519, 0.7962772336784573, 0.08572747363415112, 0.4690486788330029, 0.43758222088032955, 0.04117568825641708, nan, 0.13543326140878018, 0.4322105242501251, 0.4339781328847771, 0.007067137809187279, 0.8877484539815808, 0.7395098273111396, 0.9530623665306688, 0.010661406489721605, 0.2371072088724584, 0.3613527133617203, 0.0] | | 0.0467 | 20.25 | 8100 | 0.6268 | 0.5170 | 0.4303 | 0.8886 | [nan, 0.9395265086570245, 0.956900821509961, 0.5300023364485982, 0.8314043061203785, 0.6477819071422676, nan, 0.7464739330448017, 0.8916828770697918, 0.24499772152947513, 0.9451416993546665, 0.549950605087676, 0.0, 0.687203302373581, 0.1523521251103544, 0.8917889848671819, 0.0, 0.08004084518105412, 0.915062008738324, 0.1551515753572079, 0.6881485415176292, 0.526278382981852, 0.04472316889211688, nan, 0.18451187697377455, 0.5879677605066206, 0.549156898805699, 0.007655116841257051, 0.940224100990058, 0.9054685173132715, 0.9762965505479732, 0.02776741680135936, 0.449734804608913, 0.49033782689095345, 0.0] | [nan, 0.8644696780108341, 0.8944980656632955, 0.440104340976533, 0.7641389998117053, 0.4770745740308388, nan, 0.6297284505666034, 0.6844286473848664, 0.21773065311832707, 0.8890008282328474, 0.46004855121119775, 0.0, 0.5750680081177943, 0.06133536430566133, 0.6000371448704572, 0.0, 0.07885979620791951, 0.8006806868947128, 0.1252363801594355, 0.4706566275608475, 0.45444853884552, 0.04241284306453322, nan, 0.13328969033307544, 0.4323046138453842, 0.45063456852976475, 0.007448059584476676, 0.888463849852071, 0.7450400534159003, 0.9535229169698916, 0.021638336996913712, 0.23653075402126864, 0.371412309599829, 0.0] | | 0.0566 | 20.5 | 8200 | 0.6333 | 0.5121 | 0.4287 | 0.8890 | [nan, 0.9382327153916955, 0.9575874232706021, 0.5340771028037383, 0.8342787755625269, 0.6541523107263972, nan, 0.7406429739787204, 0.8870285144944726, 0.2079415054476159, 0.9479172512933317, 0.5500535111550177, 0.0, 0.7218266253869969, 0.17152226005801488, 0.8854728193803988, 0.0, 0.06920116251669153, 0.9246219694901651, 0.12077186708389212, 0.6759797704055135, 0.5097310892447952, 0.045561204536566285, nan, 0.1750377591651792, 0.5736405505835558, 0.5156101127827879, 0.00684931506849315, 0.9398823262828916, 0.9029458484550981, 0.9765633952545758, 0.017017903767251024, 0.4133390233493873, 0.48943837047548283, 0.0] | [nan, 0.8643736263008805, 0.8951902105356352, 0.44089650982245326, 0.7609522214327652, 0.4848458703216258, nan, 0.6265179780801705, 0.6811413623628766, 
0.1878590542487696, 0.887796763348636, 0.46558542236468475, 0.0, 0.5934331650617232, 0.06971498872257535, 0.6047629609093429, 0.0, 0.06810626948746361, 0.7983954196511591, 0.10178182731484066, 0.4720678124715856, 0.44954610542241913, 0.0431413003227001, nan, 0.12741374485267662, 0.432512153928718, 0.4367328553732968, 0.006685017695635077, 0.8879940574069723, 0.7494547941207608, 0.9536808104413358, 0.013580974233357105, 0.23932508912918143, 0.374424364423531, 0.0] | | 0.0445 | 20.75 | 8300 | 0.6446 | 0.5134 | 0.4274 | 0.8856 | [nan, 0.9405399334753671, 0.9458917035764169, 0.5273960280373832, 0.8282526135651365, 0.6846166732980127, nan, 0.7372879749180856, 0.8847701285761731, 0.2182567629147852, 0.9486374327394391, 0.565180703054252, 0.0, 0.6657378740970072, 0.14856854584436877, 0.8831509384945119, 0.0, 0.06705417223051345, 0.9206841150299712, 0.12586301097700292, 0.6806553405515008, 0.5199094440427905, 0.04444382367730041, nan, 0.17805849237951393, 0.5833280996493432, 0.5248720391748466, 0.007252215954875101, 0.9356924613611799, 0.9010464353082633, 0.9759161892423923, 0.023617845745783083, 0.4449998983925705, 0.5172488924395381, 0.0] | [nan, 0.8666434932726657, 0.8860462410088557, 0.4516813574923211, 0.7742782740775649, 0.4555874524449895, nan, 0.6267926037830955, 0.6896407624091181, 0.1957204153277486, 0.8882182070612508, 0.46149838666308146, 0.0, 0.5469962267350659, 0.06421718273004798, 0.6011771207515888, 0.0, 0.06543011164763292, 0.79986647852113, 0.10526898843730527, 0.4713830230218466, 0.45188595346756627, 0.04203767801939388, nan, 0.1276553855846278, 0.42972506139948413, 0.441923808813104, 0.007075471698113208, 0.8884781477624152, 0.7456781431206605, 0.9535186762124032, 0.016432559463950374, 0.2430653450400151, 0.37996353686275436, 0.0] | | 0.0523 | 21.0 | 8400 | 0.6334 | 0.5087 | 0.4256 | 0.8903 | [nan, 0.933221079502352, 0.9637948085900169, 0.5297546728971962, 0.8356436570172051, 0.6448230539257773, nan, 0.7465713167832686, 0.8749679745694359, 0.2327354074319566, 0.9465962111947419, 0.5354408495924919, 0.0, 0.6270897832817337, 0.14024467145920042, 0.8939972072481652, 0.009888751545117428, 0.05998481397114654, 0.9259419692666467, 0.10259275815824766, 0.6911110038285254, 0.5109028637249255, 0.044248282026928876, nan, 0.19286008512975422, 0.5704035170356414, 0.5006314949812767, 0.0, 0.9387582194599503, 0.9072224581646499, 0.9775237134023292, 0.011000766712254964, 0.4426019630555386, 0.48799979887931083, 0.0] | [nan, 0.8627899844290204, 0.898045292380419, 0.4429741700156492, 0.7733528050732301, 0.48122023215814036, nan, 0.6285033134107889, 0.6922586045743415, 0.2067303269489062, 0.888126363728484, 0.4555339601828019, 0.0, 0.512374046123361, 0.062230678829257376, 0.5926462119703566, 0.00044943820224719103, 0.05796624750145485, 0.8002256522783529, 0.08795100349163994, 0.4798915494731881, 0.45172247073689, 0.0420103434557751, nan, 0.13598869181318254, 0.4315342675118884, 0.4297071129707113, 0.0, 0.8889534278458562, 0.7430008362351238, 0.9537407288817968, 0.009678051537276564, 0.23964350552896518, 0.3711983987778357, 0.0] | | 0.0715 | 21.25 | 8500 | 0.6366 | 0.5151 | 0.4287 | 0.8894 | [nan, 0.9370145031789949, 0.9615540919282511, 0.5349906542056074, 0.8234293246215806, 0.6427307923986297, nan, 0.7520265297434068, 0.877506286473407, 0.2407929077426571, 0.9458038701145451, 0.5871614390384458, 0.0, 0.6843137254901961, 0.1972505990667171, 0.8854890563096707, 0.054388133498145856, 0.06252454638284502, 0.9220868993644009, 0.11473699895693637, 0.6793299129694406, 0.505244648130675, 
0.04341024638247947, nan, 0.19102018399011397, 0.5753257968283875, 0.5107132569630631, 0.0, 0.9400241164189752, 0.9050651936505135, 0.9789779094546415, 0.014533859670935389, 0.41945579060740923, 0.49523735034665384, 0.0] | [nan, 0.8636190041686136, 0.8961979040679402, 0.44008160621637177, 0.7735135302856915, 0.47552992149378714, nan, 0.6295369121222396, 0.6946632262523146, 0.2137970353477765, 0.8882677382290695, 0.4793581450054608, 0.0, 0.555406650473239, 0.08438545376065609, 0.5980720618958058, 0.002378506946321423, 0.06108823002737203, 0.7997681127577295, 0.0970839783417272, 0.47365876347968716, 0.44734126160727244, 0.041260653691952316, nan, 0.13688871396241267, 0.4310366799265186, 0.42952982613070945, 0.0, 0.8887487055026462, 0.7433844306901257, 0.9533070831491001, 0.012093141544284045, 0.23472485984284203, 0.3736148179836323, 0.0] | | 0.0856 | 21.5 | 8600 | 0.6332 | 0.5104 | 0.4282 | 0.8891 | [nan, 0.9354302285089335, 0.9598914301992207, 0.5326285046728972, 0.8348257505275104, 0.6418013774311685, nan, 0.7519851631996333, 0.8757413294112065, 0.2316790256431501, 0.9473149777460632, 0.5441672841030707, 0.0, 0.6676986584107327, 0.19119687224114013, 0.8908797168279535, 0.0, 0.05576938182389443, 0.9230974918555517, 0.1150019040050332, 0.6832652332737915, 0.5057945396840957, 0.04410860941952064, nan, 0.19250308938624194, 0.5698984665305908, 0.50395515277747, 0.0040290088638195, 0.9408126308534799, 0.8986623443239606, 0.9766785258336341, 0.01867306975009325, 0.40035359385478264, 0.4951898635172656, 0.0] | [nan, 0.8652175117062043, 0.8949487144681932, 0.4437434730009742, 0.7611759319446382, 0.47865894832193984, nan, 0.6331643341293494, 0.6931150372692965, 0.2068423485899214, 0.8889820786499946, 0.4611976486594917, 0.0, 0.5675936485656636, 0.08603859250851305, 0.595085736597217, 0.0, 0.05421502748930971, 0.799696203512091, 0.09667497111998775, 0.4707822447654798, 0.4485026865801383, 0.041887733446519526, nan, 0.13581323258742614, 0.4329091328339933, 0.42695701145109816, 0.003957261574990107, 0.8887286680634571, 0.7476012702986532, 0.953293396822863, 0.014771330218834523, 0.23667139184546263, 0.3740649694565481, 0.0] | | | 0.0426 | 22.25 | 8900 | 0.6388 | 0.5153 | 0.4321 | 0.8907 | [nan, 0.9365843032790866, 0.9619280328787767, 0.5323341121495327, 0.832118008177492, 0.6589330390083284, nan, 0.7530012289310712, 0.8876025999905109, 0.2356145656406645, 0.9495151391383951, 0.5967728657281633, 0.0, 0.6851909184726522, 0.16698196493883213, 0.8856433071377541, 0.0, 0.046160291152829054, 0.9249913955800083, 0.14087981589099158, 0.6780864102710397, 0.5070796622838727, 0.043214704732107936, nan, 0.19390361114925167, 0.577557963050191, 0.5263122908865303, 0.009266720386784852, 0.9401577082628303, 0.9045005405226523, 0.9759350190099954, 0.014261884039951924, 0.44343514397772765, 0.48190053464583205, 0.0] | [nan, 0.8638275353000382, 0.8975929370440341, 0.44847327680807825, 0.7680456934961463, 0.4896127563059361, nan, 0.6344922288860472, 0.6906430201049919, 0.21071058091286307, 0.8908914064913077, 0.4893922260291313, 0.0, 0.5741773684438103, 0.0915502696722445, 0.6133303348044865, 0.0, 0.045543787135107205, 0.799706519605589, 0.11493135050077327, 0.47303106132662764, 0.44896719237169413, 0.04119511090991399, nan, 0.13769769301273427, 0.43323479414732197, 0.4435750434181777, 0.008966861598440545, 0.8892865533176849, 0.7464162172003368, 0.9537521470921787, 0.012501163611760084, 0.24370386088743454, 0.37164396457569027, 0.0] | | 0.0544 | 22.5 | 9000 | 0.6275 | 0.5126 | 0.4297 | 0.8902 | [nan, 0.9362912936349177, 
0.962198079008307, 0.5305654205607476, 0.829452734049054, 0.6501778145136554, nan, 0.7606583485441561, 0.8785880343502396, 0.2379137495339492, 0.9477460490242178, 0.5748332921709064, 0.0, 0.6779153766769865, 0.15399167612561482, 0.8968792621939339, 0.0, 0.062053255832220565, 0.9268894385323623, 0.11712114438980778, 0.6830882170073133, 0.515366328868847, 0.046119894966199226, nan, 0.1939585335713305, 0.5666535824566913, 0.5097161596242051, 0.0064464141821112, 0.9399919952412273, 0.8983810519232679, 0.9745475341343337, 0.015694289029798168, 0.43490011989676686, 0.47604289457365206, 0.0] | [nan, 0.8648796447130465, 0.8972780355218145, 0.44448663694053075, 0.7723828909831303, 0.4856595115662902, nan, 0.6367705951823552, 0.693571040656192, 0.2097133467226584, 0.8885713515050402, 0.47493538294109644, 0.0, 0.5753448653382964, 0.07485745815707191, 0.589861603519713, 0.0, 0.060925449871465295, 0.7986432258569581, 0.09907840555757864, 0.4719490094091225, 0.45171147174755927, 0.04363338442835245, nan, 0.13716960245479792, 0.4304074481173985, 0.4370060790273556, 0.00631163708086785, 0.8878797422918536, 0.748175287257327, 0.9535688641919678, 0.013234083170064194, 0.2360317635381052, 0.36728912241605793, 0.0] | | 0.0701 | 22.75 | 9100 | 0.6508 | 0.5132 | 0.4302 | 0.8902 | [nan, 0.9420095059141509, 0.9626173339520694, 0.5384521028037383, 0.8237863722622742, 0.6345902505663333, nan, 0.7493342571861443, 0.8728092233240025, 0.24462488089813164, 0.9462424874982255, 0.5649748909195687, 0.0, 0.6890092879256966, 0.18148568545844368, 0.8978859518087939, 0.0, 0.06417406331003063, 0.926905788482557, 0.10334608188877299, 0.6837845785184178, 0.5068636881640055, 0.044555561763226996, nan, 0.19329946450638474, 0.5856309206050139, 0.5353969555294587, 0.008058017727639, 0.9389002783925003, 0.9000722535382172, 0.9752872750044519, 0.01801255750341912, 0.4159604950313967, 0.4749814242696805, 0.0] | [nan, 0.8667971887550201, 0.8964523921395798, 0.43883250929953793, 0.7789739251684871, 0.4822597903246794, nan, 0.6338344499902683, 0.6949882507612449, 0.21506355392067597, 0.8897027195058894, 0.47454492022058187, 0.0, 0.5744214058332616, 0.09034404821697639, 0.5890266504761296, 0.0, 0.06334315397736083, 0.7983683031468644, 0.08797806890816708, 0.47160166966502776, 0.4468892814313033, 0.04230993686667728, nan, 0.13598253612549263, 0.43447527412791603, 0.442910823939144, 0.007836990595611285, 0.8890303591865106, 0.7479650947941834, 0.9538041433738902, 0.014260666277030976, 0.23761100470137558, 0.3677322595225377, 0.0] | | 0.0588 | 23.0 | 9200 | 0.6510 | 0.5156 | 0.4306 | 0.8898 | [nan, 0.9386450845503147, 0.9615407102293612, 0.5321039719626168, 0.8252994992682097, 0.646236577683447, nan, 0.7500099107344458, 0.8891493096740523, 0.2356145656406645, 0.948320024675765, 0.5611467852144563, 0.0, 0.7061919504643963, 0.15790137470046664, 0.8929012145223095, 0.0, 0.06268164323305318, 0.9247904360655894, 0.12226195797943674, 0.6746470281016981, 0.5158947761834156, 0.04522599027878652, nan, 0.1926953178635178, 0.5791620871931753, 0.5486694289955906, 0.014504431909750202, 0.9393220200484532, 0.9030809791181759, 0.9764800062837624, 0.014337001118985454, 0.46371598691296306, 0.476005184444432, 0.0] | [nan, 0.8636880663267268, 0.8963496684957871, 0.4393286431075093, 0.7694031519559503, 0.48618816019454364, nan, 0.6323091767222339, 0.6843731284418411, 0.20910695246148756, 0.8901931512501616, 0.4713865836791148, 0.0, 0.594294150853272, 0.07763859605605854, 0.5971841386537511, 0.0, 0.061455525606469004, 0.799169285452784, 0.10285033809898536, 
0.4708681854568623, 0.4517361674617981, 0.04280237937871778, nan, 0.1379100253532753, 0.432983014903532, 0.45285296269202635, 0.013830195927775643, 0.8892098290384068, 0.7459428984706676, 0.9536680185853351, 0.012051498108992573, 0.23353802067342136, 0.36591936147117593, 0.0] | | 0.067 | 23.25 | 9300 | 0.6275 | 0.5128 | 0.4311 | 0.8905 | [nan, 0.9372797021893622, 0.9638153118797325, 0.5312441588785046, 0.8278251787794161, 0.6422768634184979, nan, 0.7515353020360958, 0.8786212459078616, 0.24139359542648825, 0.9490656742280216, 0.5420885815427677, 0.0, 0.7038183694530443, 0.17707150964812712, 0.8822822627784633, 0.0, 0.06734218312256172, 0.9252767953435341, 0.10501829500488419, 0.6879495810858851, 0.5059293320425944, 0.04416447846248394, nan, 0.19404091720444872, 0.5719029674988224, 0.5293478983403869, 0.008058017727639, 0.9393905631474131, 0.9031768115782158, 0.9770540451989742, 0.01500269385386879, 0.4205734723322969, 0.4884174036436365, 0.0] | [nan, 0.8641485198316792, 0.897149130251509, 0.4431534355853929, 0.7712457425720085, 0.4882715323914724, nan, 0.6318488634618116, 0.69528994349434, 0.21461061083181407, 0.890398769558611, 0.46117346313448776, 0.0, 0.5855585129217824, 0.08629909644108427, 0.608788204714529, 0.0, 0.0658912742737101, 0.7992632312490636, 0.09043857647998176, 0.47160302909046053, 0.44752081120336445, 0.04198645598194131, nan, 0.13798894682367646, 0.43383933729163815, 0.44664223751121745, 0.007836990595611285, 0.8889539638268134, 0.7463182889742939, 0.9538402391601662, 0.01284986599932556, 0.2406063988095238, 0.3716953276213374, 0.0] | | 0.0513 | 23.5 | 9400 | 0.6472 | 0.5144 | 0.4306 | 0.8897 | [nan, 0.938401309042541, 0.9600648179629494, 0.5333469626168225, 0.832045261686822, 0.6450022850427629, nan, 0.7455948939896135, 0.883593490534706, 0.23551099879862464, 0.9506135691239773, 0.5523380258500041, 0.0, 0.6968524251805985, 0.18312523647370413, 0.8904413197376112, 0.0, 0.06160814808996413, 0.9256348385566595, 0.12978691700193712, 0.6801915871922148, 0.5208407367015084, 0.04416447846248394, nan, 0.1951942880681038, 0.5735463442717329, 0.5357736367463606, 0.010072522159548751, 0.9380115028759878, 0.9056712133078884, 0.9770508172388136, 0.017681006258029756, 0.4195573980369445, 0.4783152790270228, 0.0] | [nan, 0.8645788687513425, 0.8959992534632647, 0.44551363683824813, 0.7647562903055005, 0.48403962995403316, nan, 0.6342904860496079, 0.6900071507171095, 0.2094308344078099, 0.8896775711392028, 0.4683431642874594, 0.0, 0.5778034484233945, 0.08829968377523717, 0.5990191205946445, 0.0, 0.060376680693831467, 0.7987594181280973, 0.10780592458123607, 0.47080665968645763, 0.45253694794349175, 0.04196862307876085, nan, 0.13750677087363616, 0.4326699094290159, 0.44833404409174343, 0.009754194303550527, 0.8891644113783483, 0.7456061236432407, 0.9539508207140677, 0.014409173235161254, 0.23587072008774035, 0.3678274990977986, 0.0] | | 0.0514 | 23.75 | 9500 | 0.6439 | 0.5126 | 0.4298 | 0.8893 | [nan, 0.9377822895762951, 0.9605358193045652, 0.5385, 0.8340916008081545, 0.6271635536295225, nan, 0.7452691324573968, 0.884822318166722, 0.22701851775135673, 0.9488086350085531, 0.537766526714415, 0.0, 0.6666150670794634, 0.20002522386177324, 0.8838085341300254, 0.0, 0.05781164087660042, 0.9238019884436897, 0.11829666054073742, 0.6694155391023081, 0.5142496967171933, 0.043549918989887706, nan, 0.19379376630509407, 0.5833176322813628, 0.5375905696749462, 0.014101531023368252, 0.9389680151020606, 0.9049790133806934, 0.9761012589582619, 0.02082556260101952, 0.414029953870227, 0.5005852053386369, 
0.0] | [nan, 0.863411965165267, 0.894931428278196, 0.4402552004737254, 0.7611011560258087, 0.4837046157587918, nan, 0.6314089786667951, 0.6898753375504013, 0.2022476056909819, 0.8895664124405706, 0.4596777031068576, 0.0, 0.5673444293179922, 0.08523215821152193, 0.6083079089415631, 0.0, 0.056674965989886805, 0.7993862287218525, 0.09987768652804473, 0.4710007534678047, 0.450200875376809, 0.041379127295891285, nan, 0.1393342283999368, 0.4316562226473846, 0.44881423656073105, 0.013539651837524178, 0.8892954904899649, 0.7457058534465373, 0.9537927510495554, 0.016624966398544282, 0.24126375122858124, 0.37717282181124784, 0.0] | | 0.0396 | 24.0 | 9600 | 0.6535 | 0.5114 | 0.4293 | 0.8894 | [nan, 0.9355970923117436, 0.9613217787436595, 0.5374941588785047, 0.8288621111896686, 0.642493049404965, nan, 0.7527694039253403, 0.878070882952982, 0.22343510501677782, 0.9446323372316829, 0.5478719025273731, 0.0, 0.6478844169246646, 0.1983856728465128, 0.8865769305708905, 0.0, 0.07386170240620009, 0.92611209153323, 0.1052169737909568, 0.6754384809956214, 0.5089943264670923, 0.04279568690988323, nan, 0.19272277907455718, 0.5795022766525357, 0.533735126631362, 0.008058017727639, 0.9392768622420797, 0.9018779025514876, 0.9758392561919, 0.014779932860872808, 0.4110833384137048, 0.4900487159002665, 0.0] | [nan, 0.8639528354166897, 0.8950065886128323, 0.44207385913246505, 0.7660355663095111, 0.48472638815638147, nan, 0.632634318964356, 0.6931134697057083, 0.20094633110411506, 0.8905903659512103, 0.4648726053472574, 0.0, 0.5535911115030201, 0.08658556723729839, 0.604755865918694, 0.0, 0.0724857392466211, 0.7980282230680995, 0.09017126154632008, 0.4707250951496855, 0.44738482499754295, 0.04074793201585233, nan, 0.13850404578646142, 0.43285457950063133, 0.4469182529964006, 0.007840062720501764, 0.8885988668670501, 0.746866946124605, 0.9537924535842215, 0.012023161337086795, 0.24114295250810605, 0.37191019096397804, 0.0] | | 0.0572 | 24.25 | 9700 | 0.6468 | 0.5169 | 0.4312 | 0.8893 | [nan, 0.9401996856733055, 0.9583929096522826, 0.5344988317757009, 0.8275082400146594, 0.6494017622545427, nan, 0.7543103076809053, 0.8711154338852778, 0.24802187331703882, 0.9453213909924968, 0.5670947559068082, 0.0, 0.7040763673890609, 0.20204313280363223, 0.8891017730726765, 0.0, 0.06668761291336109, 0.9255172844843733, 0.1113677378764549, 0.6754443327730256, 0.5202249807001851, 0.044248282026928876, nan, 0.19305231360703007, 0.5827890301983566, 0.55261350291374, 0.014101531023368252, 0.9394324953961886, 0.9048990380903004, 0.9755035483352065, 0.0154197231547101, 0.45343331504399603, 0.47399118420979125, 0.0] | [nan, 0.863689319961114, 0.895499199129711, 0.4429491151299229, 0.765606502579043, 0.48571154804691785, nan, 0.6324972973597951, 0.6956526681114833, 0.21654760828284655, 0.8900625950293436, 0.47545424740738185, 0.0, 0.5803666368933691, 0.08725014977397745, 0.5992339680455242, 0.0, 0.06544361365913821, 0.7982999807741021, 0.09452243441114062, 0.4717078672807595, 0.4521680319629779, 0.04200588718873478, nan, 0.13927135130851676, 0.4339583670272156, 0.4507663389242337, 0.01348747591522158, 0.8884945203133995, 0.7465496843182982, 0.9537005332798949, 0.012399112712579277, 0.24028127759471044, 0.3662329926099869, 0.0] | | 0.1 | 24.5 | 9800 | 0.6434 | 0.5135 | 0.4300 | 0.8895 | [nan, 0.9377224102212196, 0.9606645248290818, 0.5361588785046729, 0.8331230894215592, 0.6375564947567199, nan, 0.7494747310743753, 0.8814869288798216, 0.23789303616554125, 0.9491298161249899, 0.5208281880299662, 0.0, 0.7291537667698659, 0.1923319460209358, 
0.8872670000649477, 0.0, 0.058754221977849345, 0.9251466166261608, 0.10029967383565953, 0.684280516653427, 0.5108906098741529, 0.04338231186099782, nan, 0.1931896196622271, 0.581302663945151, 0.5429748953047794, 0.014101531023368252, 0.939044218900316, 0.9053540699149504, 0.9762874046608516, 0.016517986655062374, 0.4174033205307972, 0.4717006430275368, 0.0] | [nan, 0.8641608155359141, 0.8958643122776131, 0.4417664033758718, 0.7644541831979321, 0.4846296892790795, nan, 0.6335999382179972, 0.6905137105945841, 0.21054850773630565, 0.8890883354259757, 0.44958072768618534, 0.0, 0.6023700925018117, 0.08546290069491146, 0.6030192343768966, 0.0, 0.057282891713891865, 0.7981027891830667, 0.08634672672073433, 0.470738722708764, 0.44815859378883993, 0.04122753457750405, nan, 0.1376066035521477, 0.4340720968586592, 0.4532255678035067, 0.01352918438345574, 0.888563607775072, 0.7458284701692807, 0.9538944088343424, 0.01350879014029907, 0.2349899322716456, 0.3667384437299315, 0.0] | | 0.0547 | 24.75 | 9900 | 0.6482 | 0.5155 | 0.4313 | 0.8898 | [nan, 0.9397340904212859, 0.9603330836947732, 0.5307733644859813, 0.8309005858255233, 0.6429241895489165, nan, 0.7515697741559071, 0.8821369265075675, 0.23520029827250508, 0.948613379528076, 0.5628961883592657, 0.0, 0.7383384932920537, 0.19170134947660486, 0.8888176268104176, 0.0, 0.06747309716440185, 0.9241314709843229, 0.1176757893342605, 0.6804680836745651, 0.509839842170402, 0.04290742499580982, nan, 0.19313469724014828, 0.5775631967341812, 0.5366821032106535, 0.009669621273166801, 0.9403802717370998, 0.9035215326574961, 0.9734618635336802, 0.012358054623067678, 0.41701721229856326, 0.48626373626373626, 0.0] | [nan, 0.8640778611527823, 0.8958137823018933, 0.4460626314967881, 0.7641756445447411, 0.4858917928580605, nan, 0.6328187132466054, 0.6908867956078256, 0.20850548118768247, 0.8893168906380365, 0.47044860327507915, 0.0, 0.6030682345007797, 0.08536927829261444, 0.6011740028114567, 0.0, 0.06583048076431819, 0.7992350659678636, 0.09887388797306791, 0.4713607906006725, 0.44755617108819296, 0.040873892333484124, nan, 0.13801020408163264, 0.4335135793399971, 0.45185060816356987, 0.0093603744149766, 0.8886009280250379, 0.7464543006342957, 0.9536265277974683, 0.010431767147039596, 0.2352570275599578, 0.3719794479055262, 0.0] | | 0.0627 | 25.0 | 10000 | 0.6463 | 0.5168 | 0.4317 | 0.8895 | [nan, 0.9354022848098984, 0.9601675641402632, 0.5369719626168225, 0.8337939300328185, 0.6403441237446122, nan, 0.7582108280375539, 0.8834986003700717, 0.24187000289987157, 0.948116751458167, 0.5520704700749156, 0.0, 0.7381320949432405, 0.19649388321352, 0.888963759173865, 0.0, 0.07624433796769041, 0.9231866922167408, 0.1182221559959602, 0.6801081993642044, 0.5121910497873957, 0.04447175819878205, nan, 0.19406837841548813, 0.5788088135238394, 0.5379894086104895, 0.008460918614020952, 0.9391146435745414, 0.9050362370798539, 0.9765451034803329, 0.015450806083965353, 0.41939482614968804, 0.4941702933568719, 0.0] | [nan, 0.8640678937775673, 0.895377615265056, 0.442350332594235, 0.7643727945096741, 0.4849891658522591, nan, 0.6340492784936108, 0.6910083381883088, 0.21346568681218236, 0.8895978581938467, 0.46446072065520405, 0.0, 0.601404187337089, 0.08586860670194003, 0.6029780227646933, 0.0, 0.07410800631139614, 0.7995575849393181, 0.09964415294445995, 0.4716975388811325, 0.4492564945882909, 0.04216548363174065, nan, 0.13932260862707987, 0.43292556418938755, 0.4516033033256454, 0.00821917808219178, 0.8889508587805682, 0.7461158390782254, 0.954070468766836, 0.012555965083260888, 
0.23512657506778772, 0.3742610137901782, 0.0] | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 2.0.0 - Tokenizers 0.11.6
voidful/albert_chinese_small
d99b52392f291fca5a7b8df972af38019a25ddf8
2021-08-03T05:06:47.000Z
[ "pytorch", "albert", "fill-mask", "zh", "transformers", "autotrain_compatible" ]
fill-mask
false
voidful
null
voidful/albert_chinese_small
704
null
transformers
2,040
--- language: zh pipeline_tag: fill-mask widget: - text: "今天[MASK]情很好" --- # albert_chinese_small This is an albert_chinese_small model from the [brightmart/albert_zh project](https://github.com/brightmart/albert_zh): the albert_small_google_zh model converted with Hugging Face's [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py). ## Notice *Support AutoTokenizer* Since sentencepiece is not used in the albert_chinese_small model, you have to call BertTokenizer instead of AlbertTokenizer !!! We can run a MaskedLM prediction to verify that this works. 由於 albert_chinese_small 模型沒有用 sentencepiece 用AlbertTokenizer會載不進詞表,因此需要改用BertTokenizer !!! 我們可以跑MaskedLM預測來驗證這個做法是否正確 ## Justify (驗證有效性) ```python from transformers import AutoTokenizer, AlbertForMaskedLM import torch from torch.nn.functional import softmax pretrained = 'voidful/albert_chinese_small' tokenizer = AutoTokenizer.from_pretrained(pretrained) model = AlbertForMaskedLM.from_pretrained(pretrained) inputtext = "今天[MASK]情很好" maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103) # 103 is the [MASK] token id in this vocabulary input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1 outputs = model(input_ids, labels=input_ids) loss, prediction_scores = outputs[:2] logit_prob = softmax(prediction_scores[0, maskpos], dim=-1).data.tolist() predicted_index = torch.argmax(prediction_scores[0, maskpos]).item() predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0] print(predicted_token, logit_prob[predicted_index]) ``` Result: `感 0.6390823125839233`
jonatasgrosman/wav2vec2-large-xlsr-53-polish
de2e25d651a4cdcd690a6b39d6f7966e072ff9b2
2022-07-27T23:36:03.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "pl", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "transformers", "audio", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-large-xlsr-53-polish
703
0
transformers
2,041
--- language: pl license: apache-2.0 datasets: - common_voice - mozilla-foundation/common_voice_6_0 metrics: - wer - cer tags: - audio - automatic-speech-recognition - hf-asr-leaderboard - mozilla-foundation/common_voice_6_0 - pl - robust-speech-event - speech - xlsr-fine-tuning-week model-index: - name: XLSR Wav2Vec2 Polish by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice pl type: common_voice args: pl metrics: - name: Test WER type: wer value: 14.21 - name: Test CER type: cer value: 3.49 - name: Test WER (+LM) type: wer value: 10.98 - name: Test CER (+LM) type: cer value: 2.93 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: pl metrics: - name: Dev WER type: wer value: 33.18 - name: Dev CER type: cer value: 15.92 - name: Dev WER (+LM) type: wer value: 29.31 - name: Dev CER (+LM) type: cer value: 15.17 --- # Fine-tuned XLSR-53 large model for speech recognition in Polish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-polish") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "pl" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-polish" SAMPLES = 5 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | """CZY DRZWI BYŁY ZAMKNIĘTE?""" | PRZY DRZWI BYŁY ZAMKNIĘTE | | GDZIEŻ TU POWÓD DO WYRZUTÓW? 
| WGDZIEŻ TO POM DO WYRYDÓ | | """O TEM JEDNAK NIE BYŁO MOWY.""" | O TEM JEDNAK NIE BYŁO MOWY | | LUBIĘ GO. | LUBIĄ GO | | — TO MI NIE POMAGA. | TO MNIE NIE POMAGA | | WCIĄŻ LUDZIE WYSIADAJĄ PRZED ZAMKIEM, Z MIASTA, Z PRAGI. | WCIĄŻ LUDZIE WYSIADAJĄ PRZED ZAMKIEM Z MIASTA Z PRAGI | | ALE ON WCALE INACZEJ NIE MYŚLAŁ. | ONY MONITCENIE PONACZUŁA NA MASU | | A WY, CO TAK STOICIE? | A WY CO TAK STOICIE | | A TEN PRZYRZĄD DO CZEGO SŁUŻY? | A TEN PRZYRZĄD DO CZEGO SŁUŻY | | NA JUTRZEJSZYM KOLOKWIUM BĘDZIE PIĘĆ PYTAŃ OTWARTYCH I TEST WIELOKROTNEGO WYBORU. | NAJUTRZEJSZYM KOLOKWIUM BĘDZIE PIĘĆ PYTAŃ OTWARTYCH I TEST WIELOKROTNEGO WYBORU | ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-polish --dataset mozilla-foundation/common_voice_6_0 --config pl --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-polish --dataset speech-recognition-community-v2/dev_data --config pl --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-polish, title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}olish}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-polish}}, year={2021} } ```
KoboldAI/fairseq-dense-355M
907f5296869e6553e325c67bffc15cafa2dcf68f
2022-02-01T22:49:26.000Z
[ "pytorch", "xglm", "text-generation", "transformers" ]
text-generation
false
KoboldAI
null
KoboldAI/fairseq-dense-355M
701
2
transformers
2,042
Entry not found
jonatasgrosman/wav2vec2-large-xlsr-53-italian
a959a4f932bf7f4d6942ced49c98fc6366715eaf
2022-07-27T23:37:11.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "it", "dataset:common_voice", "dataset:mozilla-foundation/common_voice_6_0", "transformers", "audio", "hf-asr-leaderboard", "mozilla-foundation/common_voice_6_0", "robust-speech-event", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-large-xlsr-53-italian
701
5
transformers
2,043
--- language: it license: apache-2.0 datasets: - common_voice - mozilla-foundation/common_voice_6_0 metrics: - wer - cer tags: - audio - automatic-speech-recognition - hf-asr-leaderboard - it - mozilla-foundation/common_voice_6_0 - robust-speech-event - speech - xlsr-fine-tuning-week model-index: - name: XLSR Wav2Vec2 Italian by Jonatas Grosman results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice it type: common_voice args: it metrics: - name: Test WER type: wer value: 9.41 - name: Test CER type: cer value: 2.29 - name: Test WER (+LM) type: wer value: 6.91 - name: Test CER (+LM) type: cer value: 1.83 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: it metrics: - name: Dev WER type: wer value: 21.78 - name: Dev CER type: cer value: 7.94 - name: Dev WER (+LM) type: wer value: 15.82 - name: Dev CER (+LM) type: cer value: 6.83 --- # Fine-tuned XLSR-53 large model for speech recognition in Italian Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Italian using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-italian") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "it" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-italian" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | POI LEI MORÌ. | POI LEI MORÌ | | IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI. 
| IL LIBRO HA SUSCITATO MOLTE POLEMICHE A CAUSA DEI SUOI CONTENUTI | | "FIN DALL'INIZIO LA SEDE EPISCOPALE È STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE." | FIN DALL'INIZIO LA SEDE EPISCOPALE È STATA IMMEDIATAMENTE SOGGETTA ALLA SANTA SEDE | | IL VUOTO ASSOLUTO? | IL VUOTO ASSOLUTO | | DOPO ALCUNI ANNI, EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI. | DOPO ALCUNI ANNI EGLI DECISE DI TORNARE IN INDIA PER RACCOGLIERE ALTRI INSEGNAMENTI | | SALVATION SUE | SALVATION SOO | | IN QUESTO MODO, DECIO OTTENNE IL POTERE IMPERIALE. | IN QUESTO MODO DECHO OTTENNE IL POTERE IMPERIALE | | SPARTA NOVARA ACQUISISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA. | PARCANOVARACFILISCE IL TITOLO SPORTIVO PER GIOCARE IN PRIMA CATEGORIA | | IN SEGUITO, KYGO E SHEAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE. | IN SEGUITO KIGO E SHIAR HANNO PROPOSTO DI CONTINUARE A LAVORARE SULLA CANZONE | | ALAN CLARKE | ALAN CLARK | ## Evaluation 1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset mozilla-foundation/common_voice_6_0 --config it --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-italian --dataset speech-recognition-community-v2/dev_data --config it --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-italian, title={Fine-tuned {XLSR}-53 large model for speech recognition in {I}talian}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian}}, year={2021} } ```
tscholak/1zha5ono
cb38df00e41169d44053dd24405d229856fdfb06
2022-01-10T21:50:11.000Z
[ "pytorch", "t5", "text2text-generation", "en", "dataset:spider", "arxiv:2109.05093", "transformers", "text2sql", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
tscholak
null
tscholak/1zha5ono
700
null
transformers
2,044
--- language: - en thumbnail: "https://repository-images.githubusercontent.com/401779782/c2f46be5-b74b-4620-ad64-57487be3b1ab" tags: - text2sql widget: - "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id" license: "apache-2.0" datasets: - spider metrics: - spider --- ## tscholak/1zha5ono Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [t5.1.1.lm100k.base](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k). ### Training Data The model has been fine-tuned on the 7000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves Spider's zero-shot text-to-SQL translation task, and that means that it can generalize to unseen SQL databases. ### Training Objective This model was initialized with [t5.1.1.lm100k.base](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) and fine-tuned with the text-to-text generation objective. Questions are always grounded in a database schema, and the model is trained to predict the SQL query that would be used to answer the question. The input to the model is composed of the user's natural language question, the database identifier, and a list of tables and their columns: ``` [question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... ``` The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's question: ``` [db_id] | [sql] ``` ### Performance Out of the box, this model achieves 59.4 % exact-set match accuracy and 60.0 % execution accuracy on the Spider development set. Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **66.6 %** exact-set match accuracy and **68.4 %** execution accuracy on the Spider development set. ### Usage Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model. ### References 1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) 2. [Official PICARD code](https://github.com/ElementAI/picard) ### Citation ```bibtex @inproceedings{Scholak2021:PICARD, author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau}, title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.779", pages = "9895--9901", } ```
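For a quick, unofficial sanity check outside of the PICARD serving stack, the checkpoint can also be loaded as a plain seq2seq model with the generic Auto classes. The sketch below is an assumption-based illustration, not the official setup: the serialized schema string is a shortened, made-up example following the input format described above, and plain greedy decoding without PICARD constraints will score noticeably lower than the numbers reported.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch (not the official PICARD pipeline): plain seq2seq generation.
model_id = "tscholak/1zha5ono"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Question serialized together with a made-up, abbreviated schema string,
# following the "[question] | [db_id] | [table] : [column], ..." format.
question = "How many singers do we have?"
schema = "concert_singer | stadium : stadium_id, location, name, capacity | singer : singer_id, name, country, age"
inputs = tokenizer(f"{question} | {schema}", return_tensors="pt")

outputs = model.generate(**inputs, max_length=128)
# The model is trained to emit "[db_id] | [sql]".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```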
yuchenlin/BART0pp
ea0d45ccae6e8b4a14c5ad2bb06f23ea871f5e7c
2021-12-10T05:49:45.000Z
[ "pytorch", "bart", "text2text-generation", "en", "dataset:bigscience/P3", "transformers", "license:apache-2.0", "autotrain_compatible" ]
text2text-generation
false
yuchenlin
null
yuchenlin/BART0pp
700
5
transformers
2,045
--- datasets: - bigscience/P3 language: en license: apache-2.0 widget: - text: "A is the son's of B's uncle. What is the family relationship between A and B?" - text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old." - text: "Task: copy but say the opposite.\n PSG won its match against Barca." - text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy." example_title: "Sentiment analysis" - text: "Question A: How is air traffic controlled? \nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady. \nIn the previous sentence, decide who 'her' is referring to." example_title: "Coreference resolution" - text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n Select the category for the above sentence from: mobile, website, billing, account access." - text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences 1 and 2 have the same meaning?" example_title: "Paraphrase identification" - text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best." - text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1, LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out. Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?" - text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n Sentence A: you can leave the books on the table over there.\n Sentence B: the tables in this book are very hard to read." - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n Which book is the leftmost book?" example_title: "Logic puzzles" - text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patril.\n\n Who are the men running for mayor?" 
example_title: "Reading comprehension" - text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n Which of the following best characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places where people live." --- TBA
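While the prose section above is still marked TBA, the tags and widget prompts indicate a BART-based, T0-style zero-shot model trained on P3. A minimal usage sketch (an assumption, not official documentation from the author) with the generic seq2seq classes:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch (assumed usage, not from the original card).
model_id = "yuchenlin/BART0pp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```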
tomh/toxigen_roberta
0e65216a558feba4bb167d47e49f9a9e229de6ab
2022-05-01T19:42:09.000Z
[ "pytorch", "roberta", "text-classification", "en", "arxiv:2203.09509", "transformers" ]
text-classification
false
tomh
null
tomh/toxigen_roberta
699
null
transformers
2,046
--- language: - en tags: - text-classification --- Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar. This model comes from the paper [ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection](https://arxiv.org/abs/2203.09509) and can be used to detect implicit hate speech. Please visit the [GitHub repository](https://github.com/microsoft/TOXIGEN) for the training dataset and further details. ```bibtex @inproceedings{hartvigsen2022toxigen, title = "{T}oxi{G}en: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection", author = "Hartvigsen, Thomas and Gabriel, Saadia and Palangi, Hamid and Sap, Maarten and Ray, Dipankar and Kamar, Ece", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics", year = "2022" } ```
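A minimal usage sketch (not taken from the paper or the repository) with the standard text-classification pipeline; the example inputs are arbitrary, and the exact label names and their mapping should be checked in the model config before interpreting the scores.

```python
from transformers import pipeline

# Minimal sketch (assumed usage): score texts for implicit hate speech.
classifier = pipeline("text-classification", model="tomh/toxigen_roberta")

texts = [
    "You are a wonderful person.",
    "People like you should not be allowed in this country.",
]
# Each result is a label/score pair; see model.config.id2label for the mapping.
print(classifier(texts))
```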
gengp/gpt-2-komodoh
22a3ad22afec32f08d5057cf74b3ee7057a6833c
2022-06-17T17:06:22.000Z
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "license:gpl-3.0" ]
text-generation
false
gengp
null
gengp/gpt-2-komodoh
699
null
transformers
2,047
--- license: gpl-3.0 ---
hf-internal-testing/tiny-random-reformer
92d3924b57fe38f8c03ad579f85e7c4b3614e804
2022-04-04T13:23:05.000Z
[ "pytorch", "reformer", "transformers" ]
null
false
hf-internal-testing
null
hf-internal-testing/tiny-random-reformer
698
null
transformers
2,048
Entry not found
theainerd/Wav2Vec2-large-xlsr-hindi
abe6b3384821d1cc890f782c84e828883f3f3a3e
2021-03-29T07:14:33.000Z
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hi", "dataset:Interspeech 2021", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
theainerd
null
theainerd/Wav2Vec2-large-xlsr-hindi
697
1
transformers
2,049
--- language: hi datasets: - Interspeech 2021 metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Hindi by Shyam Sunder Kumar results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice hi type: common_voice args: hi metrics: - name: Test WER type: wer value: 72.62 --- # Wav2Vec2-Large-XLSR-53-hindi Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) hindi using the [Multilingual and code-switching ASR challenges for low resource Indian languages](https://navana-tech.github.io/IS21SS-indicASRchallenge/data.html). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "hi", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi") model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the hindi test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "hi", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi") model = Wav2Vec2ForCTC.from_pretrained("theainerd/Wav2Vec2-large-xlsr-hindi") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 72.62 % ## Training The script used for training can be found [Hindi ASR Fine Tuning Wav2Vec2](https://colab.research.google.com/drive/1m-F7et3CHT_kpFqg7UffTIwnUV9AKgrg?usp=sharing)
hfl/chinese-xlnet-mid
e103afe9441e257b0d55b78fbe1015805f384edb
2021-03-03T01:46:39.000Z
[ "pytorch", "tf", "xlnet", "text-generation", "zh", "arxiv:2004.13922", "transformers", "license:apache-2.0" ]
text-generation
false
hfl
null
hfl/chinese-xlnet-mid
695
4
transformers
2,050
--- language: - zh license: "apache-2.0" --- ## Chinese Pre-Trained XLNet This project provides an XLNet model pre-trained on Chinese text, aiming to enrich Chinese natural language processing resources and to broaden the selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model. This project is based on the official CMU/Google XLNet: https://github.com/zihangdai/xlnet You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
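A minimal loading sketch (not part of the original release notes); it assumes the checkpoint works with the generic Auto classes, which is the usual case for XLNet checkpoints on the Hub, and the example sentence is arbitrary.

```python
from transformers import AutoTokenizer, AutoModel

# Minimal sketch (assumed usage): extract hidden states from the Chinese XLNet.
pretrained = "hfl/chinese-xlnet-mid"
tokenizer = AutoTokenizer.from_pretrained(pretrained)
model = AutoModel.from_pretrained(pretrained)

inputs = tokenizer("使用语言模型来预测下一个词的概率。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```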
Helsinki-NLP/opus-mt-en-ur
aa89f9a5c095d34e13ed9a07be73357c6fc785c4
2021-01-18T08:18:58.000Z
[ "pytorch", "marian", "text2text-generation", "en", "ur", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-en-ur
694
1
transformers
2,051
--- language: - en - ur tags: - translation license: apache-2.0 --- ### eng-urd * source group: English * target group: Urdu * OPUS readme: [eng-urd](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md) * model: transformer-align * source language(s): eng * target language(s): urd * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.eng.urd | 12.1 | 0.390 | ### System Info: - hf_name: eng-urd - source_languages: eng - target_languages: urd - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-urd/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'ur'] - src_constituents: {'eng'} - tgt_constituents: {'urd'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-urd/opus-2020-06-17.test.txt - src_alpha3: eng - tgt_alpha3: urd - short_pair: en-ur - chrF2_score: 0.39 - bleu: 12.1 - brevity_penalty: 1.0 - ref_len: 12155.0 - src_name: English - tgt_name: Urdu - train_date: 2020-06-17 - src_alpha2: en - tgt_alpha2: ur - prefer_old: False - long_pair: eng-urd - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
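A minimal translation sketch (not included in the auto-generated card above); it uses the standard Marian classes from transformers, and the example sentence is arbitrary.

```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal sketch (assumed usage): translate English to Urdu.
model_name = "Helsinki-NLP/opus-mt-en-ur"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["How are you today?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```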
DaNLP/da-bert-emotion-classification
500107b8f0b4ef43f2162dff0cd733631df14f4c
2021-09-23T13:37:15.000Z
[ "pytorch", "tf", "bert", "text-classification", "da", "dataset:social media", "transformers", "emotion", "license:cc-by-sa-4.0" ]
text-classification
false
DaNLP
null
DaNLP/da-bert-emotion-classification
692
1
transformers
2,052
--- language: - da tags: - bert - pytorch - emotion license: cc-by-sa-4.0 datasets: - social media metrics: - f1 widget: - text: Jeg ejer en rød bil og det er en god bil. --- # Danish BERT for emotion classification The BERT Emotion model classifies a Danish text in one of the following class: * Glæde/Sindsro * Tillid/Accept * Forventning/Interrese * Overasket/Målløs * Vrede/Irritation * Foragt/Modvilje * Sorg/trist * Frygt/Bekymret It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data. This model should be used after detecting whether the text contains emotion or not, using the binary [BERT Emotion model](https://huggingface.co/DaNLP/da-bert-emotion-binary). See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-emotion-classification") tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-emotion-classification") ``` ## Training data The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
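Building on the loading snippet above, a minimal (unofficial) classification step could look like the sketch below; it reuses the widget sentence, and the mapping from class index to one of the eight emotion labels is read from the model config rather than hard-coded. As noted, the binary emotion model should be applied first to decide whether the text contains emotion at all.

```python
import torch

# Minimal sketch (assumed usage), continuing from the tokenizer/model loaded above.
text = "Jeg ejer en rød bil og det er en god bil."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])  # one of the eight emotion classes
```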
google/vit-huge-patch14-224-in21k
274b0d6e8a17e6ea6436338480a0ad100623115f
2022-01-28T10:24:44.000Z
[ "pytorch", "tf", "jax", "vit", "feature-extraction", "dataset:imagenet-21k", "arxiv:2010.11929", "arxiv:2006.03677", "transformers", "vision", "license:apache-2.0" ]
feature-extraction
false
google
null
google/vit-huge-patch14-224-in21k
692
1
transformers
2,053
--- license: apache-2.0 tags: - vision datasets: - imagenet-21k inference: false --- # Vision Transformer (huge-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-huge-patch14-224-in21k') model = ViTModel.from_pretrained('google/vit-huge-patch14-224-in21k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon, and the API of ViTFeatureExtractor might change. ## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). 
### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
snunlp/KR-ELECTRA-discriminator
efc5fd1b31c3213ad68e84b49a127b249281efcc
2022-05-04T06:22:51.000Z
[ "pytorch", "electra", "pretraining", "ko", "transformers" ]
null
false
snunlp
null
snunlp/KR-ELECTRA-discriminator
692
null
transformers
2,054
--- language: - "ko" --- ## KoRean based ELECTRA (KR-ELECTRA) This is a release of a Korean-specific ELECTRA model with comparable or better performances developed by the Computational Linguistics Lab at Seoul National University. Our model shows remarkable performances on tasks related to informal texts such as review documents, while still showing comparable results on other kinds of tasks. ### Released Model We pre-trained our KR-ELECTRA model following a base-scale model of [ELECTRA](https://github.com/google-research/electra). We trained the model based on Tensorflow-v1 using a v3-8 TPU of Google Cloud Platform. #### Model Details We followed the training parameters of the base-scale model of [ELECTRA](https://github.com/google-research/electra). ##### Hyperparameters | model | # of layers | embedding size | hidden size | # of heads | | ------: | ----------: | -------------: | ----------: | ---------: | | Discriminator | 12 | 768 | 768 | 12 | | Generator | 12 | 768 | 256 | 4 | ##### Pretraining | batch size | train steps | learning rates | max sequence length | generator size | | ---------: | ----------: | -------------: | ------------------: | -------------: | | 256 | 700000 | 2e-4 | 128 | 0.33333 | #### Training Dataset 34GB Korean texts including Wikipedia documents, news articles, legal texts, news comments, product reviews, and so on. These texts are balanced, consisting of the same ratios of written and spoken data. #### Vocabulary vocab size 30,000 We used morpheme-based unit tokens for our vocabulary based on the [Mecab-Ko](https://bitbucket.org/eunjeon/mecab-ko-dic/src/master/) morpheme analyzer. #### Download Link * Tensorflow-v1 model ([download](https://drive.google.com/file/d/1L_yKEDaXM_yDLwHm5QrXAncQZiMN3BBU/view?usp=sharing)) * PyTorch models on HuggingFace ```python from transformers import ElectraModel, ElectraTokenizer model = ElectraModel.from_pretrained("snunlp/KR-ELECTRA-discriminator") tokenizer = ElectraTokenizer.from_pretrained("snunlp/KR-ELECTRA-discriminator") ``` ### Finetuning We used and slightly edited the finetuning codes from [KoELECTRA](https://github.com/monologg/KoELECTRA), with additionally adjusted hyperparameters. You can download the codes and config files that we used for our model from our [github](https://github.com/snunlp/KR-ELECTRA). 
#### Experimental Results | | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) | **Korean-Hate-Speech (Dev)**<br/>(F1) | | :-------------------- | :----------------: | :--------------------: | :----------------: | :------------------: | :-----------------------: | :-------------------------: | :---------------------------: | :-----------------------------------: | | KoBERT | 89.59 | 87.92 | 81.25 | 79.62 | 81.59 | 94.85 | 51.75 / 79.15 | 66.21 | | XLM-Roberta-Base | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 | | HanBERT | 90.06 | 87.70 | 82.95 | 80.32 | 82.73 | 94.72 | 78.74 / 92.02 | 68.32 | | KoELECTRA-Base | 90.33 | 87.18 | 81.70 | 80.64 | 82.00 | 93.54 | 60.86 / 89.28 | 66.09 | | KoELECTRA-Base-v2 | 89.56 | 87.16 | 80.70 | 80.72 | 82.30 | 94.85 | 84.01 / 92.40 | 67.45 | | KoELECTRA-Base-v3 | 90.63 | **88.11** | **84.45** | 82.24 | **85.53** | 95.25 | 84.83 / **93.45** | 67.61 | | **KR-ELECTRA (ours)** | **91.168** | 87.90 | 82.05 | **82.51** | 85.41 | **95.51** | **84.93** / 93.04 | **74.50** | The baseline results are brought from [KoELECTRA](https://github.com/monologg/KoELECTRA)'s. ### Citation ```bibtex @misc{kr-electra, author = {Lee, Sangah and Hyopil Shin}, title = {KR-ELECTRA: a KoRean-based ELECTRA model}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/snunlp/KR-ELECTRA}} } ```
uer/gpt2-distil-chinese-cluecorpussmall
a74565bf920f47043b84b62d6444e4a55a74a574
2022-07-15T08:27:10.000Z
[ "pytorch", "tf", "jax", "gpt2", "text-generation", "zh", "dataset:CLUECorpusSmall", "transformers" ]
text-generation
false
uer
null
uer/gpt2-distil-chinese-cluecorpussmall
692
3
transformers
2,055
--- language: zh datasets: CLUECorpusSmall widget: - text: "米饭是一种用稻米与水煮成的食物" --- # Chinese GPT2-distil Model ## Model description The model is used to generate Chinese texts. You can download the model either from the [GPT2-Chinese Github page](https://github.com/Morizeyao/GPT2-Chinese), or via HuggingFace from the link [gpt2-distil-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-distil-chinese-cluecorpussmall). The model is called GPT2-distil because the configuration of model follows [distilgpt2](https://huggingface.co/distilgpt2), which has 6 layers, 768 dimension, and 12 heads. The pre-training does not involve the supervision of larger models. ## How to use You can use the model directly with a pipeline for text generation: ```python >>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline >>> tokenizer = BertTokenizer.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall") >>> model = GPT2LMHeadModel.from_pretrained("uer/gpt2-distil-chinese-cluecorpussmall") >>> text_generator = TextGenerationPipeline(model, tokenizer) >>> text_generator("这是很久之前的事情了", max_length=100, do_sample=True) [{'generated_text': '这是很久之前的事情了 。 我 现 在 想 起 来 就 让 自 己 很 伤 心 , 很 失 望 。 我 现 在 想 到 , 我 觉 得 大 多 数 人 的 生 活 比 我 的 生 命 还 要 重 要 , 对 一 些 事 情 的 看 法 , 对 一 些 人 的 看 法 , 都 是 在 发 泄 。 但 是 , 我 们 的 生 活 是 需 要 一 个 信 用 体 系 的 。 我 不 知'}] ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. ## Training procedure The model is pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 1024. Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_lm_seq128_dataset.pt \ --seq_length 128 --processes_num 32 --data_processor lm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_lm_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/gpt2/distil_config.json \ --output_model_path models/cluecorpussmall_gpt2_distil_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_lm_seq1024_dataset.pt \ --seq_length 1024 --processes_num 32 --data_processor lm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_lm_seq1024_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_gpt2_distil_seq128_model.bin-1000000 \ --config_path models/gpt2/distil_config.json \ --output_model_path models/cluecorpussmall_gpt2_distil_seq1024_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_gpt2_from_uer_to_huggingface.py --input_model_path cluecorpussmall_gpt2_distil_seq1024_model.bin-250000 \ --output_model_path pytorch_model.bin \ --layers_num 6 ``` ### BibTeX entry and citation info ``` @article{radford2019language, title={Language Models are Unsupervised Multitask Learners}, author={Radford, Alec and Wu, Jeff and Child, 
Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya}, year={2019} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ```
elozano/tweet_sentiment_eval
76eddba683fc8f13d01a2068dadcea76f0edb0fd
2022-02-07T17:50:59.000Z
[ "pytorch", "roberta", "text-classification", "en", "dataset:tweet_eval", "transformers", "license:mit" ]
text-classification
false
elozano
null
elozano/tweet_sentiment_eval
691
2
transformers
2,056
--- license: mit datasets: - tweet_eval language: en widget: - text: "I love summer!" example_title: "Positive" - text: "Does anyone want to play?" example_title: "Neutral" - text: "This movie is just awful! 😫" example_title: "Negative" ---
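The card ships only widget examples; a minimal usage sketch (assumed, not provided by the author) with the standard text-classification pipeline, reusing one of the widget inputs:

```python
from transformers import pipeline

# Minimal sketch (assumed usage): tweet sentiment classification.
classifier = pipeline("text-classification", model="elozano/tweet_sentiment_eval")
print(classifier("I love summer!"))
# The label names (e.g. negative / neutral / positive) come from the model config.
```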
schen/longformer-chinese-base-4096
f0e53c8afe22f6b7cca5d5278fda13e26951a3b6
2021-05-20T05:07:16.000Z
[ "pytorch", "jax", "bert", "pretraining", "transformers" ]
null
false
schen
null
schen/longformer-chinese-base-4096
689
4
transformers
2,057
Entry not found
speechbrain/m-ctc-t-large
ab27b818661fa5e07bd53eae8065c4bb7b671790
2022-06-05T15:41:09.000Z
[ "pytorch", "mctct", "automatic-speech-recognition", "en", "dataset:common_voice", "dataset:voxpopuli", "arxiv:2111.00161", "transformers", "speech", "license:apache-2.0" ]
automatic-speech-recognition
false
speechbrain
null
speechbrain/m-ctc-t-large
689
8
transformers
2,058
--- language: en datasets: - common_voice - voxpopuli multilinguality: - multilingual tags: - speech license: apache-2.0 --- # M-CTC-T Massively multilingual speech recognizer from Meta AI. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16 kHz audio signal. ![model image](https://raw.githubusercontent.com/cwkeam/scientific-images/main/MCTCT/mctct-arch.png) The original Flashlight code, model checkpoints, and Colab notebook can be found at https://github.com/flashlight/wav2letter/tree/main/recipes/mling_pl . ## Citation [Paper](https://arxiv.org/abs/2111.00161) Authors: Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert ``` @article{lugosch2021pseudo, title={Pseudo-Labeling for Massively Multilingual Speech Recognition}, author={Lugosch, Loren and Likhomanenko, Tatiana and Synnaeve, Gabriel and Collobert, Ronan}, journal={ICASSP}, year={2022} } ``` ## Contribution A huge thanks to [Chan Woo Kim](https://huggingface.co/cwkeam) for porting the model from Flashlight C++ to PyTorch. # Training method ![model image](https://raw.githubusercontent.com/cwkeam/scientific-images/main/MCTCT/mctct-slimipl.png) For more information on how the model was trained, please take a look at the [official paper](https://arxiv.org/abs/2111.00161). # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import MCTCTForCTC, MCTCTProcessor model = MCTCTForCTC.from_pretrained("speechbrain/m-ctc-t-large") processor = MCTCTProcessor.from_pretrained("speechbrain/m-ctc-t-large") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # feature extraction input_features = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt").input_features # retrieve logits with torch.no_grad(): logits = model(input_features).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` Results for Common Voice, averaged over all languages: *Character error rate (CER)*: | "Valid" | "Test" | |---|---| | 21.4 | 23.3 | # Questions & Help If you have questions regarding this model or need help, please consider opening a discussion or pull request on this repo and tag @lorenlugosch, @cwkeam or @patrickvonplaten
manu/lilt-infoxlm-base
8575dfa0e9ad599465896d09da012b2150d601e9
2022-03-30T14:47:15.000Z
[ "pytorch", "liltrobertalike", "fill-mask", "es", "fr", "ru", "en", "it", "dataset:iit-cdip", "transformers", "token-classification", "license:mit", "autotrain_compatible" ]
token-classification
false
manu
null
manu/lilt-infoxlm-base
687
2
transformers
2,059
--- language: - es - fr - ru - en - it tags: - token-classification - fill-mask license: mit datasets: - iit-cdip --- This model is the pretrained infoxlm checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding". Original repository: https://github.com/jpWang/LiLT To use it, it is necessary to fork the modeling and configuration files from the original repository, and load the pretrained model from the corresponding classes (LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel). They can also be preloaded with the AutoConfig/model factories as such: ```python from transformers import AutoConfig, AutoModel, AutoModelForTokenClassification, AutoTokenizer from path_to_custom_classes import ( LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel ) def patch_transformers(): AutoConfig.register("liltrobertalike", LiLTRobertaLikeConfig) AutoModel.register(LiLTRobertaLikeConfig, LiLTRobertaLikeModel) AutoModelForTokenClassification.register(LiLTRobertaLikeConfig, LiLTRobertaLikeForTokenClassification) # etc... ``` To load the model, it is then possible to use: ```python # patch_transformers() must have been executed beforehand tokenizer = AutoTokenizer.from_pretrained("microsoft/infoxlm-base") model = AutoModel.from_pretrained("manu/lilt-infoxlm-base") model = AutoModelForTokenClassification.from_pretrained("manu/lilt-infoxlm-base") # to be fine-tuned on a token classification task ```
SEBIS/code_trans_t5_large_source_code_summarization_python_multitask_finetune
368b888aa864b0546765fc126e70146e0458f8d8
2021-06-23T09:21:31.000Z
[ "pytorch", "jax", "t5", "feature-extraction", "transformers", "summarization" ]
summarization
false
SEBIS
null
SEBIS/code_trans_t5_large_source_code_summarization_python_multitask_finetune
684
2
transformers
2,060
--- tags: - summarization widget: - text: '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' --- # CodeTrans model for source code summarization python Pretrained model on programming language python using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. 
## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
speechbrain/asr-transformer-transformerlm-librispeech
586c7897e606d6a00f0513e1ae527a5824d10eac
2022-06-05T15:55:26.000Z
[ "en", "dataset:librispeech", "arxiv:2106.04624", "speechbrain", "automatic-speech-recognition", "CTC", "Attention", "Transformer", "pytorch", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
speechbrain
null
speechbrain/asr-transformer-transformerlm-librispeech
681
3
speechbrain
2,061
--- language: - en thumbnail: null tags: - automatic-speech-recognition - CTC - Attention - Transformer - pytorch - speechbrain - hf-asr-leaderboard license: apache-2.0 datasets: - librispeech metrics: - wer - cer model-index: - name: Transformer+TransformerLM by SpeechBrain results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 2.26 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 5.52 --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Transformer for LibriSpeech (with Transformer LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on LibriSpeech (EN) within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The performance of the model is the following: | Release | Test clean WER | Test other WER | GPUs | |:-------------:|:--------------:|:--------------:|:--------:| | 24-03-22 | 2.26 | 5.52 | 1xA100 40GB | ## Pipeline description This ASR system is composed of 3 different but linked blocks: - Tokenizer (unigram) that transforms words into subword units and trained with the train transcriptions of LibriSpeech. - Neural language model (Transformer LM) trained on the full 10M words dataset. - Acoustic model made of a transformer encoder and a joint decoder with CTC + transformer. Hence, the decoding also incorporates the CTC probabilities. The system is trained with recordings sampled at 16kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed. ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Transcribing your own audio files (in English) ```python from speechbrain.pretrained import EncoderDecoderASR asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-transformerlm-librispeech", savedir="pretrained_models/asr-transformer-transformerlm-librispeech") asr_model.transcribe_file("speechbrain/asr-transformer-transformerlm-librispeech/example.wav") ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ## Parallel Inference on a Batch Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model. ### Training The model was trained with SpeechBrain (Commit hash: 'f73fcc35'). To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. 
Run Training: ```bash cd recipes/LibriSpeech/ASR/transformer python train.py hparams/transformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1ZudxqMWb8VNCJKvY2Ws5oNY3WI1To0I7?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
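The "Inference on GPU" note earlier in this card can be made concrete with a short sketch (an illustration, not part of the original recipe): `run_opts` selects the device when loading the pretrained pipeline, and transcription then works exactly as in the CPU example.

```python
# Sketch of GPU inference, following the run_opts note in the card above.
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-transformer-transformerlm-librispeech",
    savedir="pretrained_models/asr-transformer-transformerlm-librispeech",
    run_opts={"device": "cuda"},  # drop this argument to stay on the CPU
)
print(asr_model.transcribe_file(
    "speechbrain/asr-transformer-transformerlm-librispeech/example.wav"
))
```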
uer/chinese_roberta_L-4_H-768
89a95a918d125753e32c32d2b9061455b1d4c5ac
2022-07-15T08:12:40.000Z
[ "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "dataset:CLUECorpusSmall", "arxiv:1909.05658", "arxiv:1908.08962", "transformers", "autotrain_compatible" ]
fill-mask
false
uer
null
uer/chinese_roberta_L-4_H-768
681
null
transformers
2,062
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese RoBERTa Miniatures ## Model description This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details. You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below: | | H=128 | H=256 | H=512 | H=768 | | -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: | | **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] | | **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] | | **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] | | **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] | | **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] | | **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] | Here are scores on the devlopment set of six Chinese tasks: | Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) | | -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: | | RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 | | RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 | | RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 | | RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 | | RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 | For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128: - epochs: 3, 5, 8 - batch sizes: 32, 64 - learning rates: 3e-5, 1e-4, 3e-4 ## How to use You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium): ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512') >>> unmasker("中国的首都是[MASK]京。") [ {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]', 'score': 0.8701988458633423, 'token': 1266, 'token_str': '北'}, {'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]', 'score': 0.1194809079170227, 'token': 1298, 'token_str': '南'}, {'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]', 'score': 0.0037803512532263994, 'token': 691, 'token_str': '东'}, {'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]', 'score': 0.0017127094324678183, 'token': 3249, 'token_str': '普'}, {'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]', 'score': 0.001687526935711503, 'token': 3307, 'token_str': '望'} ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512') model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = 
model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512') model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall. ## Training procedure Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. Taking the case of RoBERTa-Medium Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 \ --data_processor mlm --target mlm ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 \ --data_processor mlm --target mlm ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \ --output_model_path pytorch_model.bin \ --layers_num 8 --type mlm ``` ### BibTeX entry and citation info ``` @article{devlin2018bert, title={Bert: Pre-training of deep bidirectional transformers for language understanding}, author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1810.04805}, year={2018} } @article{liu2019roberta, title={Roberta: A robustly optimized bert pretraining approach}, author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin}, journal={arXiv preprint arXiv:1907.11692}, year={2019} } @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, 
Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128 [2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256 [2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512 [2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768 [4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128 [4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256 [4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512 [4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768 [6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128 [6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256 [6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512 [6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768 [8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128 [8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256 [8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512 [8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768 [10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128 [10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256 [10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512 [10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768 [12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128 [12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256 [12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512 [12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768
PlanTL-GOB-ES/bsc-bio-ehr-es
08d77ab94c269b7f7e53a6936b65b66434016af1
2022-04-11T11:02:25.000Z
[ "pytorch", "roberta", "fill-mask", "es", "transformers", "biomedical", "clinical", "ehr", "spanish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
false
PlanTL-GOB-ES
null
PlanTL-GOB-ES/bsc-bio-ehr-es
680
3
transformers
2,063
--- language: - es tags: - biomedical - clinical - ehr - spanish license: apache-2.0 metrics: - ppl widget: - text: "El único antecedente personal a reseñar era la <mask> arterial." - text: "Las radiologías óseas de cuerpo entero no detectan alteraciones <mask>, ni alteraciones vertebrales." - text: "En el <mask> toraco-abdómino-pélvico no se encontraron hallazgos patológicos de interés." --- # Biomedical-clinical language model for Spanish Biomedical pretrained language model for Spanish. For more details about the corpus, the pretraining and the evaluation, check the official [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). ## Tokenization and model pretraining This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a **biomedical-clinical** corpus in Spanish collected from several sources (see next section). The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 52,000 tokens. The pretraining consists of a masked language model training at the subword level following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 48 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM, using Adam optimizer with a peak learning rate of 0.0005 and an effective batch size of 2,048 sentences. ## Training corpora and preprocessing The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers, and a real-world clinical corpus collected from more than 278K clinical documents and notes. To obtain a high-quality training corpus while retaining the idiosyncrasies of the clinical language, a cleaning pipeline has been applied only to the biomedical corpora, keeping the clinical corpus uncleaned. Essentially, the cleaning operations used are: - data parsing in different formats - sentence splitting - language detection - filtering of ill-formed sentences - deduplication of repetitive contents - keep the original document boundaries Then, the biomedical corpora are concatenated and further global deduplication among the biomedical corpora has been applied. Eventually, the clinical corpus is concatenated to the cleaned biomedical corpus resulting in a medium-size biomedical-clinical corpus for Spanish composed of more than 1B tokens. The table below shows some basic statistics of the individual cleaned corpora: | Name | No. tokens | Description | |-----------------------------------------------------------------------------------------|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | [Medical crawler](https://zenodo.org/record/4561970) | 903,558,13 | Crawler of more than 3,000 URLs belonging to Spanish biomedical and health domains. | | Clinical cases misc. | 102,855,267 | A miscellany of medical content, essentially clinical cases. Note that a clinical case report is a scientific publication where medical practitioners share patient cases and it is different from a clinical note or document. 
| | EHR documents | 95,267,20 | Collection of more than 278K clinical documents, including discharge reports, clinical course notes and X-ray reports, for a total of 91M tokens. | | [Scielo](https://zenodo.org/record/2541681#.YlP1DshBwio) | 60,007,289 | Publications written in Spanish crawled from the Spanish SciELO server in 2017. | | [BARR2_background](https://temu.bsc.es/BARR2/downloads/background_set.raw_text.tar.bz2) | 24,516,442 | Biomedical Abbreviation Recognition and Resolution (BARR2) containing Spanish clinical case study sections from a variety of clinical disciplines. | | Wikipedia_life_sciences | 13,890,501 | Wikipedia articles crawled 04/01/2021 with the [Wikipedia API python library](https://pypi.org/project/Wikipedia-API/) starting from the "Ciencias\_de\_la\_vida" category up to a maximum of 5 subcategories. Multiple links to the same articles are then discarded to avoid repeating content. | | Patents | 13,463,387 | Google Patent in Medical Domain for Spain (Spanish). The accepted codes (Medical Domain) for Json files of patents are: "A61B", "A61C","A61F", "A61H", "A61K", "A61L","A61M", "A61B", "A61P". | | [EMEA](http://opus.nlpl.eu/download.php?f=EMEA/v3/moses/en-es.txt.zip) | 5,377,448 | Spanish-side documents extracted from parallel corpora made out of PDF documents from the European Medicines Agency. | | [mespen_Medline](https://zenodo.org/record/3562536#.YTt1fH2xXbR) | 4,166,077 | Spanish-side articles extracted from a collection of Spanish-English parallel corpus consisting of biomedical scientific literature. The collection of parallel resources is aggregated from the MedlinePlus source. | | PubMed | 1,858,966 | Open-access articles from the PubMed repository crawled in 2017. | ## Evaluation and results The model has been fine-tuned on three Named Entity Recognition (NER) tasks using three clinical NER datasets: - [PharmaCoNER](https://zenodo.org/record/4270158): is a track on chemical and drug mention recognition from Spanish medical texts (for more info see: https://temu.bsc.es/pharmaconer/). - [CANTEMIST](https://zenodo.org/record/3978041#.YTt5qH2xXbQ): is a shared task specifically focusing on named entity recognition of tumor morphology, in Spanish (for more info see: https://zenodo.org/record/3978041#.YTt5qH2xXbQ). - ICTUSnet: consists of 1,006 hospital discharge reports of patients admitted for stroke from 18 different Spanish hospitals. It contains more than 79,000 annotations for 51 different kinds of variables. We addressed the NER task as a token classification problem using a standard linear layer along with the BIO tagging schema. We compared our models with the general-domain Spanish [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne), the general-domain multilingual model that supports Spanish [mBERT](https://huggingface.co/bert-base-multilingual-cased), the domain-specific English model [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2), and three domain-specific models based on continual pre-training, [mBERT-Galén](https://ieeexplore.ieee.org/document/9430499), [XLM-R-Galén](https://ieeexplore.ieee.org/document/9430499) and [BETO-Galén](https://ieeexplore.ieee.org/document/9430499). 
The table below shows the F1 scores obtained: | Tasks/Models | bsc-bio-ehr-es | XLM-R-Galén | BETO-Galén | mBERT-Galén | mBERT | BioBERT | roberta-base-bne | |--------------|----------------|--------------------|--------------|--------------|--------------|--------------|------------------| | PharmaCoNER | **0.8913** | 0.8754 | 0.8537 | 0.8594 | 0.8671 | 0.8545 | 0.8474 | | CANTEMIST | **0.8340** | 0.8078 | 0.8153 | 0.8168 | 0.8116 | 0.8070 | 0.7875 | | ICTUSnet | **0.8756** | 0.8716 | 0.8498 | 0.8509 | 0.8631 | 0.8521 | 0.8677 | The fine-tuning scripts can be found in the official GitHub [repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es). ## Intended uses & limitations The model is ready-to-use only for masked language modelling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification. ## Cite To be announced soon! --- ## Funding This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL. ## Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables. Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial. En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
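For the Fill Mask use described above, a minimal sketch (not taken from the original card) with the `fill-mask` pipeline, reusing one of the widget sentences; the `<mask>` token follows the RoBERTa convention of this model:

```python
# Minimal fill-mask sketch for PlanTL-GOB-ES/bsc-bio-ehr-es (assumed usage).
from transformers import pipeline

unmasker = pipeline("fill-mask", model="PlanTL-GOB-ES/bsc-bio-ehr-es")
predictions = unmasker("El único antecedente personal a reseñar era la <mask> arterial.")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```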
SEBIS/code_trans_t5_large_code_comment_generation_java_transfer_learning_finetune
207a8be14efe3e07a84093ab914f655e09afedf6
2021-06-23T06:14:00.000Z
[ "pytorch", "jax", "t5", "feature-extraction", "transformers" ]
feature-extraction
false
SEBIS
null
SEBIS/code_trans_t5_large_code_comment_generation_java_transfer_learning_finetune
678
null
transformers
2,064
Entry not found
google/bert_uncased_L-6_H-128_A-2
cc3ddb10622cdf031e6c96cf314284cd788bc24b
2021-05-19T17:33:17.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-6_H-128_A-2
678
null
transformers
2,065
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
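As a minimal illustration (not part of the original card), this miniature can be loaded like any other BERT checkpoint, here for feature extraction; the 128-dimensional hidden size is the only visible difference from BERT-Base:

```python
# Loading sketch for the L=6, H=128 miniature (assumed standard BERT usage).
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("google/bert_uncased_L-6_H-128_A-2")
model = BertModel.from_pretrained("google/bert_uncased_L-6_H-128_A-2")

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 128)
```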
sentence-transformers/use-cmlm-multilingual
2528c966cf3b3d2e504d4209c8c688a63d77729f
2022-06-15T20:44:55.000Z
[ "pytorch", "tf", "bert", "feature-extraction", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/use-cmlm-multilingual
677
2
sentence-transformers
2,066
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 --- # use-cmlm-multilingual This is a PyTorch version of the [universal-sentence-encoder-cmlm/multilingual-base-br](https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-base-br/1) model. It can be used to map 109 languages to a shared vector space. As the model is based on [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), it performs quite comparably on downstream tasks. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/use-cmlm-multilingual') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/LaBSE) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors Have a look at [universal-sentence-encoder-cmlm/multilingual-base-br](https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-base-br/1) for the respective publication that describes this model.
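The architecture above (mean pooling over token embeddings followed by normalization) suggests the following plain-`transformers` sketch; this is an assumed equivalent of `model.encode`, not code from the original card:

```python
# Assumed plain-transformers equivalent: mean pooling + L2 normalization.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/use-cmlm-multilingual")
model = AutoModel.from_pretrained("sentence-transformers/use-cmlm-multilingual")

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state

# Mean pooling that ignores padding tokens, then normalize to unit length
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)
```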
sshleifer/student_marian_en_ro_6_1
b674385d132cfc2dc92b51f7e2de78f3f2610db0
2020-08-26T23:33:54.000Z
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
sshleifer
null
sshleifer/student_marian_en_ro_6_1
676
null
transformers
2,067
Entry not found
has-abi/bert-finetuned-resumes-sections
b9a6a9c36f3e6af01a45ab15b6f24b772954e288
2022-05-31T04:07:31.000Z
[ "pytorch", "tensorboard", "bert", "text-classification", "transformers", "generated_from_trainer", "license:mit", "model-index" ]
text-classification
false
has-abi
null
has-abi/bert-finetuned-resumes-sections
675
1
transformers
2,068
--- license: mit tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: bert-finetuned-resumes-sections results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-resumes-sections This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0333 - F1: 0.9548 - Roc Auc: 0.9732 - Accuracy: 0.9493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.1659 | 1.0 | 601 | 0.0645 | 0.9201 | 0.9434 | 0.8910 | | 0.055 | 2.0 | 1202 | 0.0426 | 0.9407 | 0.9633 | 0.9309 | | 0.0324 | 3.0 | 1803 | 0.0371 | 0.9450 | 0.9663 | 0.9368 | | 0.0226 | 4.0 | 2404 | 0.0389 | 0.9402 | 0.9651 | 0.9343 | | 0.0125 | 5.0 | 3005 | 0.0354 | 0.9433 | 0.9650 | 0.9343 | | 0.0091 | 6.0 | 3606 | 0.0364 | 0.9482 | 0.9696 | 0.9434 | | 0.0075 | 7.0 | 4207 | 0.0363 | 0.9464 | 0.9676 | 0.9393 | | 0.007 | 8.0 | 4808 | 0.0333 | 0.9548 | 0.9732 | 0.9493 | | 0.0063 | 9.0 | 5409 | 0.0358 | 0.9501 | 0.9698 | 0.9434 | | 0.0043 | 10.0 | 6010 | 0.0380 | 0.9475 | 0.9707 | 0.9443 | | 0.0032 | 11.0 | 6611 | 0.0377 | 0.9491 | 0.9712 | 0.9468 | | 0.0031 | 12.0 | 7212 | 0.0375 | 0.9500 | 0.9716 | 0.9459 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
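The hyperparameters listed above can be restated as a `TrainingArguments` sketch; this is a hypothetical reconstruction only (the dataset, preprocessing and classification head are not documented in this card):

```python
# Hypothetical reconstruction of the listed training hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-resumes-sections",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=12,
)
```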
pysentimiento/robertuito-base-uncased
d8b419fbeb715ecf382cfcdd374bc8bb32f41ed8
2022-01-12T15:12:54.000Z
[ "pytorch", "roberta", "fill-mask", "es", "arxiv:2111.09453", "transformers", "twitter", "masked-lm", "autotrain_compatible" ]
fill-mask
false
pysentimiento
null
pysentimiento/robertuito-base-uncased
674
5
transformers
2,069
--- language: - es tags: - twitter - masked-lm --- # robertuito-base-uncased # RoBERTuito ## A pre-trained language model for social media text in Spanish [**PAPER**](https://arxiv.org/abs/2111.09453) [Github Repository](https://github.com/pysentimiento/robertuito) *RoBERTuito* is a pre-trained language model for user-generated content in Spanish, trained following RoBERTa guidelines on 500 million tweets. *RoBERTuito* comes in 3 flavors: cased, uncased, and uncased+deaccented. We tested *RoBERTuito* on a benchmark of tasks involving user-generated text in Spanish. It outperforms other pre-trained language models for this language such as *BETO*, *BERTin* and *RoBERTa-BNE*. The 4 tasks selected for evaluation were: Hate Speech Detection (using SemEval 2019 Task 5, HatEval dataset), Sentiment and Emotion Analysis (using TASS 2020 datasets), and Irony detection (using IrosVa 2019 dataset). | model | hate speech | sentiment analysis | emotion analysis | irony detection | score | |:-------------------|:----------------|:---------------------|:-------------------|:-----------------|---------:| | robertuito-uncased | 0.801 ± 0.010 | 0.707 ± 0.004 | 0.551 ± 0.011 | 0.736 ± 0.008 | 0.6987 | | robertuito-deacc | 0.798 ± 0.008 | 0.702 ± 0.004 | 0.543 ± 0.015 | 0.740 ± 0.006 | 0.6958 | | robertuito-cased | 0.790 ± 0.012 | 0.701 ± 0.012 | 0.519 ± 0.032 | 0.719 ± 0.023 | 0.6822 | | roberta-bne | 0.766 ± 0.015 | 0.669 ± 0.006 | 0.533 ± 0.011 | 0.723 ± 0.017 | 0.6726 | | bertin | 0.767 ± 0.005 | 0.665 ± 0.003 | 0.518 ± 0.012 | 0.716 ± 0.008 | 0.6666 | | beto-cased | 0.768 ± 0.012 | 0.665 ± 0.004 | 0.521 ± 0.012 | 0.706 ± 0.007 | 0.6651 | | beto-uncased | 0.757 ± 0.012 | 0.649 ± 0.005 | 0.521 ± 0.006 | 0.702 ± 0.008 | 0.6571 | We release the pre-trained models on huggingface model hub: - [RoBERTuito uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) - [RoBERTuito cased](https://huggingface.co/pysentimiento/robertuito-base-cased) - [RoBERTuito deacc](https://huggingface.co/pysentimiento/robertuito-base-deacc) ## Masked LM To test the masked LM, take into account that space is encoded inside SentencePiece's tokens. So, if you want to test ``` Este es un día<mask> ``` don't put a space between `día` and `<mask>` ## Usage **IMPORTANT -- READ THIS FIRST** *RoBERTuito* is not yet fully-integrated into `huggingface/transformers`. To use it, first install `pysentimiento` ```bash pip install pysentimiento ``` and preprocess text using `pysentimiento.preprocessing.preprocess_tweet` before feeding it into the tokenizer ```python from transformers import AutoTokenizer from pysentimiento.preprocessing import preprocess_tweet tokenizer = AutoTokenizer.from_pretrained('pysentimiento/robertuito-base-cased') text = "Esto es un tweet estoy usando #Robertuito @pysentimiento 🤣" preprocessed_text = preprocess_tweet(text, ha) tokenizer.tokenize(preprocessed_text) # ['<s>','▁Esto','▁es','▁un','▁tweet','▁estoy','▁usando','▁','▁hashtag','▁','▁ro','bert','uito','▁@usuario','▁','▁emoji','▁cara','▁revolviéndose','▁de','▁la','▁risa','▁emoji','</s>'] ``` We are working on integrating this preprocessing step into a Tokenizer within `transformers` library ## Citation If you use *RoBERTuito*, please cite our paper: ```bibtex @misc{perez2021robertuito, title={RoBERTuito: a pre-trained language model for social media text in Spanish}, author={Juan Manuel Pérez and Damián A. Furman and Laura Alonso Alemany and Franco Luque}, year={2021}, eprint={2111.09453}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
seeksery/DialoGPT-calig3
1910866d1c3cf0c7b95c91ca1afd63285a825687
2022-07-28T03:16:28.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
seeksery
null
seeksery/DialoGPT-calig3
674
null
transformers
2,070
--- tags: - conversational ---
Parth/result
b06ca90f3eda5e5ee4e1dc3a55714e7cb5ffcc00
2021-06-23T03:47:48.000Z
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
Parth
null
Parth/result
673
1
transformers
2,071
Entry not found
flax-community/gpt2-bengali
cb8fff6e5e2c459c057ce2d1a8e14fd79bb0f0a1
2021-09-25T08:06:37.000Z
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "bn", "dataset:mc4", "transformers", "license:mit" ]
text-generation
false
flax-community
null
flax-community/gpt2-bengali
673
2
transformers
2,072
--- language: bn license: mit datasets: - mc4 --- # Bengali GPT-2 Bengali GPT-2 demo. Part of the [Huggingface JAX/Flax event](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/). Also features a [finetuned](https://huggingface.co/khalidsaifullaah/bengali-lyricist-gpt2?) model on bengali song lyrics. # Model Description OpenAI GPT-2 model was proposed in [Language Models are Unsupervised Multitask Learners](https://paperswithcode.com/paper/language-models-are-unsupervised-multitask) paper .Original GPT2 model was a causal (unidirectional) transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. This model has same configuration but has been pretrained on bengali corpus of mC4(multilingual C4) dataset. The code for training the model has all been open-sourced [here](https://huggingface.co/flax-community/gpt2-bengali/tree/main). # Training Details Overall Result: ```Eval loss : 1.45, Eval Perplexity : 3.141``` Data: [mC4-bn](https://huggingface.co/datasets/mc4) Train Steps: 250k steps link 🤗 flax-community/gpt2-bengali Demo : https://huggingface.co/spaces/flax-community/Gpt2-bengali # Usage For using the model there are multiple options available. For example using the pipeline directly we can try to generate sentences. ``` from transformers import pipeline gpt2_bengali = pipeline('text-generation',model="flax-community/gpt2-bengali", tokenizer='flax-community/gpt2-bengali') ``` Similarly for using the finetuned model on bangla songs we can use following. ``` from transformers import pipeline singer = pipeline('text-generation',model="khalidsaifullaah/bengali-lyricist-gpt2", tokenizer='khalidsaifullaah/bengali-lyricist-gpt2') ``` For using on other tasks the model needs to be fine-tuned on custom datasets. Details can be found in huggingface [documentation](https://huggingface.co/transformers/training.html) # Contributors * Khalid Saifullah * Tasmiah Tahsin Mayeesha * Ritobrata Ghosh * Ibrahim Musa * M Saiful Bari ### BibTeX entry and citation info Coming soon! <!-- ```bibtex @inproceedings{..., year={2020} } ``` -->
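As a usage sketch (not from the original card), the pipeline built above can be called directly to generate text; the Bengali prompt and the sampling settings here are arbitrary:

```python
# Generation sketch mirroring the pipeline shown in the card above.
from transformers import pipeline

gpt2_bengali = pipeline(
    "text-generation",
    model="flax-community/gpt2-bengali",
    tokenizer="flax-community/gpt2-bengali",
)
print(gpt2_bengali("বাংলাদেশ", max_length=30, do_sample=True, top_k=50))
```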
nlpconnect/roberta-base-squad2-nq
e1c8537df9b745577d04e5353df510b000e2c6e8
2022-07-27T10:46:50.000Z
[ "pytorch", "jax", "roberta", "question-answering", "dataset:squad_v2", "dataset:natural_questions", "transformers", "qa", "license:apache-2.0", "model-index", "autotrain_compatible" ]
question-answering
false
nlpconnect
null
nlpconnect/roberta-base-squad2-nq
673
2
transformers
2,073
--- tags: - qa license: apache-2.0 datasets: - squad_v2 - natural_questions model-index: - name: nlpconnect/roberta-base-squad2-nq results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - name: Exact Match type: exact_match value: 80.3185 verified: true - name: F1 type: f1 value: 83.4669 verified: true - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - name: Exact Match type: exact_match value: 85.5666 verified: true - name: F1 type: f1 value: 92.1939 verified: true --- # Roberta-base-Squad2-NQ ## What is SQuAD? Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. ## The Natural Questions Dataset To help spur development in open-domain question answering, we have created the Natural Questions (NQ) corpus, along with a challenge website based on this data. The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets. ## Training First, we took the base RoBERTa model and trained it on the SQuAD 2.0 dataset for 2 epochs; we then trained it on the NQ small-answer data for 1 epoch. Total dataset size: 204416 examples from SQuAD v2 and the NQ small-answer dataset. ## Evaluation Eval Dataset: SQuAD v2 dev ``` {'exact': 80.2998399730481, 'f1': 83.4402145786235, 'total': 11873, 'HasAns_exact': 79.08232118758434, 'HasAns_f1': 85.37207619635592, 'HasAns_total': 5928, 'NoAns_exact': 81.5138772077376, 'NoAns_f1': 81.5138772077376, 'NoAns_total': 5945, 'best_exact': 80.2998399730481, 'best_exact_thresh': 0.0, 'best_f1': 83.44021457862335, 'best_f1_thresh': 0.0} ```
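A minimal inference sketch (not part of the original card) using the `question-answering` pipeline; the question and context below are arbitrary examples drawn from the SQuAD description above:

```python
# Extractive QA sketch with this checkpoint (assumed standard pipeline usage).
from transformers import pipeline

qa = pipeline("question-answering", model="nlpconnect/roberta-base-squad2-nq")
result = qa(
    question="What does SQuAD stand for?",
    context="Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset.",
)
print(result["answer"], result["score"])
```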
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment
054ef75752a9d18c8643d7adb49d2a050a70493f
2022-05-16T06:07:55.000Z
[ "pytorch", "megatron-bert", "text-classification", "zh", "transformers", "bert", "NLU", "Sentiment", "license:apache-2.0" ]
text-classification
false
IDEA-CCNL
null
IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment
673
null
transformers
2,074
--- language: - zh license: apache-2.0 tags: - bert - NLU - Sentiment inference: true widget: - text: "今天心情不好" --- # Erlangshen-MegatronBert-1.3B-Sentiment, a Chinese model and one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM). We collected 8 sentiment datasets in the Chinese domain for fine-tuning, with a total of 227,347 samples. Our model is mainly based on [MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B). ## Usage ```python from transformers import AutoModelForSequenceClassification from transformers import BertTokenizer import torch tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment') model=AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment') text='今天心情不好' output=model(torch.tensor([tokenizer.encode(text)])) print(torch.nn.functional.softmax(output.logits,dim=-1)) ``` ## Scores on downstream Chinese tasks | Model | ASAP-SENT | ASAP-ASPECT | ChnSentiCorp | | :--------: | :-----: | :----: | :-----: | | Erlangshen-Roberta-110M-Sentiment | 97.77 | 97.31 | 96.61 | | Erlangshen-Roberta-330M-Sentiment | 97.9 | 97.51 | 96.66 | | Erlangshen-MegatronBert-1.3B-Sentiment | 98.1 | 97.8 | 97 | ## Citation If you find this resource useful, please cite the following website in your paper. ``` @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
textattack/bert-base-uncased-QNLI
a63ef5bad18761ededbc04fb8e0f0a2729b1508d
2021-05-20T07:33:46.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
false
textattack
null
textattack/bert-base-uncased-QNLI
672
null
transformers
2,075
Entry not found
jy46604790/Fake-News-Bert-Detect
8b42d870a3afb7f3bc683a7df73436137fce670a
2022-04-26T04:36:13.000Z
[ "pytorch", "roberta", "text-classification", "transformers", "license:apache-2.0" ]
text-classification
false
jy46604790
null
jy46604790/Fake-News-Bert-Detect
672
1
transformers
2,076
--- license: apache-2.0 --- # Fake News Recognition ## Overview This model was trained on over 40,000 news articles from different media outlets, based on 'roberta-base'. It can give a result by simply entering the text of a news article of fewer than 500 words (the excess will be truncated automatically). LABEL_0: Fake news LABEL_1: Real news ## Quick Tutorial ### Download The Model ```python from transformers import pipeline MODEL = "jy46604790/Fake-News-Bert-Detect" clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL) ``` ### Feed Data ```python text = "Indonesian police have recaptured a U.S. citizen who escaped a week ago from an overcrowded prison on the holiday island of Bali, the jail's second breakout of foreign inmates this year. Cristian Beasley from California was rearrested on Sunday, Badung Police chief Yudith Satria Hananta said, without providing further details. Beasley was a suspect in crimes related to narcotics but had not been sentenced when he escaped from Kerobokan prison in Bali last week. The 32-year-old is believed to have cut through bars in the ceiling of his cell before scaling a perimeter wall of the prison in an area being refurbished. The Kerobokan prison, about 10 km (six miles) from the main tourist beaches in the Kuta area, often holds foreigners facing drug-related charges. Representatives of Beasley could not immediately be reached for comment. In June, an Australian, a Bulgarian, an Indian and a Malaysian tunneled to freedom about 12 meters (13 yards) under Kerobokan prison's walls. The Indian and the Bulgarian were caught soon after in neighboring East Timor, but Australian Shaun Edward Davidson and Malaysian Tee Kok King remain at large. Davidson has taunted authorities by saying he was enjoying life in various parts of the world, in purported posts on Facebook. Kerobokan has housed a number of well-known foreign drug convicts, including Australian Schappelle Corby, whose 12-1/2-year sentence for marijuana smuggling got huge media attention." ``` ### Result ```python result = clf(text) result ``` output:[{'label': 'LABEL_1', 'score': 0.9994995594024658}]
Chalponkey/DialoGPT-small-Barry
d1bc56145314f22364461aa4d83f7b06b0f6e3b6
2021-09-11T22:36:06.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
false
Chalponkey
null
Chalponkey/DialoGPT-small-Barry
671
null
transformers
2,077
--- tags: - conversational --- #help why did i feed this bot the bee movie
razent/SciFive-large-Pubmed_PMC
d6e6df3eda25df2b4cb1d869cdaac27c0616129e
2022-03-20T17:46:44.000Z
[ "pytorch", "tf", "t5", "text2text-generation", "en", "dataset:pubmed", "dataset:pmc/open_access", "arxiv:2106.03598", "transformers", "token-classification", "text-classification", "question-answering", "text-generation", "autotrain_compatible" ]
text-classification
false
razent
null
razent/SciFive-large-Pubmed_PMC
671
2
transformers
2,078
--- language: - en tags: - token-classification - text-classification - question-answering - text2text-generation - text-generation datasets: - pubmed - pmc/open_access --- # SciFive Pubmed+PMC Large ## Introduction Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598) Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_ ## How to use For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM ​ tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed_PMC") model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed_PMC") ​ sentence = "Identification of APC2 , a homologue of the adenomatous polyposis coli tumour suppressor ." text = "ncbi_ner: " + sentence + " </s>" encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ```
Luyu/co-condenser-wiki
2038532a382e4299d72fb9bc698dd4b2470d780f
2021-08-13T13:50:11.000Z
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
Luyu
null
Luyu/co-condenser-wiki
669
null
transformers
2,079
Entry not found
poom-sci/WangchanBERTa-finetuned-sentiment
b78d07121acca2cbf53d0a81739cc0d03b033902
2021-11-05T17:48:02.000Z
[ "pytorch", "tensorboard", "camembert", "text-classification", "th", "dataset:wongnai_reviews", "dataset:wisesight_sentiment", "dataset:generated_reviews_enth", "transformers", "sentiment-analysis", "license:apache-2.0" ]
text-classification
false
poom-sci
null
poom-sci/WangchanBERTa-finetuned-sentiment
669
1
transformers
2,080
--- language: - th tags: - sentiment-analysis license: apache-2.0 datasets: - wongnai_reviews - wisesight_sentiment - generated_reviews_enth widget: - text: "โอโห้ ช่องนี้เปิดโลกเรามากเลยค่ะ คือตอนช่วงหาคำตอบเรานี่อึ้งไปเลย ดูจีเนียสมากๆๆ" example_title: "Positive" - text: "เริ่มจากชายเน็ตคนหนึ่งเปิดประเด็นว่าไปพบเจ้าจุดดำลึกลับนี้กลางมหาสมุทรใน Google Maps จนนำไปสู่การเสาะหาคำตอบ และพบว่าจริง ๆ แล้วมันคืออะไรกันแน่" example_title: "Neutral" - text: "ผมเป็นคนที่ไม่มีความสุขเลยจริงๆ" example_title: "Negative" --- Created only for study :)
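For a quick try-out, a minimal sketch with the standard `transformers` text-classification pipeline (the label names returned depend on this checkpoint's own config) could look like this:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="poom-sci/WangchanBERTa-finetuned-sentiment",
)

# One of the widget examples above; the returned label/score mapping
# comes from the model's own id2label config.
print(classifier("ผมเป็นคนที่ไม่มีความสุขเลยจริงๆ"))
```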
arch0345/DialoGPT-small-joshua
7c6d443dcabf0c95c723db2aa638e904e94eacc5
2021-06-03T23:29:44.000Z
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "license:mit" ]
conversational
false
arch0345
null
arch0345/DialoGPT-small-joshua
668
null
transformers
2,081
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
Chat with the model:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generated a response while limiting the total chat history to 1000 tokens,
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
fujiki/t5-efficient-xl-en2ja
f1591edfdaf0b9fc7d011fc073612d8f7b3967c5
2022-07-04T00:49:55.000Z
[ "pytorch", "t5", "text2text-generation", "transformers", "license:afl-3.0", "autotrain_compatible" ]
text2text-generation
false
fujiki
null
fujiki/t5-efficient-xl-en2ja
668
null
transformers
2,082
--- license: afl-3.0 ---
google/multiberts-seed_1
fc707bb7657051cf7d3ac47eecfbcad470de3206
2021-11-05T22:09:07.000Z
[ "pytorch", "tf", "bert", "pretraining", "en", "arxiv:2106.16163", "arxiv:1908.08962", "transformers", "multiberts", "multiberts-seed_1", "license:apache-2.0" ]
null
false
google
null
google/multiberts-seed_1
667
null
transformers
2,083
--- language: en tags: - multiberts - multiberts-seed_1 license: apache-2.0 --- # MultiBERTs - Seed 1 MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model #1. ## Model Description This model is a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The intended uses, limitations, training data and training procedure are similar to [BERT-base uncased](https://github.com/google-research/bert). Two major differences with the original model: * We pre-trained the MultiBERTs models for 2 million steps using sequence length 512 (instead of 1 million steps using sequence length 128 then 512). * We used an alternative version of Wikipedia and Books Corpus, initially collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962). This is a best-effort reproduction, and so it is probable that differences with the original model have gone unnoticed. The performance of MultiBERTs on GLUE is oftentimes comparable to that of original BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT). See our [technical report](https://arxiv.org/abs/2106.16163) for more details. ### How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1') model = TFBertModel.from_pretrained("google/multiberts-seed_1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1') model = BertModel.from_pretrained("google/multiberts-seed_1") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ## Citation info ```bibtex @article{sellam2021multiberts, title={The MultiBERTs: BERT Reproductions for Robustness Analysis}, author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, journal={arXiv preprint arXiv:2106.16163}, year={2021} } ```
DaNLP/da-bert-tone-sentiment-polarity
2a4e7c0f815d586190c656fa5214969e01dd0639
2021-09-23T13:37:18.000Z
[ "pytorch", "tf", "bert", "text-classification", "da", "dataset:Twitter Sentiment", "dataset:Europarl Sentiment", "transformers", "sentiment", "polarity", "license:cc-by-sa-4.0" ]
text-classification
false
DaNLP
null
DaNLP/da-bert-tone-sentiment-polarity
666
2
transformers
2,084
--- language: - da tags: - bert - pytorch - sentiment - polarity license: cc-by-sa-4.0 datasets: - Twitter Sentiment - Europarl Sentiment metrics: - f1 widget: - text: Det er super godt --- # Danish BERT Tone for sentiment polarity detection The BERT Tone model detects sentiment polarity (positive, neutral or negative) in Danish texts. It has been finetuned on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO. See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-tone) for more details. Here is how to use the model: ```python from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-tone-sentiment-polarity") tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-tone-sentiment-polarity") ``` ## Training data The data used for training come from the [Twitter Sentiment](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#twitsent) and [EuroParl sentiment 2](https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2) datasets.
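Building on the loading snippet above, a minimal classification sketch (the polarity names are read from the checkpoint's own `id2label` mapping rather than hard-coded) might look like this:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-tone-sentiment-polarity")
tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-tone-sentiment-polarity")

# Classify the widget example sentence.
inputs = tokenizer("Det er super godt", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label name.
print(model.config.id2label[int(logits.argmax(dim=-1))])
```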
microsoft/beit-base-finetuned-ade-640-640
3b27791dc6f9f3278e47f226e98e2558422b8365
2022-02-22T09:06:59.000Z
[ "pytorch", "beit", "dataset:scene_parse_150", "arxiv:2106.08254", "transformers", "vision", "image-segmentation", "license:apache-2.0" ]
image-segmentation
false
microsoft
null
microsoft/beit-base-finetuned-ade-640-640
664
2
transformers
2,085
--- license: apache-2.0 tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg example_title: House - src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg example_title: Castle --- # BEiT (base-sized model, fine-tuned on ADE20k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes. ## Intended uses & limitations You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. 
### How to use

Here is how to use this model for semantic segmentation:

```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image

# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]["file"])  # the fixture dataset stores local image paths under the "file" key

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-base-finetuned-ade-640-640')

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```

Currently, both the feature extractor and model support PyTorch.

## Training data

This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.

### Pretraining

For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).

## Evaluation results

For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
  author    = {Hangbo Bao and Li Dong and Furu Wei},
  title     = {BEiT: {BERT} Pre-Training of Image Transformers},
  journal   = {CoRR},
  volume    = {abs/2106.08254},
  year      = {2021},
  url       = {https://arxiv.org/abs/2106.08254},
  archivePrefix = {arXiv},
  eprint    = {2106.08254},
  timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
aychang/bert-base-cased-trec-coarse
b3b6ddf0f7959ba9de759eeea77cce8b8d68556e
2021-05-19T12:05:27.000Z
[ "pytorch", "jax", "bert", "text-classification", "en", "dataset:trec", "transformers", "license:mit" ]
text-classification
false
aychang
null
aychang/bert-base-cased-trec-coarse
663
null
transformers
2,086
---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- trec
metrics:
---

# bert-base-cased trained on TREC 6-class task

## Model description

A simple base BERT model trained on the "trec" dataset.

## Intended uses & limitations

#### How to use

##### Transformers

```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/bert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)

results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```

##### AdaptNLP

```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/bert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]

classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```

#### Limitations and bias

This is a minimal language model trained on a benchmark dataset.

## Training data

TREC https://huggingface.co/datasets/trec

## Training procedure

Preprocessing, hardware used, hyperparameters...

#### Hardware

One V100

#### Hyperparameters and Training Args

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./models',
    num_train_epochs=2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_steps=500,
    weight_decay=0.01,
    evaluation_strategy="steps",
    logging_dir='./logs',
    save_steps=3000
)
```

## Eval results

```
{'epoch': 2.0,
 'eval_accuracy': 0.974,
 'eval_f1': array([0.98181818, 0.94444444, 1. , 0.99236641, 0.96995708, 0.98159509]),
 'eval_loss': 0.138086199760437,
 'eval_precision': array([0.98540146, 0.98837209, 1. , 0.98484848, 0.94166667, 0.97560976]),
 'eval_recall': array([0.97826087, 0.90425532, 1. , 1. , 1. , 0.98765432]),
 'eval_runtime': 1.6132,
 'eval_samples_per_second': 309.943}
```
jonatasgrosman/wav2vec2-large-xlsr-53-arabic
3a200094b44306af86f8732637089706f0277293
2022-07-27T23:35:30.000Z
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "ar", "dataset:common_voice", "dataset:arabic_speech_corpus", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
false
jonatasgrosman
null
jonatasgrosman/wav2vec2-large-xlsr-53-arabic
663
null
transformers
2,087
--- language: ar datasets: - common_voice - arabic_speech_corpus metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Arabic by Jonatas Grosman results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ar type: common_voice args: ar metrics: - name: Test WER type: wer value: 39.59 - name: Test CER type: cer value: 18.18 --- # Fine-tuned XLSR-53 large model for speech recognition in Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-arabic") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "ar" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | ألديك قلم ؟ | ألديك قلم | | ليست هناك مسافة على هذه الأرض أبعد من يوم أمس. | ليست نالك مسافة على هذه الأرض أبعد من يوم الأمس م | | إنك تكبر المشكلة. | إنك تكبر المشكلة | | يرغب أن يلتقي بك. | يرغب أن يلتقي بك | | إنهم لا يعرفون لماذا حتى. | إنهم لا يعرفون لماذا حتى | | سيسعدني مساعدتك أي وقت تحب. | سيسئدنيمساعدتك أي وقد تحب | | أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة. | أحب نظرية علمية إلي هي أن حل قتزح المكوينا بالكامل من الأمت عن المفقودة | | سأشتري له قلماً. 
| سأشتري له قلما | | أين المشكلة ؟ | أين المشكل | | وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ | ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون | ## Evaluation The model can be evaluated as follows on the Arabic test data of Common Voice. ```python import torch import re import librosa from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "ar" MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-arabic" DEVICE = "cuda" CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞", "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]", "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。", "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽", "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"] test_dataset = load_dataset("common_voice", LANG_ID, split="test") wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]" processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) model.to(DEVICE) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): with warnings.catch_warnings(): warnings.simplefilter("ignore") speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) predictions = [x.upper() for x in result["pred_strings"]] references = [x.upper() for x in result["sentence"]] print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") ``` **Test Result**: In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-05-14). Note that the table below may show different results from those already reported, this may have been caused due to some specificity of the other evaluation scripts used. 
| Model | WER | CER | | ------------- | ------------- | ------------- | | jonatasgrosman/wav2vec2-large-xlsr-53-arabic | **39.59%** | **18.18%** | | bakrianoo/sinai-voice-ar-stt | 45.30% | 21.84% | | othrif/wav2vec2-large-xlsr-arabic | 45.93% | 20.51% | | kmfoda/wav2vec2-large-xlsr-arabic | 54.14% | 26.07% | | mohammed/wav2vec2-large-xlsr-arabic | 56.11% | 26.79% | | anas/wav2vec2-large-xlsr-arabic | 62.02% | 27.09% | | elgeish/wav2vec2-large-xlsr-53-arabic | 100.00% | 100.56% | ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-arabic, title={Fine-tuned {XLSR}-53 large model for speech recognition in {A}rabic}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-arabic}}, year={2021} } ```
google/bert_uncased_L-2_H-768_A-12
18174647239b765f3d4aca187ac63f954d01d726
2021-05-19T17:29:34.000Z
[ "pytorch", "jax", "bert", "arxiv:1908.08962", "transformers", "license:apache-2.0" ]
null
false
google
null
google/bert_uncased_L-2_H-768_A-12
662
null
transformers
2,088
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
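As a starting point for fine-tuning or feature extraction, a minimal loading sketch for this particular checkpoint (assuming the repository ships the standard uncased WordPiece vocabulary) could look like this:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("google/bert_uncased_L-2_H-768_A-12")
model = BertModel.from_pretrained("google/bert_uncased_L-2_H-768_A-12")

inputs = tokenizer("Compact BERT models can be fine-tuned like the original.", return_tensors="pt")
outputs = model(**inputs)

# Hidden size is 768 for this L=2, H=768 variant.
print(outputs.last_hidden_state.shape)
```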
sentence-transformers/bert-base-nli-max-tokens
c89b0d9813cc872c23740b2e08ea6210b3c059c5
2022-06-15T22:03:57.000Z
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "arxiv:1908.10084", "sentence-transformers", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
false
sentence-transformers
null
sentence-transformers/bert-base-nli-max-tokens
659
null
sentence-transformers
2,089
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: apache-2.0 --- **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/bert-base-nli-max-tokens This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/bert-base-nli-max-tokens') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch # Max Pooling - Take the max value over time for every dimension. def max_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value return torch.max(token_embeddings, 1)[0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-max-tokens') model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-max-tokens') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = max_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-base-nli-max-tokens) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': True, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). 
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
AimB/mT5-en-kr-natural
7a0a905bf442d55d6491d918ac2d94e8bd1ba6d8
2021-04-28T12:47:22.000Z
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
false
AimB
null
AimB/mT5-en-kr-natural
658
1
transformers
2,090
You can use this model with simpletransformers.

```
!pip install simpletransformers

from simpletransformers.t5 import T5Model

model = T5Model("mt5", "AimB/mT5-en-kr-natural")

print(model.predict(["I feel good today"]))
print(model.predict(["우리집 고양이는 세상에서 제일 귀엽습니다"]))
```
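If you prefer plain `transformers`, a roughly equivalent sketch (assuming the repository ships its SentencePiece tokenizer, and passing raw text without a task prefix, mirroring the calls above) might be:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AimB/mT5-en-kr-natural")
model = AutoModelForSeq2SeqLM.from_pretrained("AimB/mT5-en-kr-natural")

inputs = tokenizer("I feel good today", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```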
nvidia/segformer-b2-finetuned-ade-512-512
39422f0171e069e930136843906418d07e563d4e
2022-07-20T09:53:33.000Z
[ "pytorch", "tf", "segformer", "dataset:scene_parse_150", "arxiv:2105.15203", "transformers", "vision", "image-segmentation", "license:apache-2.0" ]
image-segmentation
false
nvidia
null
nvidia/segformer-b2-finetuned-ade-512-512
654
null
transformers
2,091
---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
  example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
  example_title: Castle
---

# SegFormer (b2-sized) model fine-tuned on ADE20k

SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).

Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to segment an image of the COCO 2017 dataset into the 150 ADE20k classes (a short post-processing sketch follows the citation section below):

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b2-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author    = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo},
  title     = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  journal   = {CoRR},
  volume    = {abs/2105.15203},
  year      = {2021},
  url       = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint    = {2105.15203},
  timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
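Continuing from the snippet in the How to use section, the logits come out at 1/4 of the input resolution; a minimal post-processing sketch (upsampling with `torch.nn.functional.interpolate` and taking a per-pixel argmax) could look like this:

```python
import torch

# `logits` and `image` come from the snippet above.
upsampled = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL gives (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)

# One ADE20k class index per pixel.
segmentation = upsampled.argmax(dim=1)[0]
print(segmentation.shape)
```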
clarin-pl/roberta-polish-kgr10
e301525291c9e9c4047142f61a457e5df2f8492a
2021-05-20T15:22:13.000Z
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
false
clarin-pl
null
clarin-pl/roberta-polish-kgr10
653
1
transformers
2,092
# Work in Progress: Polish RoBERTa

The model has been trained for about 5% of the target training time so far. We will publish new increments as they are trained.

The model is pre-trained on the KGR10 corpus. More about the model at [CLARIN-dspace](https://huggingface.co/clarin/roberta-polish-v1).

## Usage

A minimal fill-mask example is given after the Acknowledgments section below.

## Huggingface model hub

## Acknowledgments

[CLARIN-PL and CLARIN-BIZ project](https://clarin-pl.eu/)
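A minimal fill-mask sketch (assuming the checkpoint ships its tokenizer files, and using the tokenizer's own mask token rather than hard-coding it):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="clarin-pl/roberta-polish-kgr10")

# Example sentence: "Warszawa to <mask> Polski." ("Warsaw is the <mask> of Poland.")
print(fill_mask(f"Warszawa to {fill_mask.tokenizer.mask_token} Polski."))
```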
TencentGameMate/chinese-wav2vec2-large
6e3d224a0a7e42a0dc86a66e21a2c245dbb8dfed
2022-06-24T02:11:54.000Z
[ "pytorch", "wav2vec2", "pretraining", "transformers", "license:mit" ]
null
false
TencentGameMate
null
TencentGameMate/chinese-wav2vec2-large
653
2
transformers
2,093
---
license: mit
---

Pretrained on 10k hours of the WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)

This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.

python package:
transformers==4.16.2

```python
import torch
import torch.nn.functional as F
import soundfile as sf

from transformers import (
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForPreTraining,
    Wav2Vec2Model,
)
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices

model_path = ""
wav_path = ""
mask_prob = 0.0
mask_length = 10

device = "cuda" if torch.cuda.is_available() else "cpu"

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = Wav2Vec2Model.from_pretrained(model_path)

# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)

model = model.to(device)
model = model.half()
model.eval()

wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)

# for Wav2Vec2ForPreTraining
# batch_size, raw_sequence_length = input_values.shape
# sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
# mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.0, mask_length=2)
# mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long)

with torch.no_grad():
    outputs = model(input_values)
    last_hidden_state = outputs.last_hidden_state

    # for Wav2Vec2ForPreTraining
    # outputs = model(input_values, mask_time_indices=mask_time_indices, output_hidden_states=True)
    # last_hidden_state = outputs.hidden_states[-1]
```
deepmind/multimodal-perceiver
bbfaf820b7445c1435f93813b7e17037ebea9b85
2021-12-11T13:27:24.000Z
[ "pytorch", "perceiver", "dataset:kinetics-700-2020", "arxiv:2010.10864", "arxiv:2107.14795", "transformers", "license:apache-2.0" ]
null
false
deepmind
null
deepmind/multimodal-perceiver
652
3
transformers
2,094
---
license: apache-2.0
tags:
datasets:
- kinetics-700-2020
---

# Perceiver IO for multimodal autoencoding

Perceiver IO model trained on [Kinetics-700-2020](https://arxiv.org/abs/2010.10864) for auto-encoding videos that consist of images, audio and a class label. It was introduced in the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Jaegle et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/perceiver).

The goal of multimodal autoencoding is to learn a model that can accurately reconstruct multimodal inputs in the presence of a bottleneck induced by an architecture.

Disclaimer: The team releasing Perceiver IO did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

Perceiver IO is a transformer encoder model that can be applied on any modality (text, images, audio, video, ...). The core idea is to employ the self-attention mechanism on a not-too-large set of latent vectors (e.g. 256 or 512), and only use the inputs to perform cross-attention with the latents. This allows for the time and memory requirements of the self-attention mechanism to not depend on the size of the inputs.

To decode, the authors employ so-called decoder queries, which allow to flexibly decode the final hidden states of the latents to produce outputs of arbitrary size and semantics. For multimodal autoencoding, the output contains the reconstructions of the 3 modalities: images, audio and the class label.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perceiver_architecture.jpg" alt="drawing" width="600"/>

<small> Perceiver IO architecture.</small>

As the time and memory requirements of the self-attention mechanism don't depend on the size of the inputs, the Perceiver IO authors can train the model by padding the inputs (images, audio, class label) with modality-specific embeddings and serialize all of them into a 2D input array (i.e. concatenate along the time dimension). Decoding the final hidden states of the latents is done by using queries containing Fourier-based position embeddings (for video and audio) and modality embeddings.

## Intended uses & limitations

You can use the raw model for multimodal autoencoding. Note that by masking the class label during evaluation, the auto-encoding model becomes a video classifier.

See the [model hub](https://huggingface.co/models?search=deepmind/perceiver) to look for other versions on a task that may interest you.

### How to use

We refer to the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Perceiver/Perceiver_for_Multimodal_Autoencoding.ipynb) regarding using the Perceiver for multimodal autoencoding.

## Training data

This model was trained on [Kinetics-700-2020](https://arxiv.org/abs/2010.10864), a dataset consisting of videos that belong to one of 700 classes.

## Training procedure

### Preprocessing

The authors train on 16 frames at 224x224 resolution, preprocessed into 50k 4x4 patches as well as 30k raw audio samples, patched into a total of 1920 16-dimensional vectors and one 700-dimensional one-hot representation of the class label.

### Pretraining

Hyperparameter details can be found in Appendix F of the [paper](https://arxiv.org/abs/2107.14795).

## Evaluation results

For evaluation results, we refer to table 5 of the [paper](https://arxiv.org/abs/2107.14795).
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2107-14795, author = {Andrew Jaegle and Sebastian Borgeaud and Jean{-}Baptiste Alayrac and Carl Doersch and Catalin Ionescu and David Ding and Skanda Koppula and Daniel Zoran and Andrew Brock and Evan Shelhamer and Olivier J. H{\'{e}}naff and Matthew M. Botvinick and Andrew Zisserman and Oriol Vinyals and Jo{\~{a}}o Carreira}, title = {Perceiver {IO:} {A} General Architecture for Structured Inputs {\&} Outputs}, journal = {CoRR}, volume = {abs/2107.14795}, year = {2021}, url = {https://arxiv.org/abs/2107.14795}, eprinttype = {arXiv}, eprint = {2107.14795}, timestamp = {Tue, 03 Aug 2021 14:53:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2107-14795.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
Annas/the-world-machine
2e28ca9b851c49915cf2a91a89f87c7ef9fff0fa
2021-11-23T23:26:00.000Z
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
false
Annas
null
Annas/the-world-machine
651
1
transformers
2,095
AI that knows everything about Oneshot. Created by Annas and Gwitr using OpenAI GPT-2.
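A minimal text-generation sketch (the prompt and sampling settings below are only illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Annas/the-world-machine")

print(generator("The World Machine said:", max_length=40, do_sample=True, top_p=0.9))
```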
inokufu/flaubert-base-uncased-xnli-sts-finetuned-education
e17f3c06113d6fed6f32b713b87182ebef9af58b
2022-07-26T10:59:20.000Z
[ "pytorch", "flaubert", "feature-extraction", "fr", "dataset:xnli", "dataset:stsb_multi_mt", "arxiv:1810.04805", "arxiv:1809.05053", "sentence-transformers", "sentence-similarity", "transformers", "Education", "xnli", "stsb_multi_mt" ]
sentence-similarity
false
inokufu
null
inokufu/flaubert-base-uncased-xnli-sts-finetuned-education
651
0
sentence-transformers
2,096
--- pipeline_tag: sentence-similarity language: fr tags: - sentence-similarity - transformers - Education - fr - flaubert - sentence-transformers - feature-extraction - xnli - stsb_multi_mt datasets: - xnli - stsb_multi_mt --- # inokufu/bertheo A [sentence-transformers](https://www.SBERT.net) model fine-tuned on course sentences. It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Details This model is based on the French flaubert-base-uncased pre-trained model [1, 2]. It was first fine-tuned on our learning object (LO) sentences dataset. This dataset consists of a sample of 500k sentences of course descriptions. We used standard parameter settings for fine-tuning as mentioned in the original BERT paper [3]. This allows the model to improve its performance on the target task (Masked Language Model) for domain-specific sentences. It was then fine-tuned on a natural language inference task (XNLI) [4]. This task consists in training the model to recognize relations between sentences (contradiction, neutral, implication). It was then fine-tuned on a text semantic similarity task (on STS-fr data) [5]. This task consists in training the model to estimate the similarity between two sentences. This fine-tuning process allows our model to have a semantic representation of words that is much better than the one proposed by the base model. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Apprendre le python", "Devenir expert en comptabilité"] model = SentenceTransformer('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Apprendre le python", "Devenir expert en comptabilité"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education') model = AutoModel.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts-finetuned-education') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS (fr) score: 83.05% ## Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: FlaubertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## References [1] https://hal.archives-ouvertes.fr/hal-02784776v3/document <br> [2] https://huggingface.co/flaubert/flaubert_base_uncased <br> [3] https://arxiv.org/abs/1810.04805 <br> [4] https://arxiv.org/abs/1809.05053 <br> [5] https://huggingface.co/datasets/stsb_multi_mt <br>
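To compare two course descriptions with this model, one option is the cosine similarity of their embeddings; a minimal sketch (assuming a recent `sentence-transformers` release that exposes `util.cos_sim`) follows:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("inokufu/flaubert-base-uncased-xnli-sts-finetuned-education")

embeddings = model.encode(
    ["Apprendre le python", "Devenir expert en comptabilité"],
    convert_to_tensor=True,
)

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```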
Helsinki-NLP/opus-mt-war-en
fbaf745add3ecbdbb88881ad233881aed1776174
2020-08-21T14:42:51.000Z
[ "pytorch", "marian", "text2text-generation", "war", "en", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
false
Helsinki-NLP
null
Helsinki-NLP/opus-mt-war-en
650
null
transformers
2,097
--- language: - war - en tags: - translation license: apache-2.0 --- ### war-eng * source group: Waray (Philippines) * target group: English * OPUS readme: [war-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md) * model: transformer-align * source language(s): war * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.war.eng | 12.3 | 0.308 | ### System Info: - hf_name: war-eng - source_languages: war - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/war-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['war', 'en'] - src_constituents: {'war'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/war-eng/opus-2020-06-16.test.txt - src_alpha3: war - tgt_alpha3: eng - short_pair: war-en - chrF2_score: 0.308 - bleu: 12.3 - brevity_penalty: 1.0 - ref_len: 11345.0 - src_name: Waray (Philippines) - tgt_name: English - train_date: 2020-06-16 - src_alpha2: war - tgt_alpha2: en - prefer_old: False - long_pair: war-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
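A minimal translation sketch with the standard Marian classes (the Waray example sentence is only illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-war-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Maupay nga aga."], return_tensors="pt", padding=True)
generated = model.generate(**batch)

print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```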
mrm8488/t5-base-finetuned-sarcasm-twitter
e97df79f1a218ee827917b7bda41cd368ab53765
2021-09-14T11:44:45.000Z
[ "pytorch", "t5", "text2text-generation", "en", "arxiv:1910.10683", "transformers", "autotrain_compatible" ]
text2text-generation
false
mrm8488
null
mrm8488/t5-base-finetuned-sarcasm-twitter
650
4
transformers
2,098
---
language: en
widget:
- text: "As everybody knows Trump is by far the best USA president... XD"
---

# T5-base fine-tuned for Sarcasm Detection 🙄

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on the [Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm) for the **sequence classification (as text generation)** downstream task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)

## Details of the downstream task (Sequence Classification as Text generation) - Dataset 📚

[Twitter Sarcasm Dataset](https://github.com/EducationalTestingService/sarcasm)

For Twitter, training and testing datasets are provided for the sarcasm detection task in jsonlines format.

Each line contains a JSON object with the following fields:
- ***label*** : `SARCASM` or `NOT_SARCASM` - **NOT** in test data
- ***id***: String identifier for sample. This id will be required when making submissions. - **ONLY** in test data
- ***response*** : the sarcastic response, whether a sarcastic Tweet
- ***context*** : the conversation context of the ***response***
  - Note, the context is an ordered list of dialogue, i.e., if the context contains three elements, `c1`, `c2`, `c3`, in that order, then `c2` is a reply to `c1` and `c3` is a reply to `c2`. Further, if the sarcastic response is `r`, then `r` is a reply to `c3`.

For instance, for the following training example:

`"label": "SARCASM", "response": "Did Kelly just call someone else messy? Baaaahaaahahahaha", "context": ["X is looking a First Lady should . #classact, "didn't think it was tailored enough it looked messy"]`

The response tweet, "Did Kelly..." is a reply to its immediate context "didn't think it was tailored..." which is a reply to "X is looking...". Your goal is to predict the label of the "response" while also using the context (i.e, the immediate or the full context).

***Dataset size statistics*** :

| | Train | Val | Test |
|---------|-------|------|------|
| Twitter | 4050 | 450 | 500 |

The dataset was preprocessed to convert it to a **text-to-text** format (classification as a generation task).
## Model fine-tuning 🏋️‍ The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him! ## Test set metrics 🧾 | | precision| recall | f1-score |support| |----------|----------|---------|----------|-------| | derison | 0.84 | 0.80 | 0.82 | 246 | | normal | 0.82 | 0.85 | 0.83 | 254 | | | |accuracy| | | 0.83| 500| |macro avg| 0.83| 0.83| 0.83| 500| |weighted avg| 0.83| 0.83| 0.83| 500| ## Model in Action 🚀 ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter") model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-sarcasm-twitter") def eval_conversation(text): input_ids = tokenizer.encode(text + '</s>', return_tensors='pt') output = model.generate(input_ids=input_ids, max_length=3) dec = [tokenizer.decode(ids) for ids in output] label = dec[0] return label # For similarity with the training dataset we should replace users mentions in twits for @USER token and urls for URL token. twit1 = "Trump just suspended the visa program that allowed me to move to the US to start @USER!" + " Unfortunately, I won’t be able to vote in a few months but if you can, please vote him out, " + "he's destroying what made America great in so many different ways!" twit2 = "@USER @USER @USER We have far more cases than any other country, " + "so leaving remote workers in would be disastrous. Makes Trump sense." twit3 = "My worry is that i wouldn’t be surprised if half the country actually agrees with this move..." me = "Trump doing so??? It must be a mistake... XDDD" conversation = twit1 + twit2 eval_conversation(conversation) #Output: 'derison' conversation = twit1 + twit3 eval_conversation(conversation) #Output: 'normal' conversation = twit1 + me eval_conversation(conversation) #Output: 'derison' # We will get 'normal' when sarcasm is not detected and 'derison' when detected ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
PlanTL-GOB-ES/gpt2-base-bne
44d91934b5885add0cfc7c6f922a16b5b0f853b4
2022-04-06T14:41:14.000Z
[ "pytorch", "gpt2", "text-generation", "es", "dataset:bne", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0" ]
text-generation
false
PlanTL-GOB-ES
null
PlanTL-GOB-ES/gpt2-base-bne
648
6
transformers
2,099
---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
datasets:
- "bne"
metrics:
- "ppl"
---

# GPT2-base trained with data from National Library of Spain (BNE)

## Model Description

GPT2-base-bne is a transformer-based model for the Spanish language. It is based on the [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model and has been pre-trained on the largest Spanish corpus known to date, a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.

## Training corpora and preprocessing

The [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) crawls all .es domains once a year. The training corpus consists of 59TB of WARC files from these crawls, carried out from 2009 to 2019.

To obtain a high-quality training corpus, the data was preprocessed with a pipeline of operations including, among others, sentence splitting, language detection, filtering of ill-formed sentences, and deduplication of repetitive content. Document boundaries were kept during the process. This resulted in 2TB of clean Spanish text; a further global deduplication pass over the corpus yielded 570GB of text.

Some statistics of the corpus:

| Corpora | Number of documents | Number of tokens | Size (GB) |
|---------|---------------------|------------------|-----------|
| BNE     | 201,080,084         | 135,733,450,668  | 570GB     |

## Tokenization and pre-training

The training corpus has been tokenized using a byte-level version of Byte-Pair Encoding (BPE), as used in the original [GPT-2](http://www.persagen.com/files/misc/radford2019language.pdf) model, with a vocabulary size of 50,262 tokens. Pre-training of GPT2-base-bne consists of autoregressive language-model training following the GPT-2 approach. Training lasted a total of 3 days on 16 computing nodes, each with 4 NVIDIA V100 GPUs with 16GB of VRAM.

## Evaluation and results

For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish). A minimal usage sketch with the 🤗 Transformers library is included at the end of this card.

## Citing

Check out our paper for all the details: https://arxiv.org/abs/2107.07253

```
@article{gutierrezfandino2022,
	author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
	title = {MarIA: Spanish Language Models},
	journal = {Procesamiento del Lenguaje Natural},
	volume = {68},
	number = {0},
	year = {2022},
	issn = {1989-7553},
	url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
	pages = {39--60}
}
```

## Funding

This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.

## Disclaimer

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have biases and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.

In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) or the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.

Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.

Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.

En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
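## Usage example

The original card does not include a usage snippet, so the following is only a minimal sketch using the 🤗 Transformers `text-generation` pipeline; the Spanish prompt and the sampling parameters are illustrative choices, not part of the official release.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")
model = AutoModelForCausalLM.from_pretrained("PlanTL-GOB-ES/gpt2-base-bne")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Illustrative prompt and sampling settings; adjust to your use case.
outputs = generator(
    "El Museo del Prado es",
    max_length=50,
    do_sample=True,
    top_k=50,
    num_return_sequences=1,
)

print(outputs[0]["generated_text"])
```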