modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
huggingtweets/zeebeecat01
huggingtweets
2022-02-26T22:24:18Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/zeebeecat01/1645914254405/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1103665627183472642/OVXzwAk7_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Shreya Mukherjee 💀🌻</div> <div style="text-align: center; font-size: 14px;">@zeebeecat01</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Shreya Mukherjee 💀🌻. | Data | Shreya Mukherjee 💀🌻 | | --- | --- | | Tweets downloaded | 731 | | Retweets | 552 | | Short tweets | 33 | | Tweets kept | 146 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kz1pvshu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @zeebeecat01's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3btkttwk) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3btkttwk/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/zeebeecat01') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingartists/tool
huggingartists
2022-02-26T22:15:47Z
4
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/tool", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/tool tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/acf1d51a2d729391074dc51a6dd26857.1000x1000x1.png&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Tool</div> <a href="https://genius.com/artists/tool"> <div style="text-align: center; font-size: 14px;">@tool</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Tool. The dataset is available [here](https://huggingface.co/datasets/huggingartists/tool) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/tool") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2w1h70ok/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Tool's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1zikehwi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1zikehwi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/tool') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/tool") model = AutoModelWithLMHead.from_pretrained("huggingartists/tool") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the artist's lyrics further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
cnicu/t5-small-booksum
cnicu
2022-02-26T21:32:52Z
15,213
8
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "summary", "dataset:kmfoda/booksum", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- license: mit tags: - summarization - summary datasets: - kmfoda/booksum ---
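The card above lists only metadata and no usage snippet. A minimal sketch of how such a checkpoint is typically run, assuming the standard `transformers` summarization pipeline works for this T5 checkpoint; the input string is an arbitrary placeholder:

```python
from transformers import pipeline

# Load the booksum-finetuned T5 checkpoint through the generic summarization pipeline.
summarizer = pipeline("summarization", model="cnicu/t5-small-booksum")

# Placeholder input; replace with the chapter or long passage you want condensed.
text = "Replace this with a long passage, for example a book chapter."
print(summarizer(text, max_length=64, min_length=8, do_sample=False))
```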
nimrah/wav2vec2-large-xls-r-300m-my_hindi_home-colab
nimrah
2022-02-26T17:11:23Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-my_hindi_home-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-my_hindi_home-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
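Since the auto-generated card above gives no inference example, here is a minimal sketch using the generic `transformers` ASR pipeline; the audio path is a hypothetical placeholder, and decoding a local file requires `ffmpeg` to be installed:

```python
from transformers import pipeline

# Generic speech-recognition pipeline around the fine-tuned XLS-R checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="nimrah/wav2vec2-large-xls-r-300m-my_hindi_home-colab",
)

# "sample.wav" is a placeholder; pass any 16 kHz mono recording.
print(asr("sample.wav"))
```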
leonadase/bert-base-chinese-finetuned-ner
leonadase
2022-02-26T15:09:40Z
28
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:fdner", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - fdner metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-chinese-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: fdner type: fdner args: fdner metrics: - name: Precision type: precision value: 0.9146341463414634 - name: Recall type: recall value: 0.9414225941422594 - name: F1 type: f1 value: 0.9278350515463917 - name: Accuracy type: accuracy value: 0.9750636132315522 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-finetuned-ner This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the fdner dataset. It achieves the following results on the evaluation set: - Loss: 0.1016 - Precision: 0.9146 - Recall: 0.9414 - F1: 0.9278 - Accuracy: 0.9751 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 2 | 0.9181 | 0.1271 | 0.1255 | 0.1263 | 0.7170 | | No log | 2.0 | 4 | 0.8048 | 0.1919 | 0.2385 | 0.2127 | 0.7669 | | No log | 3.0 | 6 | 0.7079 | 0.2422 | 0.3264 | 0.2781 | 0.7980 | | No log | 4.0 | 8 | 0.6201 | 0.3505 | 0.4854 | 0.4070 | 0.8338 | | No log | 5.0 | 10 | 0.5462 | 0.3898 | 0.4812 | 0.4307 | 0.8611 | | No log | 6.0 | 12 | 0.4851 | 0.4749 | 0.5941 | 0.5279 | 0.8802 | | No log | 7.0 | 14 | 0.4338 | 0.5213 | 0.6151 | 0.5643 | 0.8936 | | No log | 8.0 | 16 | 0.3843 | 0.5663 | 0.6611 | 0.6100 | 0.9076 | | No log | 9.0 | 18 | 0.3451 | 0.6255 | 0.6987 | 0.6601 | 0.9214 | | No log | 10.0 | 20 | 0.3058 | 0.6719 | 0.7197 | 0.6949 | 0.9293 | | No log | 11.0 | 22 | 0.2783 | 0.6808 | 0.7406 | 0.7094 | 0.9344 | | No log | 12.0 | 24 | 0.2497 | 0.7050 | 0.7699 | 0.7360 | 0.9427 | | No log | 13.0 | 26 | 0.2235 | 0.7519 | 0.8117 | 0.7807 | 0.9506 | | No log | 14.0 | 28 | 0.2031 | 0.7713 | 0.8326 | 0.8008 | 0.9552 | | No log | 15.0 | 30 | 0.1861 | 0.7915 | 0.8577 | 0.8233 | 0.9593 | | No log | 16.0 | 32 | 0.1726 | 0.8031 | 0.8703 | 0.8353 | 0.9613 | | No log | 17.0 | 34 | 0.1619 | 0.8320 | 0.8912 | 0.8606 | 0.9641 | | No log | 18.0 | 36 | 0.1521 | 0.8571 | 0.9038 | 0.8798 | 0.9674 | | No log | 19.0 | 38 | 0.1420 | 0.8710 | 0.9038 | 0.8871 | 0.9695 | | No log | 20.0 | 40 | 0.1352 | 0.8795 | 0.9163 | 0.8975 | 0.9700 | | No log | 21.0 | 42 | 0.1281 | 0.8755 | 0.9121 | 0.8934 | 0.9712 | | No log | 22.0 | 44 | 0.1209 | 0.8916 | 0.9289 | 0.9098 | 0.9728 | | No log | 23.0 | 46 | 0.1155 | 0.8924 | 0.9372 | 0.9143 | 0.9733 | | No log | 24.0 | 48 | 0.1115 | 0.9040 | 0.9456 | 0.9243 | 0.9746 | | No log | 25.0 | 50 | 0.1087 | 0.9116 | 0.9498 | 0.9303 | 0.9746 | | No log | 26.0 | 52 | 0.1068 | 0.9146 | 0.9414 | 0.9278 | 0.9740 | | No log | 27.0 | 54 | 0.1054 | 0.9146 | 0.9414 | 0.9278 | 0.9743 | | No log | 28.0 | 56 | 0.1036 | 0.9146 | 0.9414 | 0.9278 | 0.9743 | | No log | 29.0 | 58 | 0.1022 | 0.9146 | 0.9414 | 0.9278 | 0.9746 | | No log | 30.0 | 60 | 0.1016 | 0.9146 | 0.9414 | 0.9278 | 0.9751 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
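For readers who want to try the NER model described above, a minimal sketch with the `transformers` token-classification pipeline; the input sentence is an arbitrary placeholder, since the card does not document the fdner label set:

```python
from transformers import pipeline

# Token-classification pipeline; aggregation_strategy merges word-piece tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="leonadase/bert-base-chinese-finetuned-ner",
    aggregation_strategy="simple",
)

# Placeholder Chinese sentence; substitute domain text that matches the fdner dataset.
print(ner("请输入一段需要做实体识别的中文文本。"))
```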
sanchit-gandhi/wav2vec2-2-bert-grid-search
sanchit-gandhi
2022-02-26T14:08:06Z
25
0
transformers
[ "transformers", "pytorch", "tensorboard", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:librispeech_asr", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model was trained from scratch on the librispeech_asr dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
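The card above is auto-generated and shows no inference code. A rough sketch for a speech-encoder-decoder checkpoint like this one, assuming the repository ships the usual feature-extractor and tokenizer configs; the silent one-second waveform is only a stand-in for real 16 kHz audio:

```python
from transformers import AutoFeatureExtractor, AutoTokenizer, SpeechEncoderDecoderModel

repo = "sanchit-gandhi/wav2vec2-2-bert-grid-search"
model = SpeechEncoderDecoderModel.from_pretrained(repo)
feature_extractor = AutoFeatureExtractor.from_pretrained(repo)  # assumes a preprocessor config is present
tokenizer = AutoTokenizer.from_pretrained(repo)                 # assumes the decoder tokenizer is present

# Stand-in audio: one second of silence sampled at 16 kHz; replace with a real waveform.
speech = [0.0] * 16_000
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

generated_ids = model.generate(inputs.input_values)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```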
tuanle/GPT2_Poet
tuanle
2022-02-26T11:32:30Z
13
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# GPT-2 Fine-tuning With Vietnamese Six Eight Poems ## Model description This is a Vietnamese GPT-2 Six Eight poem model, fine-tuned on roughly 10 MB of Six Eight poems and based on the Vietnamese Wiki GPT-2 pretrained model (https://huggingface.co/danghuy1999/gpt2-viwiki). ## Purpose This model was made only for fun and experimental study. ## Dataset The dataset contains about 10k lines of Vietnamese Six Eight poems. ## Result - Train loss: 2.7 - Val loss: 4.5 ## How to use You can use this model to generate Six Eight poems given any starting words. ## Example ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') tokenizer = AutoTokenizer.from_pretrained("tuanle/GPT2_Poet") model = AutoModelForCausalLM.from_pretrained("tuanle/GPT2_Poet").to(device) text = "hỏi rằng nàng" input_ids = tokenizer.encode(text, return_tensors='pt').to(device) min_length = 60 max_length = 100 sample_outputs = model.generate(input_ids,pad_token_id=tokenizer.eos_token_id, do_sample=True, max_length=max_length, min_length=min_length, # temperature = .8, # top_k= 100, top_p = 0.8, num_beams= 10, # early_stopping=True, no_repeat_ngram_size= 2, num_return_sequences= 3) for i, sample_output in enumerate(sample_outputs): print(">> Generated text {}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist(), skip_special_tokens=True))) print('\n---') ``` ## Demo - Input: "hỏi rằng nàng" - Output: hỏi rằng nàng đã nói ra\ cớ sao nàng lại hỏi han sự tình\ vân tiên nói lại những lời\ thưa rằng ở chốn am mây một mình\ từ đây mới biết rõ ràng\ ở đây cũng gặp một người ở đây\ hai người gặp lại gặp nhau\ thấy lời nàng mới hỏi tra việc này\ nguyệt nga hỏi việc bấy lâu\ khen rằng đạo sĩ ở đầu cửa thiền
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10
anas-awadalla
2022-02-26T09:47:52Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
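This card, and the near-identical few-shot SpanBERT/RoBERTa SQuAD cards that follow, omit an inference example. A minimal sketch, assuming the checkpoint works with the standard `transformers` question-answering pipeline; the question/context pair is an arbitrary placeholder:

```python
from transformers import pipeline

# Extractive QA pipeline around the few-shot fine-tuned SpanBERT checkpoint.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-10",
)

# Placeholder SQuAD-style example.
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model is a fine-tuned version of SpanBERT trained on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```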
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-8
anas-awadalla
2022-02-26T09:30:48Z
3
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-8 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-6
anas-awadalla
2022-02-26T09:16:54Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-6 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-0
anas-awadalla
2022-02-26T08:25:44Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-1024-finetuned-squad-seed-0 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-10
anas-awadalla
2022-02-26T08:08:44Z
3
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-10 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-8
anas-awadalla
2022-02-26T07:53:21Z
3
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-8 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-6
anas-awadalla
2022-02-26T07:37:57Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-6 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2
anas-awadalla
2022-02-26T07:07:11Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-2 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-0
anas-awadalla
2022-02-26T06:51:47Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-512-finetuned-squad-seed-0 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-10
anas-awadalla
2022-02-26T06:36:19Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-10 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-4
anas-awadalla
2022-02-26T05:53:17Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-4 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-2
anas-awadalla
2022-02-26T05:38:42Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-256-finetuned-squad-seed-2 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-8
anas-awadalla
2022-02-26T04:54:14Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-8 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-6
anas-awadalla
2022-02-26T04:38:59Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-6 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0
anas-awadalla
2022-02-26T04:19:12Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-128-finetuned-squad-seed-0 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13
ali2066
2022-02-26T03:36:40Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4676 - Accuracy: 0.8299 - F1: 0.8892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 | | No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 | | 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 | | 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 | | 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
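The finetuned_sentence_* cards in this block share the same auto-generated template and include no usage code. A minimal sketch with the `transformers` text-classification pipeline, assuming the checkpoint keeps the two-label head of its SST-2 base model; the input sentence is a placeholder:

```python
from transformers import pipeline

# Text-classification pipeline around the fine-tuned DistilBERT checkpoint.
classifier = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13",
)

# Placeholder input; the card does not document what the labels mean.
print(classifier("This sentence is a placeholder for whatever you want to classify."))
```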
ali2066/finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09
ali2066
2022-02-26T03:25:34Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4676 - Accuracy: 0.8299 - F1: 0.8892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 | | No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 | | 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 | | 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 | | 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37
ali2066
2022-02-26T03:20:03Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4676 - Accuracy: 0.8299 - F1: 0.8892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 | | No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 | | 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 | | 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 | | 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
ali2066
2022-02-26T03:14:31Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4676 - Accuracy: 0.8299 - F1: 0.8892 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4087 | 0.8073 | 0.8754 | | No log | 2.0 | 390 | 0.3952 | 0.8159 | 0.8803 | | 0.4084 | 3.0 | 585 | 0.4183 | 0.8195 | 0.8831 | | 0.4084 | 4.0 | 780 | 0.4596 | 0.8280 | 0.8867 | | 0.4084 | 5.0 | 975 | 0.4919 | 0.8280 | 0.8873 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
ali2066
2022-02-26T03:03:20Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4345 - Accuracy: 0.8321 - F1: 0.8904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3922 | 0.8061 | 0.8747 | | No log | 2.0 | 390 | 0.3764 | 0.8171 | 0.8837 | | 0.4074 | 3.0 | 585 | 0.3873 | 0.8220 | 0.8843 | | 0.4074 | 4.0 | 780 | 0.4361 | 0.8232 | 0.8854 | | 0.4074 | 5.0 | 975 | 0.4555 | 0.8159 | 0.8793 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10
anas-awadalla
2022-02-25T23:29:09Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-64-finetuned-squad-seed-10 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4
anas-awadalla
2022-02-25T21:12:44Z
3
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-32-finetuned-squad-seed-4 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-10
anas-awadalla
2022-02-25T20:28:21Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-10 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-8
anas-awadalla
2022-02-25T20:13:14Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-8 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-6
anas-awadalla
2022-02-25T19:58:15Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-6 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/dril-nia_mp4
huggingtweets
2022-02-25T19:44:43Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dril-nia_mp4/1645818279249/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1487740104340918272/7c9spp2E_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Nia & wint</div> <div style="text-align: center; font-size: 14px;">@dril-nia_mp4</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Nia & wint. | Data | Nia | wint | | --- | --- | --- | | Tweets downloaded | 278 | 3229 | | Retweets | 12 | 473 | | Short tweets | 13 | 300 | | Tweets kept | 253 | 2456 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ybk5oh0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-nia_mp4's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ny6aucf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ny6aucf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dril-nia_mp4') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2
anas-awadalla
2022-02-25T19:29:02Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2 This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
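For illustration, here is a minimal sketch of querying this SQuAD-style checkpoint through the standard `transformers` question-answering pipeline; the example context and question are invented for demonstration and are not part of the evaluation data.

```python
from transformers import pipeline

# Load the checkpoint as an extractive question-answering pipeline.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/spanbert-base-cased-few-shot-k-16-finetuned-squad-seed-2",
)

# Hypothetical context/question purely for illustration.
result = qa(
    question="How many training examples were used?",
    context="The model was fine-tuned on the SQuAD dataset using only 16 training examples.",
)
print(result["answer"], result["score"])
```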
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-8
anas-awadalla
2022-02-25T18:42:10Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-1024-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-1024-finetuned-squad-seed-8 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-4
anas-awadalla
2022-02-25T18:03:56Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-1024-finetuned-squad-seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-1024-finetuned-squad-seed-4 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-0
anas-awadalla
2022-02-25T17:25:37Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-1024-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-1024-finetuned-squad-seed-0 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
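The hyperparameters listed above map roughly onto the following `TrainingArguments`; this is a sketch under the assumption that the standard `Trainer` API was used, and the output directory name is only a placeholder.

```python
from transformers import TrainingArguments

# Sketch only: the listed hyperparameters expressed as TrainingArguments.
# Data loading, model instantiation, and Trainer wiring are omitted.
args = TrainingArguments(
    output_dir="roberta-base-few-shot-k-1024-finetuned-squad-seed-0",  # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
)
```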
anas-awadalla/roberta-base-few-shot-k-512-finetuned-squad-seed-8
anas-awadalla
2022-02-25T16:49:04Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-512-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-512-finetuned-squad-seed-8 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-512-finetuned-squad-seed-0
anas-awadalla
2022-02-25T15:39:31Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-512-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-512-finetuned-squad-seed-0 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
Davlan/xlm-roberta-base-masakhaner
Davlan
2022-02-25T15:23:22Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "arxiv:2103.11811", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
---
language:
- am
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# xlm-roberta-base-masakhaner
## Model description
**xlm-roberta-base-masakhaner** is the first **Named Entity Recognition** model for 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned XLM-RoBERTa base model. It achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: dates & times (DATE), locations (LOC), organizations (ORG), and persons (PER).

Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on an aggregation of African-language datasets obtained from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with the Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases or domains.
## Training data
This model was fine-tuned on the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset, which covers 10 African languages (Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Nigerian Pidgin, Swahili, Wolof, and Yorùbá).

The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token is classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE|Beginning of a DATE entity right after another DATE entity
I-DATE|DATE entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organisation right after another organisation
I-ORG|Organisation
B-LOC|Beginning of a location right after another location
I-LOC|Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811), which trained and evaluated the model on the MasakhaNER corpus.
### BibTeX entry and citation info ``` @article{adelani21tacl, title = {Masakha{NER}: Named Entity Recognition for African Languages}, author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei}, journal = {Transactions of the Association for Computational Linguistics (TACL)}, month = {}, url = {https://arxiv.org/abs/2103.11811}, year = {2021} } ```
saptarshidatta96/finetuning-sentiment-model-3000-samples
saptarshidatta96
2022-02-25T15:20:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.879746835443038 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3209 - Accuracy: 0.8733 - F1: 0.8797 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
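For illustration, a minimal sketch of running the classifier through the `transformers` pipeline; the review text below is invented, and the accuracy/F1 figures above refer to the IMDB evaluation split.

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier.
classifier = pipeline(
    "sentiment-analysis",
    model="saptarshidatta96/finetuning-sentiment-model-3000-samples",
)

# Example review text, invented for illustration.
print(classifier("A surprisingly heartfelt film with a terrific cast."))
```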
AWTStress/stress_classifier
AWTStress
2022-02-25T15:08:51Z
21
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_keras_callback model-index: - name: tmp_znj9o4r results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp_znj9o4r This model was trained from scratch on an unknown dataset. No evaluation results are reported. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
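For illustration only, a hedged sketch of loading this checkpoint with the `transformers` pipeline; since the repository ships TensorFlow weights, the TF framework is requested explicitly, and the returned label names depend on the (undocumented) training setup.

```python
from transformers import pipeline

# Sketch: request the TensorFlow backend because the checkpoint provides TF weights.
classifier = pipeline(
    "text-classification",
    model="AWTStress/stress_classifier",
    framework="tf",
)

# Input sentence invented for illustration; label semantics are not documented in the card.
print(classifier("I have three deadlines tomorrow and no time to prepare."))
```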
osanseviero/el_core_news_sm
osanseviero
2022-02-25T14:44:32Z
0
1
spacy
[ "spacy", "token-classification", "el", "license:cc-by-nc-sa-3.0", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - el license: cc-by-nc-sa-3.0 model-index: - name: el_core_news_sm results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.7348837209 - name: NER Recall type: recall value: 0.6638655462 - name: NER F Score type: f_score value: 0.6975717439 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.9134743381 - task: name: POS type: token-classification metrics: - name: POS (UPOS) Accuracy type: accuracy value: 0.94345018 - task: name: MORPH type: token-classification metrics: - name: Morph (UFeats) Accuracy type: accuracy value: 0.8863580338 - task: name: LEMMA type: token-classification metrics: - name: Lemma Accuracy type: accuracy value: 0.5620470345 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.8446911409 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.804792262 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9274292743 --- ### Details: https://spacy.io/models/el#el_core_news_sm Greek pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `el_core_news_sm` | | **Version** | `3.2.0` | | **spaCy** | `>=3.2.0,<3.3.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [UD Greek GDT v2.8](https://github.com/UniversalDependencies/UD_Greek-GDT) (Prokopidis, Prokopis)<br />[Greek NER Corpus (Google Summer of Code 2018)](https://github.com/eellak/gsoc2018-spacy) (Giannis Daras)<br />[spaCy lookups data](https://github.com/explosion/spacy-lookups-data) (Explosion) | | **License** | `CC BY-NC-SA 3.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (396 labels for 4 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=X`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `POS=ADP`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|POS=NUM`, `POS=NOUN`, `POS=ADV`, `POS=PUNCT`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADP`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADP`, `Case=Acc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=NUM`, 
`Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `POS=AUX`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADP`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADP`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADP`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|POS=VERB\|VerbForm=Conv\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, 
`Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=SCONJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `POS=PART`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, 
`Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, 
`Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Abbr=Yes\|POS=ADV`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Nom\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=NUM`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|NumType=Sets\|Number=Plur\|POS=NUM`, `Aspect=Imp\|POS=AUX\|VerbForm=Conv\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|NumType=Sets\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, 
`Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Gender=Fem\|NumType=Sets\|Number=Plur\|POS=NUM`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|NumType=Mult\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Degree=Sup\|POS=ADV`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `POS=SYM`, 
`Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Rel`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|NumType=Mult\|Number=Sing\|POS=NUM`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`senter`** | `I`, `S` | | **`ner`** | `EVENT`, `GPE`, `LOC`, `ORG`, `PERSON`, `PRODUCT` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 100.00 | | `TOKEN_P` | 99.90 | | `TOKEN_R` | 99.95 | | `TOKEN_F` | 99.93 | | `SENTS_P` | 91.95 | | `SENTS_R` | 93.55 | | `SENTS_F` | 92.74 | | `DEP_UAS` | 84.47 | | `DEP_LAS` | 80.48 | | `ENTS_P` | 73.49 | | `ENTS_R` | 66.39 | | `ENTS_F` | 69.76 | | `POS_ACC` | 94.35 | | `MORPH_ACC` | 88.64 | | `MORPH_MICRO_P` | 94.75 | | `MORPH_MICRO_R` | 94.54 | | `MORPH_MICRO_F` | 94.64 | | `TAG_ACC` | 91.35 | | `LEMMA_ACC` | 56.20 |
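For illustration, a minimal sketch of running the pipeline with spaCy; it assumes the `el_core_news_sm` package is already installed (for the official release, e.g. via `python -m spacy download el_core_news_sm`).

```python
import spacy

# Sketch: load the installed Greek pipeline package.
nlp = spacy.load("el_core_news_sm")

# "Athens is the capital of Greece." (example sentence for illustration)
doc = nlp("Η Αθήνα είναι η πρωτεύουσα της Ελλάδας.")

for token in doc:
    print(token.text, token.pos_, token.dep_)   # POS tags and dependency labels
for ent in doc.ents:
    print(ent.text, ent.label_)                  # named entities
```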
anas-awadalla/roberta-base-few-shot-k-256-finetuned-squad-seed-4
anas-awadalla
2022-02-25T14:32:34Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-256-finetuned-squad-seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-256-finetuned-squad-seed-4 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-256-finetuned-squad-seed-2
anas-awadalla
2022-02-25T14:16:03Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-256-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-256-finetuned-squad-seed-2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-256-finetuned-squad-seed-0
anas-awadalla
2022-02-25T13:59:28Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-256-finetuned-squad-seed-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-256-finetuned-squad-seed-0 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-10
anas-awadalla
2022-02-25T13:42:57Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-128-finetuned-squad-seed-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-128-finetuned-squad-seed-10 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-8
anas-awadalla
2022-02-25T13:25:47Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-128-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-128-finetuned-squad-seed-8 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
Davlan/xlm-roberta-base-finetuned-chichewa
Davlan
2022-02-25T13:09:19Z
7
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: apache-2.0 ---
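A hedged usage sketch: judging from its name, this appears to be an XLM-RoBERTa masked language model adapted to Chichewa, so it can presumably be queried with the `fill-mask` pipeline. The prompt below is an invented example, and `<mask>` is XLM-RoBERTa's mask token.

```python
from transformers import pipeline

# Sketch: query the masked-language-model head.
unmasker = pipeline("fill-mask", model="Davlan/xlm-roberta-base-finetuned-chichewa")

# Invented Chichewa prompt ("Hello, how are <mask>?") purely for illustration.
print(unmasker("Moni, muli <mask>?"))
```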
vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k
vocab-transformers
2022-02-25T13:04:28Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k

This CrossEncoder was trained with MarginMSE loss from the [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k](https://hf.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_400k) checkpoint. **The word embedding matrix was frozen during training.**

You can load the model with [sentence-transformers](https://sbert.net):
```python
from sentence_transformers import CrossEncoder
from torch import nn

model_name = "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k"
model = CrossEncoder(model_name, default_activation_function=nn.Identity())
```

Performance on TREC Deep Learning (nDCG@10):
- TREC-DL 19: 72.62
- TREC-DL 20: 73.22
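To actually score query/passage pairs with the loaded CrossEncoder, a minimal sketch is shown below; the pairs are invented, and higher scores indicate higher estimated relevance.

```python
from sentence_transformers import CrossEncoder
from torch import nn

model = CrossEncoder(
    "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_400k",
    default_activation_function=nn.Identity(),
)

# Hypothetical (query, passage) pairs for illustration.
scores = model.predict([
    ("what is the capital of france", "Paris is the capital and largest city of France."),
    ("what is the capital of france", "The Eiffel Tower was completed in 1889."),
])
print(scores)  # one relevance score per pair
```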
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-4
anas-awadalla
2022-02-25T12:51:24Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-128-finetuned-squad-seed-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-128-finetuned-squad-seed-4 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated
vocab-transformers
2022-02-25T12:44:23Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated

This CrossEncoder was trained with MarginMSE loss from the [vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated](https://hf.co/vocab-transformers/msmarco-distilbert-word2vec256k-MLM_785k_emb_updated) checkpoint. **The word embedding matrix was updated during training.**

You can load the model with [sentence-transformers](https://sbert.net):
```python
from sentence_transformers import CrossEncoder
from torch import nn

model_name = "vocab-transformers/cross_encoder-msmarco-distilbert-word2vec256k-MLM_785k_emb_updated"
model = CrossEncoder(model_name, default_activation_function=nn.Identity())
```

Performance on TREC Deep Learning (nDCG@10):
- TREC-DL 19: 71.65
- TREC-DL 20: 73.6
anas-awadalla/roberta-base-few-shot-k-128-finetuned-squad-seed-2
anas-awadalla
2022-02-25T12:34:14Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-128-finetuned-squad-seed-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-128-finetuned-squad-seed-2 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
QuickRead/fine-tune-Pegasus
QuickRead
2022-02-25T12:13:39Z
4
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: fine-tune-Pegasus results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum args: default metrics: - name: Rouge1 type: rouge value: 17.993 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tune-Pegasus This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.3242 - Rouge1: 17.993 - Rouge2: 2.9392 - Rougel: 12.313 - Rougelsum: 13.3091 - Gen Len: 67.0552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.35e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.1 - Datasets 1.17.0 - Tokenizers 0.10.3
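For illustration, a minimal sketch of generating a summary with the `transformers` summarization pipeline; the input article and the generation-length settings below are invented for demonstration.

```python
from transformers import pipeline

# Load the fine-tuned Pegasus summarizer.
summarizer = pipeline("summarization", model="QuickRead/fine-tune-Pegasus")

# Example article text, invented for illustration.
article = (
    "The city council voted on Tuesday to expand the cycling network, "
    "adding 40 km of protected lanes over the next three years."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```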
anas-awadalla/roberta-base-few-shot-k-64-finetuned-squad-seed-10
anas-awadalla
2022-02-25T12:02:17Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-64-finetuned-squad-seed-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-64-finetuned-squad-seed-10 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-64-finetuned-squad-seed-8
anas-awadalla
2022-02-25T11:45:04Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-64-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-64-finetuned-squad-seed-8 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
Ayham/albert_bert_summarization_cnn_dailymail
Ayham
2022-02-25T11:32:57Z
24
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: albert_bert_summarization_cnn_dailymail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert_bert_summarization_cnn_dailymail This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
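As a hedged sketch only: this repository holds an encoder-decoder summarization model (the name suggests an ALBERT encoder with a BERT decoder), and checkpoints of this kind can usually be loaded through the Auto classes. That the repository ships a usable tokenizer and generation config is an assumption here, and the article text is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Ayham/albert_bert_summarization_cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_id)        # assumes a tokenizer is saved with the checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)    # encoder-decoder model loaded via the Auto class

article = (
    "Heavy rain flooded several streets in the city on Monday, forcing schools "
    "to close and delaying morning commutes by several hours."
)
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)

# Beam-search generation; assumes decoder_start_token_id is set in the saved config.
summary_ids = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=128,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```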
anas-awadalla/roberta-base-few-shot-k-64-finetuned-squad-seed-6
anas-awadalla
2022-02-25T11:27:54Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-64-finetuned-squad-seed-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-64-finetuned-squad-seed-6 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
9pinus/macbert-base-chinese-medical-collation
9pinus
2022-02-25T10:26:38Z
24
10
transformers
[ "transformers", "pytorch", "bert", "token-classification", "Token Classification", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
---
license: apache-2.0
language: zh
tags:
- Token Classification
metrics:
- precision
- recall
- f1
- accuracy
---

## Model description

This model is a fine-tuned version of MacBERT for spell checking in medical application scenarios. We fine-tuned the Chinese MacBERT base model on a 300M dataset including 60K+ authorized medical articles. We randomly corrupted 30% of the sentences in these articles by replacing characters with visually or phonologically similar ones. The fine-tuned model achieves 96% accuracy on our test dataset.

## Intended uses & limitations

You can use this model directly with a pipeline for token classification:

```python
>>> from transformers import AutoModelForTokenClassification, AutoTokenizer
>>> from transformers import pipeline
>>> hub_model_id = "9pinus/macbert-base-chinese-medical-collation"
>>> model = AutoModelForTokenClassification.from_pretrained(hub_model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(hub_model_id)
>>> classifier = pipeline('ner', model=model, tokenizer=tokenizer)
>>> result = classifier("如果病情较重,可适当口服甲肖唑片、环酯红霉素片等药物进行抗感染镇痛。")
>>> for item in result:
...     if item['entity'] == 1:
...         print(item)
{'entity': 1, 'score': 0.58127016, 'index': 14, 'word': '肖', 'start': 13, 'end': 14}
```

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-8
anas-awadalla
2022-02-25T10:02:23Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: roberta-base-few-shot-k-32-finetuned-squad-seed-8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-few-shot-k-32-finetuned-squad-seed-8 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
wietsedv/xlm-roberta-base-ft-udpos28-uk
wietsedv
2022-02-25T09:59:34Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "uk", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - uk license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-uk results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 82.2 - type: accuracy name: Dutch Test accuracy value: 84.3 - type: accuracy name: German Test accuracy value: 82.4 - type: accuracy name: Italian Test accuracy value: 83.9 - type: accuracy name: French Test accuracy value: 82.6 - type: accuracy name: Spanish Test accuracy value: 86.2 - type: accuracy name: Russian Test accuracy value: 93.3 - type: accuracy name: Swedish Test accuracy value: 86.3 - type: accuracy name: Norwegian Test accuracy value: 80.2 - type: accuracy name: Danish Test accuracy value: 85.2 - type: accuracy name: Low Saxon Test accuracy value: 30.9 - type: accuracy name: Akkadian Test accuracy value: 17.5 - type: accuracy name: Armenian Test accuracy value: 87.7 - type: accuracy name: Welsh Test accuracy value: 66.8 - type: accuracy name: Old East Slavic Test accuracy value: 77.5 - type: accuracy name: Albanian Test accuracy value: 79.7 - type: accuracy name: Slovenian Test accuracy value: 84.5 - type: accuracy name: Guajajara Test accuracy value: 14.6 - type: accuracy name: Kurmanji Test accuracy value: 77.0 - type: accuracy name: Turkish Test accuracy value: 76.3 - type: accuracy name: Finnish Test accuracy value: 82.5 - type: accuracy name: Indonesian Test accuracy value: 77.0 - type: accuracy name: Ukrainian Test accuracy value: 98.2 - type: accuracy name: Polish Test accuracy value: 91.8 - type: accuracy name: Portuguese Test accuracy value: 84.1 - type: accuracy name: Kazakh Test accuracy value: 81.8 - type: accuracy name: Latin Test accuracy value: 77.9 - type: accuracy name: Old French Test accuracy value: 26.9 - type: accuracy name: Buryat Test accuracy value: 60.7 - type: accuracy name: Kaapor Test accuracy value: 5.4 - type: accuracy name: Korean Test accuracy value: 61.5 - type: accuracy name: Estonian Test accuracy value: 84.4 - type: accuracy name: Croatian Test accuracy value: 93.2 - type: accuracy name: Gothic Test accuracy value: 3.7 - type: accuracy name: Swiss German Test accuracy value: 35.0 - type: accuracy name: Assyrian Test accuracy value: 14.6 - type: accuracy name: North Sami Test accuracy value: 27.0 - type: accuracy name: Naija Test accuracy value: 22.5 - type: accuracy name: Latvian Test accuracy value: 88.9 - type: accuracy name: Chinese Test accuracy value: 51.9 - type: accuracy name: Tagalog Test accuracy value: 71.1 - type: accuracy name: Bambara Test accuracy value: 18.7 - type: accuracy name: Lithuanian Test accuracy value: 88.1 - type: accuracy name: Galician Test accuracy value: 85.8 - type: accuracy name: Vietnamese Test accuracy value: 66.3 - type: accuracy name: Greek Test accuracy value: 85.9 - type: accuracy name: Catalan Test accuracy value: 84.0 - type: accuracy name: Czech Test accuracy value: 92.1 - type: accuracy name: Erzya Test accuracy value: 49.4 - type: accuracy name: Bhojpuri Test accuracy value: 51.8 - type: accuracy name: Thai Test accuracy value: 63.3 - type: accuracy name: Marathi Test accuracy value: 88.3 - type: accuracy name: Basque Test accuracy value: 75.7 - type: accuracy name: Slovak Test accuracy value: 91.8 - type: accuracy name: Kiche Test accuracy value: 22.7 - type: accuracy name: 
Yoruba Test accuracy value: 20.0 - type: accuracy name: Warlpiri Test accuracy value: 32.4 - type: accuracy name: Tamil Test accuracy value: 81.7 - type: accuracy name: Maltese Test accuracy value: 16.6 - type: accuracy name: Ancient Greek Test accuracy value: 63.0 - type: accuracy name: Icelandic Test accuracy value: 81.4 - type: accuracy name: Mbya Guarani Test accuracy value: 23.7 - type: accuracy name: Urdu Test accuracy value: 64.1 - type: accuracy name: Romanian Test accuracy value: 82.6 - type: accuracy name: Persian Test accuracy value: 78.3 - type: accuracy name: Apurina Test accuracy value: 24.8 - type: accuracy name: Japanese Test accuracy value: 38.0 - type: accuracy name: Hungarian Test accuracy value: 82.2 - type: accuracy name: Hindi Test accuracy value: 68.3 - type: accuracy name: Classical Chinese Test accuracy value: 36.6 - type: accuracy name: Komi Permyak Test accuracy value: 46.0 - type: accuracy name: Faroese Test accuracy value: 73.6 - type: accuracy name: Sanskrit Test accuracy value: 13.9 - type: accuracy name: Livvi Test accuracy value: 59.5 - type: accuracy name: Arabic Test accuracy value: 82.1 - type: accuracy name: Wolof Test accuracy value: 18.5 - type: accuracy name: Bulgarian Test accuracy value: 91.1 - type: accuracy name: Akuntsu Test accuracy value: 15.2 - type: accuracy name: Makurap Test accuracy value: 2.1 - type: accuracy name: Kangri Test accuracy value: 51.4 - type: accuracy name: Breton Test accuracy value: 59.3 - type: accuracy name: Telugu Test accuracy value: 84.3 - type: accuracy name: Cantonese Test accuracy value: 53.8 - type: accuracy name: Old Church Slavonic Test accuracy value: 48.0 - type: accuracy name: Karelian Test accuracy value: 68.6 - type: accuracy name: Upper Sorbian Test accuracy value: 71.7 - type: accuracy name: South Levantine Arabic Test accuracy value: 68.9 - type: accuracy name: Komi Zyrian Test accuracy value: 40.4 - type: accuracy name: Irish Test accuracy value: 66.2 - type: accuracy name: Nayini Test accuracy value: 46.2 - type: accuracy name: Munduruku Test accuracy value: 8.0 - type: accuracy name: Manx Test accuracy value: 23.0 - type: accuracy name: Skolt Sami Test accuracy value: 27.7 - type: accuracy name: Afrikaans Test accuracy value: 81.7 - type: accuracy name: Old Turkish Test accuracy value: 39.8 - type: accuracy name: Tupinamba Test accuracy value: 20.2 - type: accuracy name: Belarusian Test accuracy value: 93.7 - type: accuracy name: Serbian Test accuracy value: 93.8 - type: accuracy name: Moksha Test accuracy value: 46.0 - type: accuracy name: Western Armenian Test accuracy value: 79.8 - type: accuracy name: Scottish Gaelic Test accuracy value: 56.3 - type: accuracy name: Khunsari Test accuracy value: 36.5 - type: accuracy name: Hebrew Test accuracy value: 84.4 - type: accuracy name: Uyghur Test accuracy value: 77.2 - type: accuracy name: Chukchi Test accuracy value: 35.0 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Ukrainian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-uk") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-uk") ```
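As a hedged follow-up to the loading snippet above, the tags themselves can be obtained with the standard `token-classification` pipeline; the Ukrainian example sentence is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "wietsedv/xlm-roberta-base-ft-udpos28-uk"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# One POS prediction per sub-word token; merge sub-words into words downstream if needed.
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
for token in tagger("Київ є столицею України."):
    print(token["word"], token["entity"], round(token["score"], 3))
```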
wietsedv/xlm-roberta-base-ft-udpos28-ug
wietsedv
2022-02-25T09:59:33Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "ug", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - ug license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-ug results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 60.9 - type: accuracy name: Dutch Test accuracy value: 57.8 - type: accuracy name: German Test accuracy value: 61.0 - type: accuracy name: Italian Test accuracy value: 59.4 - type: accuracy name: French Test accuracy value: 53.9 - type: accuracy name: Spanish Test accuracy value: 55.5 - type: accuracy name: Russian Test accuracy value: 71.6 - type: accuracy name: Swedish Test accuracy value: 65.9 - type: accuracy name: Norwegian Test accuracy value: 63.0 - type: accuracy name: Danish Test accuracy value: 64.4 - type: accuracy name: Low Saxon Test accuracy value: 44.5 - type: accuracy name: Akkadian Test accuracy value: 37.0 - type: accuracy name: Armenian Test accuracy value: 77.0 - type: accuracy name: Welsh Test accuracy value: 57.1 - type: accuracy name: Old East Slavic Test accuracy value: 58.4 - type: accuracy name: Albanian Test accuracy value: 63.4 - type: accuracy name: Slovenian Test accuracy value: 58.7 - type: accuracy name: Guajajara Test accuracy value: 38.2 - type: accuracy name: Kurmanji Test accuracy value: 71.3 - type: accuracy name: Turkish Test accuracy value: 74.6 - type: accuracy name: Finnish Test accuracy value: 76.0 - type: accuracy name: Indonesian Test accuracy value: 65.5 - type: accuracy name: Ukrainian Test accuracy value: 71.6 - type: accuracy name: Polish Test accuracy value: 67.9 - type: accuracy name: Portuguese Test accuracy value: 62.4 - type: accuracy name: Kazakh Test accuracy value: 82.0 - type: accuracy name: Latin Test accuracy value: 68.3 - type: accuracy name: Old French Test accuracy value: 45.0 - type: accuracy name: Buryat Test accuracy value: 61.5 - type: accuracy name: Kaapor Test accuracy value: 29.2 - type: accuracy name: Korean Test accuracy value: 61.7 - type: accuracy name: Estonian Test accuracy value: 74.8 - type: accuracy name: Croatian Test accuracy value: 64.6 - type: accuracy name: Gothic Test accuracy value: 23.8 - type: accuracy name: Swiss German Test accuracy value: 46.9 - type: accuracy name: Assyrian Test accuracy value: 29.4 - type: accuracy name: North Sami Test accuracy value: 42.7 - type: accuracy name: Naija Test accuracy value: 39.0 - type: accuracy name: Latvian Test accuracy value: 77.2 - type: accuracy name: Chinese Test accuracy value: 57.9 - type: accuracy name: Tagalog Test accuracy value: 61.5 - type: accuracy name: Bambara Test accuracy value: 35.8 - type: accuracy name: Lithuanian Test accuracy value: 79.1 - type: accuracy name: Galician Test accuracy value: 60.3 - type: accuracy name: Vietnamese Test accuracy value: 67.9 - type: accuracy name: Greek Test accuracy value: 61.4 - type: accuracy name: Catalan Test accuracy value: 50.3 - type: accuracy name: Czech Test accuracy value: 67.9 - type: accuracy name: Erzya Test accuracy value: 49.9 - type: accuracy name: Bhojpuri Test accuracy value: 55.0 - type: accuracy name: Thai Test accuracy value: 56.2 - type: accuracy name: Marathi Test accuracy value: 81.6 - type: accuracy name: Basque Test accuracy value: 70.3 - type: accuracy name: Slovak Test accuracy value: 63.9 - type: accuracy name: Kiche Test accuracy value: 35.6 - type: accuracy name: 
Yoruba Test accuracy value: 32.9 - type: accuracy name: Warlpiri Test accuracy value: 55.5 - type: accuracy name: Tamil Test accuracy value: 73.9 - type: accuracy name: Maltese Test accuracy value: 32.3 - type: accuracy name: Ancient Greek Test accuracy value: 51.7 - type: accuracy name: Icelandic Test accuracy value: 65.8 - type: accuracy name: Mbya Guarani Test accuracy value: 34.3 - type: accuracy name: Urdu Test accuracy value: 68.7 - type: accuracy name: Romanian Test accuracy value: 65.1 - type: accuracy name: Persian Test accuracy value: 74.1 - type: accuracy name: Apurina Test accuracy value: 45.9 - type: accuracy name: Japanese Test accuracy value: 47.5 - type: accuracy name: Hungarian Test accuracy value: 62.6 - type: accuracy name: Hindi Test accuracy value: 74.2 - type: accuracy name: Classical Chinese Test accuracy value: 40.9 - type: accuracy name: Komi Permyak Test accuracy value: 49.2 - type: accuracy name: Faroese Test accuracy value: 56.4 - type: accuracy name: Sanskrit Test accuracy value: 43.1 - type: accuracy name: Livvi Test accuracy value: 64.2 - type: accuracy name: Arabic Test accuracy value: 60.9 - type: accuracy name: Wolof Test accuracy value: 35.2 - type: accuracy name: Bulgarian Test accuracy value: 68.3 - type: accuracy name: Akuntsu Test accuracy value: 47.6 - type: accuracy name: Makurap Test accuracy value: 23.3 - type: accuracy name: Kangri Test accuracy value: 51.8 - type: accuracy name: Breton Test accuracy value: 52.0 - type: accuracy name: Telugu Test accuracy value: 82.8 - type: accuracy name: Cantonese Test accuracy value: 57.4 - type: accuracy name: Old Church Slavonic Test accuracy value: 41.9 - type: accuracy name: Karelian Test accuracy value: 64.6 - type: accuracy name: Upper Sorbian Test accuracy value: 59.8 - type: accuracy name: South Levantine Arabic Test accuracy value: 58.0 - type: accuracy name: Komi Zyrian Test accuracy value: 48.8 - type: accuracy name: Irish Test accuracy value: 51.8 - type: accuracy name: Nayini Test accuracy value: 55.1 - type: accuracy name: Munduruku Test accuracy value: 41.2 - type: accuracy name: Manx Test accuracy value: 36.9 - type: accuracy name: Skolt Sami Test accuracy value: 45.6 - type: accuracy name: Afrikaans Test accuracy value: 61.8 - type: accuracy name: Old Turkish Test accuracy value: 40.7 - type: accuracy name: Tupinamba Test accuracy value: 52.6 - type: accuracy name: Belarusian Test accuracy value: 71.2 - type: accuracy name: Serbian Test accuracy value: 63.1 - type: accuracy name: Moksha Test accuracy value: 49.0 - type: accuracy name: Western Armenian Test accuracy value: 71.8 - type: accuracy name: Scottish Gaelic Test accuracy value: 48.0 - type: accuracy name: Khunsari Test accuracy value: 52.7 - type: accuracy name: Hebrew Test accuracy value: 77.1 - type: accuracy name: Uyghur Test accuracy value: 89.9 - type: accuracy name: Chukchi Test accuracy value: 40.3 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Uyghur This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ug") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ug") ```
wietsedv/xlm-roberta-base-ft-udpos28-te
wietsedv
2022-02-25T09:59:30Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "te", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - te license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-te results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 68.9 - type: accuracy name: Dutch Test accuracy value: 68.0 - type: accuracy name: German Test accuracy value: 67.0 - type: accuracy name: Italian Test accuracy value: 63.3 - type: accuracy name: French Test accuracy value: 62.1 - type: accuracy name: Spanish Test accuracy value: 63.1 - type: accuracy name: Russian Test accuracy value: 71.0 - type: accuracy name: Swedish Test accuracy value: 66.4 - type: accuracy name: Norwegian Test accuracy value: 62.1 - type: accuracy name: Danish Test accuracy value: 67.5 - type: accuracy name: Low Saxon Test accuracy value: 48.2 - type: accuracy name: Akkadian Test accuracy value: 37.4 - type: accuracy name: Armenian Test accuracy value: 72.5 - type: accuracy name: Welsh Test accuracy value: 54.5 - type: accuracy name: Old East Slavic Test accuracy value: 57.6 - type: accuracy name: Albanian Test accuracy value: 60.3 - type: accuracy name: Slovenian Test accuracy value: 58.6 - type: accuracy name: Guajajara Test accuracy value: 35.3 - type: accuracy name: Kurmanji Test accuracy value: 67.7 - type: accuracy name: Turkish Test accuracy value: 73.0 - type: accuracy name: Finnish Test accuracy value: 73.8 - type: accuracy name: Indonesian Test accuracy value: 69.0 - type: accuracy name: Ukrainian Test accuracy value: 71.3 - type: accuracy name: Polish Test accuracy value: 68.4 - type: accuracy name: Portuguese Test accuracy value: 66.3 - type: accuracy name: Kazakh Test accuracy value: 77.4 - type: accuracy name: Latin Test accuracy value: 65.1 - type: accuracy name: Old French Test accuracy value: 48.4 - type: accuracy name: Buryat Test accuracy value: 64.0 - type: accuracy name: Kaapor Test accuracy value: 33.8 - type: accuracy name: Korean Test accuracy value: 63.2 - type: accuracy name: Estonian Test accuracy value: 73.8 - type: accuracy name: Croatian Test accuracy value: 65.6 - type: accuracy name: Gothic Test accuracy value: 29.8 - type: accuracy name: Swiss German Test accuracy value: 48.0 - type: accuracy name: Assyrian Test accuracy value: 16.8 - type: accuracy name: North Sami Test accuracy value: 41.0 - type: accuracy name: Naija Test accuracy value: 38.1 - type: accuracy name: Latvian Test accuracy value: 77.6 - type: accuracy name: Chinese Test accuracy value: 62.0 - type: accuracy name: Tagalog Test accuracy value: 66.1 - type: accuracy name: Bambara Test accuracy value: 35.3 - type: accuracy name: Lithuanian Test accuracy value: 77.6 - type: accuracy name: Galician Test accuracy value: 62.9 - type: accuracy name: Vietnamese Test accuracy value: 59.5 - type: accuracy name: Greek Test accuracy value: 66.3 - type: accuracy name: Catalan Test accuracy value: 62.1 - type: accuracy name: Czech Test accuracy value: 69.1 - type: accuracy name: Erzya Test accuracy value: 50.3 - type: accuracy name: Bhojpuri Test accuracy value: 61.0 - type: accuracy name: Thai Test accuracy value: 57.3 - type: accuracy name: Marathi Test accuracy value: 79.8 - type: accuracy name: Basque Test accuracy value: 67.4 - type: accuracy name: Slovak Test accuracy value: 67.4 - type: accuracy name: Kiche Test accuracy value: 37.4 - type: accuracy name: 
Yoruba Test accuracy value: 33.5 - type: accuracy name: Warlpiri Test accuracy value: 49.0 - type: accuracy name: Tamil Test accuracy value: 89.3 - type: accuracy name: Maltese Test accuracy value: 34.9 - type: accuracy name: Ancient Greek Test accuracy value: 48.0 - type: accuracy name: Icelandic Test accuracy value: 63.5 - type: accuracy name: Mbya Guarani Test accuracy value: 35.4 - type: accuracy name: Urdu Test accuracy value: 69.8 - type: accuracy name: Romanian Test accuracy value: 62.8 - type: accuracy name: Persian Test accuracy value: 63.5 - type: accuracy name: Apurina Test accuracy value: 50.2 - type: accuracy name: Japanese Test accuracy value: 49.7 - type: accuracy name: Hungarian Test accuracy value: 74.9 - type: accuracy name: Hindi Test accuracy value: 73.3 - type: accuracy name: Classical Chinese Test accuracy value: 41.9 - type: accuracy name: Komi Permyak Test accuracy value: 50.1 - type: accuracy name: Faroese Test accuracy value: 57.0 - type: accuracy name: Sanskrit Test accuracy value: 46.1 - type: accuracy name: Livvi Test accuracy value: 63.3 - type: accuracy name: Arabic Test accuracy value: 62.7 - type: accuracy name: Wolof Test accuracy value: 40.2 - type: accuracy name: Bulgarian Test accuracy value: 67.3 - type: accuracy name: Akuntsu Test accuracy value: 43.2 - type: accuracy name: Makurap Test accuracy value: 27.4 - type: accuracy name: Kangri Test accuracy value: 51.0 - type: accuracy name: Breton Test accuracy value: 54.9 - type: accuracy name: Telugu Test accuracy value: 94.9 - type: accuracy name: Cantonese Test accuracy value: 60.4 - type: accuracy name: Old Church Slavonic Test accuracy value: 46.3 - type: accuracy name: Karelian Test accuracy value: 65.9 - type: accuracy name: Upper Sorbian Test accuracy value: 59.7 - type: accuracy name: South Levantine Arabic Test accuracy value: 61.5 - type: accuracy name: Komi Zyrian Test accuracy value: 45.2 - type: accuracy name: Irish Test accuracy value: 56.0 - type: accuracy name: Nayini Test accuracy value: 52.6 - type: accuracy name: Munduruku Test accuracy value: 36.2 - type: accuracy name: Manx Test accuracy value: 37.0 - type: accuracy name: Skolt Sami Test accuracy value: 46.7 - type: accuracy name: Afrikaans Test accuracy value: 64.3 - type: accuracy name: Old Turkish Test accuracy value: 39.8 - type: accuracy name: Tupinamba Test accuracy value: 45.1 - type: accuracy name: Belarusian Test accuracy value: 70.0 - type: accuracy name: Serbian Test accuracy value: 66.4 - type: accuracy name: Moksha Test accuracy value: 45.7 - type: accuracy name: Western Armenian Test accuracy value: 66.0 - type: accuracy name: Scottish Gaelic Test accuracy value: 52.6 - type: accuracy name: Khunsari Test accuracy value: 45.9 - type: accuracy name: Hebrew Test accuracy value: 74.0 - type: accuracy name: Uyghur Test accuracy value: 75.9 - type: accuracy name: Chukchi Test accuracy value: 40.8 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Telugu This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-te") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-te") ```
wietsedv/xlm-roberta-base-ft-udpos28-ta
wietsedv
2022-02-25T09:59:28Z
9
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "ta", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - ta license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-ta results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 68.1 - type: accuracy name: Dutch Test accuracy value: 64.0 - type: accuracy name: German Test accuracy value: 65.8 - type: accuracy name: Italian Test accuracy value: 61.2 - type: accuracy name: French Test accuracy value: 56.9 - type: accuracy name: Spanish Test accuracy value: 59.5 - type: accuracy name: Russian Test accuracy value: 74.3 - type: accuracy name: Swedish Test accuracy value: 69.1 - type: accuracy name: Norwegian Test accuracy value: 64.8 - type: accuracy name: Danish Test accuracy value: 70.0 - type: accuracy name: Low Saxon Test accuracy value: 46.9 - type: accuracy name: Akkadian Test accuracy value: 28.4 - type: accuracy name: Armenian Test accuracy value: 76.5 - type: accuracy name: Welsh Test accuracy value: 54.2 - type: accuracy name: Old East Slavic Test accuracy value: 61.8 - type: accuracy name: Albanian Test accuracy value: 61.0 - type: accuracy name: Slovenian Test accuracy value: 59.8 - type: accuracy name: Guajajara Test accuracy value: 22.7 - type: accuracy name: Kurmanji Test accuracy value: 64.1 - type: accuracy name: Turkish Test accuracy value: 72.0 - type: accuracy name: Finnish Test accuracy value: 76.2 - type: accuracy name: Indonesian Test accuracy value: 70.3 - type: accuracy name: Ukrainian Test accuracy value: 75.5 - type: accuracy name: Polish Test accuracy value: 72.0 - type: accuracy name: Portuguese Test accuracy value: 65.9 - type: accuracy name: Kazakh Test accuracy value: 77.2 - type: accuracy name: Latin Test accuracy value: 67.8 - type: accuracy name: Old French Test accuracy value: 45.0 - type: accuracy name: Buryat Test accuracy value: 58.8 - type: accuracy name: Kaapor Test accuracy value: 21.2 - type: accuracy name: Korean Test accuracy value: 58.6 - type: accuracy name: Estonian Test accuracy value: 78.5 - type: accuracy name: Croatian Test accuracy value: 71.3 - type: accuracy name: Gothic Test accuracy value: 18.2 - type: accuracy name: Swiss German Test accuracy value: 44.1 - type: accuracy name: Assyrian Test accuracy value: 17.2 - type: accuracy name: North Sami Test accuracy value: 34.9 - type: accuracy name: Naija Test accuracy value: 37.5 - type: accuracy name: Latvian Test accuracy value: 79.2 - type: accuracy name: Chinese Test accuracy value: 47.9 - type: accuracy name: Tagalog Test accuracy value: 65.6 - type: accuracy name: Bambara Test accuracy value: 22.8 - type: accuracy name: Lithuanian Test accuracy value: 77.8 - type: accuracy name: Galician Test accuracy value: 61.9 - type: accuracy name: Vietnamese Test accuracy value: 56.1 - type: accuracy name: Greek Test accuracy value: 63.5 - type: accuracy name: Catalan Test accuracy value: 57.6 - type: accuracy name: Czech Test accuracy value: 71.7 - type: accuracy name: Erzya Test accuracy value: 43.5 - type: accuracy name: Bhojpuri Test accuracy value: 55.6 - type: accuracy name: Thai Test accuracy value: 56.7 - type: accuracy name: Marathi Test accuracy value: 79.1 - type: accuracy name: Basque Test accuracy value: 74.3 - type: accuracy name: Slovak Test accuracy value: 71.9 - type: accuracy name: Kiche Test accuracy value: 28.3 - type: accuracy name: 
Yoruba Test accuracy value: 22.3 - type: accuracy name: Warlpiri Test accuracy value: 32.4 - type: accuracy name: Tamil Test accuracy value: 85.6 - type: accuracy name: Maltese Test accuracy value: 23.1 - type: accuracy name: Ancient Greek Test accuracy value: 52.9 - type: accuracy name: Icelandic Test accuracy value: 67.9 - type: accuracy name: Mbya Guarani Test accuracy value: 28.5 - type: accuracy name: Urdu Test accuracy value: 69.0 - type: accuracy name: Romanian Test accuracy value: 65.5 - type: accuracy name: Persian Test accuracy value: 60.0 - type: accuracy name: Apurina Test accuracy value: 32.7 - type: accuracy name: Japanese Test accuracy value: 42.3 - type: accuracy name: Hungarian Test accuracy value: 69.8 - type: accuracy name: Hindi Test accuracy value: 73.6 - type: accuracy name: Classical Chinese Test accuracy value: 28.3 - type: accuracy name: Komi Permyak Test accuracy value: 40.2 - type: accuracy name: Faroese Test accuracy value: 59.9 - type: accuracy name: Sanskrit Test accuracy value: 36.9 - type: accuracy name: Livvi Test accuracy value: 61.4 - type: accuracy name: Arabic Test accuracy value: 62.9 - type: accuracy name: Wolof Test accuracy value: 28.3 - type: accuracy name: Bulgarian Test accuracy value: 71.6 - type: accuracy name: Akuntsu Test accuracy value: 19.3 - type: accuracy name: Makurap Test accuracy value: 12.3 - type: accuracy name: Kangri Test accuracy value: 51.6 - type: accuracy name: Breton Test accuracy value: 51.7 - type: accuracy name: Telugu Test accuracy value: 83.2 - type: accuracy name: Cantonese Test accuracy value: 50.3 - type: accuracy name: Old Church Slavonic Test accuracy value: 45.7 - type: accuracy name: Karelian Test accuracy value: 63.7 - type: accuracy name: Upper Sorbian Test accuracy value: 62.3 - type: accuracy name: South Levantine Arabic Test accuracy value: 57.5 - type: accuracy name: Komi Zyrian Test accuracy value: 35.3 - type: accuracy name: Irish Test accuracy value: 58.2 - type: accuracy name: Nayini Test accuracy value: 48.7 - type: accuracy name: Munduruku Test accuracy value: 15.9 - type: accuracy name: Manx Test accuracy value: 26.5 - type: accuracy name: Skolt Sami Test accuracy value: 32.7 - type: accuracy name: Afrikaans Test accuracy value: 66.5 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 27.8 - type: accuracy name: Belarusian Test accuracy value: 76.9 - type: accuracy name: Serbian Test accuracy value: 71.6 - type: accuracy name: Moksha Test accuracy value: 39.2 - type: accuracy name: Western Armenian Test accuracy value: 70.8 - type: accuracy name: Scottish Gaelic Test accuracy value: 50.2 - type: accuracy name: Khunsari Test accuracy value: 39.2 - type: accuracy name: Hebrew Test accuracy value: 81.2 - type: accuracy name: Uyghur Test accuracy value: 67.3 - type: accuracy name: Chukchi Test accuracy value: 33.6 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Tamil This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ta") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ta") ```
wietsedv/xlm-roberta-base-ft-udpos28-sv
wietsedv
2022-02-25T09:59:27Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "sv", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sv license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-sv results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 92.3 - type: accuracy name: Dutch Test accuracy value: 90.0 - type: accuracy name: German Test accuracy value: 91.1 - type: accuracy name: Italian Test accuracy value: 88.0 - type: accuracy name: French Test accuracy value: 88.2 - type: accuracy name: Spanish Test accuracy value: 91.1 - type: accuracy name: Russian Test accuracy value: 91.4 - type: accuracy name: Swedish Test accuracy value: 97.9 - type: accuracy name: Norwegian Test accuracy value: 89.7 - type: accuracy name: Danish Test accuracy value: 92.9 - type: accuracy name: Low Saxon Test accuracy value: 57.4 - type: accuracy name: Akkadian Test accuracy value: 40.4 - type: accuracy name: Armenian Test accuracy value: 87.5 - type: accuracy name: Welsh Test accuracy value: 69.6 - type: accuracy name: Old East Slavic Test accuracy value: 76.2 - type: accuracy name: Albanian Test accuracy value: 80.3 - type: accuracy name: Slovenian Test accuracy value: 81.0 - type: accuracy name: Guajajara Test accuracy value: 35.1 - type: accuracy name: Kurmanji Test accuracy value: 77.3 - type: accuracy name: Turkish Test accuracy value: 79.2 - type: accuracy name: Finnish Test accuracy value: 87.0 - type: accuracy name: Indonesian Test accuracy value: 84.2 - type: accuracy name: Ukrainian Test accuracy value: 90.4 - type: accuracy name: Polish Test accuracy value: 88.9 - type: accuracy name: Portuguese Test accuracy value: 90.1 - type: accuracy name: Kazakh Test accuracy value: 83.4 - type: accuracy name: Latin Test accuracy value: 79.1 - type: accuracy name: Old French Test accuracy value: 62.6 - type: accuracy name: Buryat Test accuracy value: 63.0 - type: accuracy name: Kaapor Test accuracy value: 20.8 - type: accuracy name: Korean Test accuracy value: 64.3 - type: accuracy name: Estonian Test accuracy value: 89.6 - type: accuracy name: Croatian Test accuracy value: 90.8 - type: accuracy name: Gothic Test accuracy value: 26.0 - type: accuracy name: Swiss German Test accuracy value: 51.8 - type: accuracy name: Assyrian Test accuracy value: 17.2 - type: accuracy name: North Sami Test accuracy value: 45.4 - type: accuracy name: Naija Test accuracy value: 48.1 - type: accuracy name: Latvian Test accuracy value: 87.1 - type: accuracy name: Chinese Test accuracy value: 48.5 - type: accuracy name: Tagalog Test accuracy value: 72.3 - type: accuracy name: Bambara Test accuracy value: 31.8 - type: accuracy name: Lithuanian Test accuracy value: 86.2 - type: accuracy name: Galician Test accuracy value: 88.1 - type: accuracy name: Vietnamese Test accuracy value: 66.3 - type: accuracy name: Greek Test accuracy value: 88.1 - type: accuracy name: Catalan Test accuracy value: 90.1 - type: accuracy name: Czech Test accuracy value: 90.1 - type: accuracy name: Erzya Test accuracy value: 50.8 - type: accuracy name: Bhojpuri Test accuracy value: 51.7 - type: accuracy name: Thai Test accuracy value: 66.4 - type: accuracy name: Marathi Test accuracy value: 86.5 - type: accuracy name: Basque Test accuracy value: 76.4 - type: accuracy name: Slovak Test accuracy value: 90.5 - type: accuracy name: Kiche Test accuracy value: 42.4 - type: accuracy name: 
Yoruba Test accuracy value: 31.2 - type: accuracy name: Warlpiri Test accuracy value: 42.5 - type: accuracy name: Tamil Test accuracy value: 85.3 - type: accuracy name: Maltese Test accuracy value: 30.6 - type: accuracy name: Ancient Greek Test accuracy value: 63.0 - type: accuracy name: Icelandic Test accuracy value: 85.3 - type: accuracy name: Mbya Guarani Test accuracy value: 32.3 - type: accuracy name: Urdu Test accuracy value: 67.6 - type: accuracy name: Romanian Test accuracy value: 85.5 - type: accuracy name: Persian Test accuracy value: 77.4 - type: accuracy name: Apurina Test accuracy value: 47.4 - type: accuracy name: Japanese Test accuracy value: 35.5 - type: accuracy name: Hungarian Test accuracy value: 87.1 - type: accuracy name: Hindi Test accuracy value: 75.1 - type: accuracy name: Classical Chinese Test accuracy value: 30.8 - type: accuracy name: Komi Permyak Test accuracy value: 52.4 - type: accuracy name: Faroese Test accuracy value: 80.3 - type: accuracy name: Sanskrit Test accuracy value: 40.7 - type: accuracy name: Livvi Test accuracy value: 68.5 - type: accuracy name: Arabic Test accuracy value: 82.0 - type: accuracy name: Wolof Test accuracy value: 37.4 - type: accuracy name: Bulgarian Test accuracy value: 92.9 - type: accuracy name: Akuntsu Test accuracy value: 41.1 - type: accuracy name: Makurap Test accuracy value: 22.6 - type: accuracy name: Kangri Test accuracy value: 47.1 - type: accuracy name: Breton Test accuracy value: 64.3 - type: accuracy name: Telugu Test accuracy value: 84.9 - type: accuracy name: Cantonese Test accuracy value: 48.8 - type: accuracy name: Old Church Slavonic Test accuracy value: 51.1 - type: accuracy name: Karelian Test accuracy value: 74.1 - type: accuracy name: Upper Sorbian Test accuracy value: 77.5 - type: accuracy name: South Levantine Arabic Test accuracy value: 69.6 - type: accuracy name: Komi Zyrian Test accuracy value: 44.5 - type: accuracy name: Irish Test accuracy value: 70.5 - type: accuracy name: Nayini Test accuracy value: 44.9 - type: accuracy name: Munduruku Test accuracy value: 24.3 - type: accuracy name: Manx Test accuracy value: 34.1 - type: accuracy name: Skolt Sami Test accuracy value: 42.0 - type: accuracy name: Afrikaans Test accuracy value: 92.1 - type: accuracy name: Old Turkish Test accuracy value: 40.3 - type: accuracy name: Tupinamba Test accuracy value: 41.4 - type: accuracy name: Belarusian Test accuracy value: 89.8 - type: accuracy name: Serbian Test accuracy value: 91.5 - type: accuracy name: Moksha Test accuracy value: 46.7 - type: accuracy name: Western Armenian Test accuracy value: 80.3 - type: accuracy name: Scottish Gaelic Test accuracy value: 60.4 - type: accuracy name: Khunsari Test accuracy value: 45.9 - type: accuracy name: Hebrew Test accuracy value: 87.5 - type: accuracy name: Uyghur Test accuracy value: 76.9 - type: accuracy name: Chukchi Test accuracy value: 35.9 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Swedish This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sv") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sv") ```
wietsedv/xlm-roberta-base-ft-udpos28-sr
wietsedv
2022-02-25T09:59:25Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "sr", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sr license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-sr results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 82.9 - type: accuracy name: Dutch Test accuracy value: 84.0 - type: accuracy name: German Test accuracy value: 82.7 - type: accuracy name: Italian Test accuracy value: 82.6 - type: accuracy name: French Test accuracy value: 83.6 - type: accuracy name: Spanish Test accuracy value: 87.3 - type: accuracy name: Russian Test accuracy value: 90.6 - type: accuracy name: Swedish Test accuracy value: 85.5 - type: accuracy name: Norwegian Test accuracy value: 79.0 - type: accuracy name: Danish Test accuracy value: 84.1 - type: accuracy name: Low Saxon Test accuracy value: 47.9 - type: accuracy name: Akkadian Test accuracy value: 30.2 - type: accuracy name: Armenian Test accuracy value: 84.2 - type: accuracy name: Welsh Test accuracy value: 67.4 - type: accuracy name: Old East Slavic Test accuracy value: 75.9 - type: accuracy name: Albanian Test accuracy value: 74.6 - type: accuracy name: Slovenian Test accuracy value: 85.8 - type: accuracy name: Guajajara Test accuracy value: 25.6 - type: accuracy name: Kurmanji Test accuracy value: 75.8 - type: accuracy name: Turkish Test accuracy value: 76.2 - type: accuracy name: Finnish Test accuracy value: 81.7 - type: accuracy name: Indonesian Test accuracy value: 80.5 - type: accuracy name: Ukrainian Test accuracy value: 92.3 - type: accuracy name: Polish Test accuracy value: 91.8 - type: accuracy name: Portuguese Test accuracy value: 84.7 - type: accuracy name: Kazakh Test accuracy value: 79.7 - type: accuracy name: Latin Test accuracy value: 77.0 - type: accuracy name: Old French Test accuracy value: 54.3 - type: accuracy name: Buryat Test accuracy value: 58.6 - type: accuracy name: Kaapor Test accuracy value: 14.6 - type: accuracy name: Korean Test accuracy value: 60.6 - type: accuracy name: Estonian Test accuracy value: 84.4 - type: accuracy name: Croatian Test accuracy value: 97.0 - type: accuracy name: Gothic Test accuracy value: 17.1 - type: accuracy name: Swiss German Test accuracy value: 42.9 - type: accuracy name: Assyrian Test accuracy value: 16.1 - type: accuracy name: North Sami Test accuracy value: 31.2 - type: accuracy name: Naija Test accuracy value: 38.7 - type: accuracy name: Latvian Test accuracy value: 85.1 - type: accuracy name: Chinese Test accuracy value: 41.3 - type: accuracy name: Tagalog Test accuracy value: 77.5 - type: accuracy name: Bambara Test accuracy value: 27.6 - type: accuracy name: Lithuanian Test accuracy value: 85.3 - type: accuracy name: Galician Test accuracy value: 84.9 - type: accuracy name: Vietnamese Test accuracy value: 65.8 - type: accuracy name: Greek Test accuracy value: 83.9 - type: accuracy name: Catalan Test accuracy value: 85.7 - type: accuracy name: Czech Test accuracy value: 94.8 - type: accuracy name: Erzya Test accuracy value: 43.1 - type: accuracy name: Bhojpuri Test accuracy value: 47.9 - type: accuracy name: Thai Test accuracy value: 60.5 - type: accuracy name: Marathi Test accuracy value: 84.0 - type: accuracy name: Basque Test accuracy value: 74.9 - type: accuracy name: Slovak Test accuracy value: 94.6 - type: accuracy name: Kiche Test accuracy value: 31.5 - type: accuracy name: 
Yoruba Test accuracy value: 21.8 - type: accuracy name: Warlpiri Test accuracy value: 37.7 - type: accuracy name: Tamil Test accuracy value: 83.9 - type: accuracy name: Maltese Test accuracy value: 22.7 - type: accuracy name: Ancient Greek Test accuracy value: 59.0 - type: accuracy name: Icelandic Test accuracy value: 79.6 - type: accuracy name: Mbya Guarani Test accuracy value: 29.4 - type: accuracy name: Urdu Test accuracy value: 63.0 - type: accuracy name: Romanian Test accuracy value: 82.1 - type: accuracy name: Persian Test accuracy value: 78.7 - type: accuracy name: Apurina Test accuracy value: 30.1 - type: accuracy name: Japanese Test accuracy value: 28.7 - type: accuracy name: Hungarian Test accuracy value: 78.4 - type: accuracy name: Hindi Test accuracy value: 66.6 - type: accuracy name: Classical Chinese Test accuracy value: 27.3 - type: accuracy name: Komi Permyak Test accuracy value: 40.2 - type: accuracy name: Faroese Test accuracy value: 76.1 - type: accuracy name: Sanskrit Test accuracy value: 32.5 - type: accuracy name: Livvi Test accuracy value: 62.6 - type: accuracy name: Arabic Test accuracy value: 80.9 - type: accuracy name: Wolof Test accuracy value: 30.7 - type: accuracy name: Bulgarian Test accuracy value: 92.2 - type: accuracy name: Akuntsu Test accuracy value: 32.6 - type: accuracy name: Makurap Test accuracy value: 12.3 - type: accuracy name: Kangri Test accuracy value: 44.4 - type: accuracy name: Breton Test accuracy value: 58.0 - type: accuracy name: Telugu Test accuracy value: 77.8 - type: accuracy name: Cantonese Test accuracy value: 44.9 - type: accuracy name: Old Church Slavonic Test accuracy value: 45.4 - type: accuracy name: Karelian Test accuracy value: 69.8 - type: accuracy name: Upper Sorbian Test accuracy value: 77.5 - type: accuracy name: South Levantine Arabic Test accuracy value: 66.8 - type: accuracy name: Komi Zyrian Test accuracy value: 36.1 - type: accuracy name: Irish Test accuracy value: 67.9 - type: accuracy name: Nayini Test accuracy value: 44.9 - type: accuracy name: Munduruku Test accuracy value: 19.2 - type: accuracy name: Manx Test accuracy value: 33.1 - type: accuracy name: Skolt Sami Test accuracy value: 33.0 - type: accuracy name: Afrikaans Test accuracy value: 79.6 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 31.4 - type: accuracy name: Belarusian Test accuracy value: 91.0 - type: accuracy name: Serbian Test accuracy value: 99.1 - type: accuracy name: Moksha Test accuracy value: 40.2 - type: accuracy name: Western Armenian Test accuracy value: 75.8 - type: accuracy name: Scottish Gaelic Test accuracy value: 57.1 - type: accuracy name: Khunsari Test accuracy value: 32.4 - type: accuracy name: Hebrew Test accuracy value: 88.5 - type: accuracy name: Uyghur Test accuracy value: 71.0 - type: accuracy name: Chukchi Test accuracy value: 29.3 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Serbian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sr") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sr") ```
wietsedv/xlm-roberta-base-ft-udpos28-sme
wietsedv
2022-02-25T09:59:24Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "sme", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sme license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-sme results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 48.1 - type: accuracy name: Dutch Test accuracy value: 49.5 - type: accuracy name: German Test accuracy value: 40.4 - type: accuracy name: Italian Test accuracy value: 48.9 - type: accuracy name: French Test accuracy value: 43.9 - type: accuracy name: Spanish Test accuracy value: 47.1 - type: accuracy name: Russian Test accuracy value: 57.3 - type: accuracy name: Swedish Test accuracy value: 47.9 - type: accuracy name: Norwegian Test accuracy value: 45.5 - type: accuracy name: Danish Test accuracy value: 50.7 - type: accuracy name: Low Saxon Test accuracy value: 38.7 - type: accuracy name: Akkadian Test accuracy value: 29.6 - type: accuracy name: Armenian Test accuracy value: 63.0 - type: accuracy name: Welsh Test accuracy value: 36.9 - type: accuracy name: Old East Slavic Test accuracy value: 46.0 - type: accuracy name: Albanian Test accuracy value: 47.8 - type: accuracy name: Slovenian Test accuracy value: 45.5 - type: accuracy name: Guajajara Test accuracy value: 31.8 - type: accuracy name: Kurmanji Test accuracy value: 42.5 - type: accuracy name: Turkish Test accuracy value: 56.3 - type: accuracy name: Finnish Test accuracy value: 64.7 - type: accuracy name: Indonesian Test accuracy value: 59.3 - type: accuracy name: Ukrainian Test accuracy value: 56.6 - type: accuracy name: Polish Test accuracy value: 55.0 - type: accuracy name: Portuguese Test accuracy value: 52.0 - type: accuracy name: Kazakh Test accuracy value: 62.2 - type: accuracy name: Latin Test accuracy value: 50.3 - type: accuracy name: Old French Test accuracy value: 30.8 - type: accuracy name: Buryat Test accuracy value: 50.6 - type: accuracy name: Kaapor Test accuracy value: 18.3 - type: accuracy name: Korean Test accuracy value: 51.7 - type: accuracy name: Estonian Test accuracy value: 65.2 - type: accuracy name: Croatian Test accuracy value: 55.9 - type: accuracy name: Gothic Test accuracy value: 31.1 - type: accuracy name: Swiss German Test accuracy value: 37.1 - type: accuracy name: Assyrian Test accuracy value: 24.1 - type: accuracy name: North Sami Test accuracy value: 87.7 - type: accuracy name: Naija Test accuracy value: 19.8 - type: accuracy name: Latvian Test accuracy value: 64.2 - type: accuracy name: Chinese Test accuracy value: 33.9 - type: accuracy name: Tagalog Test accuracy value: 46.3 - type: accuracy name: Bambara Test accuracy value: 30.2 - type: accuracy name: Lithuanian Test accuracy value: 63.5 - type: accuracy name: Galician Test accuracy value: 48.5 - type: accuracy name: Vietnamese Test accuracy value: 46.0 - type: accuracy name: Greek Test accuracy value: 45.6 - type: accuracy name: Catalan Test accuracy value: 45.8 - type: accuracy name: Czech Test accuracy value: 54.5 - type: accuracy name: Erzya Test accuracy value: 45.8 - type: accuracy name: Bhojpuri Test accuracy value: 34.3 - type: accuracy name: Thai Test accuracy value: 23.9 - type: accuracy name: Marathi Test accuracy value: 67.5 - type: accuracy name: Basque Test accuracy value: 59.6 - type: accuracy name: Slovak Test accuracy value: 57.7 - type: accuracy name: Kiche Test accuracy value: 35.6 - type: accuracy name: 
Yoruba Test accuracy value: 31.0 - type: accuracy name: Warlpiri Test accuracy value: 43.3 - type: accuracy name: Tamil Test accuracy value: 60.4 - type: accuracy name: Maltese Test accuracy value: 34.1 - type: accuracy name: Ancient Greek Test accuracy value: 41.8 - type: accuracy name: Icelandic Test accuracy value: 47.2 - type: accuracy name: Mbya Guarani Test accuracy value: 36.0 - type: accuracy name: Urdu Test accuracy value: 36.8 - type: accuracy name: Romanian Test accuracy value: 50.1 - type: accuracy name: Persian Test accuracy value: 45.8 - type: accuracy name: Apurina Test accuracy value: 48.4 - type: accuracy name: Japanese Test accuracy value: 30.6 - type: accuracy name: Hungarian Test accuracy value: 54.7 - type: accuracy name: Hindi Test accuracy value: 39.5 - type: accuracy name: Classical Chinese Test accuracy value: 18.3 - type: accuracy name: Komi Permyak Test accuracy value: 51.1 - type: accuracy name: Faroese Test accuracy value: 52.2 - type: accuracy name: Sanskrit Test accuracy value: 28.4 - type: accuracy name: Livvi Test accuracy value: 57.7 - type: accuracy name: Arabic Test accuracy value: 40.5 - type: accuracy name: Wolof Test accuracy value: 36.2 - type: accuracy name: Bulgarian Test accuracy value: 54.1 - type: accuracy name: Akuntsu Test accuracy value: 31.6 - type: accuracy name: Makurap Test accuracy value: 17.8 - type: accuracy name: Kangri Test accuracy value: 33.8 - type: accuracy name: Breton Test accuracy value: 47.0 - type: accuracy name: Telugu Test accuracy value: 58.7 - type: accuracy name: Cantonese Test accuracy value: 36.0 - type: accuracy name: Old Church Slavonic Test accuracy value: 35.1 - type: accuracy name: Karelian Test accuracy value: 57.5 - type: accuracy name: Upper Sorbian Test accuracy value: 51.1 - type: accuracy name: South Levantine Arabic Test accuracy value: 44.5 - type: accuracy name: Komi Zyrian Test accuracy value: 42.2 - type: accuracy name: Irish Test accuracy value: 34.8 - type: accuracy name: Nayini Test accuracy value: 41.0 - type: accuracy name: Munduruku Test accuracy value: 21.6 - type: accuracy name: Manx Test accuracy value: 28.0 - type: accuracy name: Skolt Sami Test accuracy value: 49.2 - type: accuracy name: Afrikaans Test accuracy value: 43.2 - type: accuracy name: Old Turkish Test accuracy value: 38.9 - type: accuracy name: Tupinamba Test accuracy value: 44.2 - type: accuracy name: Belarusian Test accuracy value: 58.7 - type: accuracy name: Serbian Test accuracy value: 55.9 - type: accuracy name: Moksha Test accuracy value: 45.0 - type: accuracy name: Western Armenian Test accuracy value: 56.1 - type: accuracy name: Scottish Gaelic Test accuracy value: 31.0 - type: accuracy name: Khunsari Test accuracy value: 27.0 - type: accuracy name: Hebrew Test accuracy value: 61.5 - type: accuracy name: Uyghur Test accuracy value: 61.4 - type: accuracy name: Chukchi Test accuracy value: 41.5 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: North Sami This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sme") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sme") ```
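The Usage snippet in the card above only loads the tokenizer and model. A minimal inference sketch follows, assuming the stock `token-classification` pipeline is wrapped around this checkpoint; the North Sami example sentence and the `aggregation_strategy` setting are illustrative assumptions, not part of the original card.

```python
from transformers import pipeline

# Minimal sketch: wrap the fine-tuned checkpoint in the standard
# token-classification pipeline to get word-level POS tags.
tagger = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-sme",
    aggregation_strategy="simple",  # merge subword pieces into whole words
)

# Illustrative North Sami sentence (an assumption for this example).
for tag in tagger("Mun lean studeanta."):
    print(tag["word"], tag["entity_group"], round(tag["score"], 3))
```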
wietsedv/xlm-roberta-base-ft-udpos28-sl
wietsedv
2022-02-25T09:59:22Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "sl", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sl license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-sl results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 81.7 - type: accuracy name: Dutch Test accuracy value: 83.1 - type: accuracy name: German Test accuracy value: 81.2 - type: accuracy name: Italian Test accuracy value: 81.3 - type: accuracy name: French Test accuracy value: 79.9 - type: accuracy name: Spanish Test accuracy value: 84.9 - type: accuracy name: Russian Test accuracy value: 91.5 - type: accuracy name: Swedish Test accuracy value: 86.0 - type: accuracy name: Norwegian Test accuracy value: 78.4 - type: accuracy name: Danish Test accuracy value: 83.7 - type: accuracy name: Low Saxon Test accuracy value: 41.9 - type: accuracy name: Akkadian Test accuracy value: 17.3 - type: accuracy name: Armenian Test accuracy value: 84.3 - type: accuracy name: Welsh Test accuracy value: 65.5 - type: accuracy name: Old East Slavic Test accuracy value: 74.1 - type: accuracy name: Albanian Test accuracy value: 76.6 - type: accuracy name: Slovenian Test accuracy value: 97.6 - type: accuracy name: Guajajara Test accuracy value: 22.5 - type: accuracy name: Kurmanji Test accuracy value: 75.7 - type: accuracy name: Turkish Test accuracy value: 75.4 - type: accuracy name: Finnish Test accuracy value: 81.2 - type: accuracy name: Indonesian Test accuracy value: 81.8 - type: accuracy name: Ukrainian Test accuracy value: 92.6 - type: accuracy name: Polish Test accuracy value: 93.2 - type: accuracy name: Portuguese Test accuracy value: 84.0 - type: accuracy name: Kazakh Test accuracy value: 79.4 - type: accuracy name: Latin Test accuracy value: 76.7 - type: accuracy name: Old French Test accuracy value: 40.3 - type: accuracy name: Buryat Test accuracy value: 53.1 - type: accuracy name: Kaapor Test accuracy value: 11.2 - type: accuracy name: Korean Test accuracy value: 61.9 - type: accuracy name: Estonian Test accuracy value: 82.2 - type: accuracy name: Croatian Test accuracy value: 93.1 - type: accuracy name: Gothic Test accuracy value: 6.2 - type: accuracy name: Swiss German Test accuracy value: 40.7 - type: accuracy name: Assyrian Test accuracy value: 14.6 - type: accuracy name: North Sami Test accuracy value: 22.5 - type: accuracy name: Naija Test accuracy value: 33.9 - type: accuracy name: Latvian Test accuracy value: 86.0 - type: accuracy name: Chinese Test accuracy value: 39.7 - type: accuracy name: Tagalog Test accuracy value: 72.0 - type: accuracy name: Bambara Test accuracy value: 23.5 - type: accuracy name: Lithuanian Test accuracy value: 87.3 - type: accuracy name: Galician Test accuracy value: 82.5 - type: accuracy name: Vietnamese Test accuracy value: 67.3 - type: accuracy name: Greek Test accuracy value: 79.7 - type: accuracy name: Catalan Test accuracy value: 79.0 - type: accuracy name: Czech Test accuracy value: 94.1 - type: accuracy name: Erzya Test accuracy value: 40.1 - type: accuracy name: Bhojpuri Test accuracy value: 46.5 - type: accuracy name: Thai Test accuracy value: 53.2 - type: accuracy name: Marathi Test accuracy value: 87.7 - type: accuracy name: Basque Test accuracy value: 74.6 - type: accuracy name: Slovak Test accuracy value: 95.5 - type: accuracy name: Kiche Test accuracy value: 24.7 - type: accuracy name: 
Yoruba Test accuracy value: 17.1 - type: accuracy name: Warlpiri Test accuracy value: 27.5 - type: accuracy name: Tamil Test accuracy value: 83.4 - type: accuracy name: Maltese Test accuracy value: 18.4 - type: accuracy name: Ancient Greek Test accuracy value: 60.8 - type: accuracy name: Icelandic Test accuracy value: 80.0 - type: accuracy name: Mbya Guarani Test accuracy value: 23.7 - type: accuracy name: Urdu Test accuracy value: 61.6 - type: accuracy name: Romanian Test accuracy value: 82.4 - type: accuracy name: Persian Test accuracy value: 78.6 - type: accuracy name: Apurina Test accuracy value: 29.2 - type: accuracy name: Japanese Test accuracy value: 25.5 - type: accuracy name: Hungarian Test accuracy value: 74.6 - type: accuracy name: Hindi Test accuracy value: 67.4 - type: accuracy name: Classical Chinese Test accuracy value: 14.8 - type: accuracy name: Komi Permyak Test accuracy value: 40.3 - type: accuracy name: Faroese Test accuracy value: 75.0 - type: accuracy name: Sanskrit Test accuracy value: 14.3 - type: accuracy name: Livvi Test accuracy value: 58.2 - type: accuracy name: Arabic Test accuracy value: 79.8 - type: accuracy name: Wolof Test accuracy value: 24.7 - type: accuracy name: Bulgarian Test accuracy value: 90.4 - type: accuracy name: Akuntsu Test accuracy value: 20.6 - type: accuracy name: Makurap Test accuracy value: 6.2 - type: accuracy name: Kangri Test accuracy value: 44.2 - type: accuracy name: Breton Test accuracy value: 53.2 - type: accuracy name: Telugu Test accuracy value: 83.4 - type: accuracy name: Cantonese Test accuracy value: 48.9 - type: accuracy name: Old Church Slavonic Test accuracy value: 41.9 - type: accuracy name: Karelian Test accuracy value: 64.7 - type: accuracy name: Upper Sorbian Test accuracy value: 79.9 - type: accuracy name: South Levantine Arabic Test accuracy value: 67.2 - type: accuracy name: Komi Zyrian Test accuracy value: 33.3 - type: accuracy name: Irish Test accuracy value: 63.0 - type: accuracy name: Nayini Test accuracy value: 32.1 - type: accuracy name: Munduruku Test accuracy value: 10.1 - type: accuracy name: Manx Test accuracy value: 22.0 - type: accuracy name: Skolt Sami Test accuracy value: 27.4 - type: accuracy name: Afrikaans Test accuracy value: 74.0 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 22.5 - type: accuracy name: Belarusian Test accuracy value: 90.2 - type: accuracy name: Serbian Test accuracy value: 94.4 - type: accuracy name: Moksha Test accuracy value: 37.6 - type: accuracy name: Western Armenian Test accuracy value: 73.8 - type: accuracy name: Scottish Gaelic Test accuracy value: 55.0 - type: accuracy name: Khunsari Test accuracy value: 32.4 - type: accuracy name: Hebrew Test accuracy value: 81.2 - type: accuracy name: Uyghur Test accuracy value: 72.1 - type: accuracy name: Chukchi Test accuracy value: 30.2 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Slovenian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl") ```
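For more control than the pipeline offers, the tokenizer and model loaded in the card's Usage section can be run by hand. This is a sketch under the assumption that the checkpoint's `config.id2label` carries the UPOS tag names; the Slovenian sentence is only an illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl")

text = "Danes je lep dan."  # illustrative Slovenian sentence
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

for token, label_id in zip(tokens, predicted_ids):
    if token in tokenizer.all_special_tokens:
        continue  # skip <s> and </s>
    print(token, model.config.id2label[label_id.item()])
```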
wietsedv/xlm-roberta-base-ft-udpos28-sa
wietsedv
2022-02-25T09:59:19Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "sa", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - sa license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-sa results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 31.4 - type: accuracy name: Dutch Test accuracy value: 28.4 - type: accuracy name: German Test accuracy value: 32.3 - type: accuracy name: Italian Test accuracy value: 28.3 - type: accuracy name: French Test accuracy value: 28.1 - type: accuracy name: Spanish Test accuracy value: 28.5 - type: accuracy name: Russian Test accuracy value: 37.5 - type: accuracy name: Swedish Test accuracy value: 35.7 - type: accuracy name: Norwegian Test accuracy value: 32.0 - type: accuracy name: Danish Test accuracy value: 32.7 - type: accuracy name: Low Saxon Test accuracy value: 28.0 - type: accuracy name: Akkadian Test accuracy value: 26.2 - type: accuracy name: Armenian Test accuracy value: 39.0 - type: accuracy name: Welsh Test accuracy value: 23.9 - type: accuracy name: Old East Slavic Test accuracy value: 36.8 - type: accuracy name: Albanian Test accuracy value: 34.1 - type: accuracy name: Slovenian Test accuracy value: 30.4 - type: accuracy name: Guajajara Test accuracy value: 16.6 - type: accuracy name: Kurmanji Test accuracy value: 34.8 - type: accuracy name: Turkish Test accuracy value: 42.8 - type: accuracy name: Finnish Test accuracy value: 42.5 - type: accuracy name: Indonesian Test accuracy value: 34.5 - type: accuracy name: Ukrainian Test accuracy value: 38.2 - type: accuracy name: Polish Test accuracy value: 36.6 - type: accuracy name: Portuguese Test accuracy value: 30.7 - type: accuracy name: Kazakh Test accuracy value: 44.2 - type: accuracy name: Latin Test accuracy value: 38.1 - type: accuracy name: Old French Test accuracy value: 35.3 - type: accuracy name: Buryat Test accuracy value: 33.0 - type: accuracy name: Kaapor Test accuracy value: 29.2 - type: accuracy name: Korean Test accuracy value: 39.6 - type: accuracy name: Estonian Test accuracy value: 41.1 - type: accuracy name: Croatian Test accuracy value: 34.9 - type: accuracy name: Gothic Test accuracy value: 26.7 - type: accuracy name: Swiss German Test accuracy value: 23.6 - type: accuracy name: Assyrian Test accuracy value: 9.7 - type: accuracy name: North Sami Test accuracy value: 21.7 - type: accuracy name: Naija Test accuracy value: 24.0 - type: accuracy name: Latvian Test accuracy value: 42.3 - type: accuracy name: Chinese Test accuracy value: 29.3 - type: accuracy name: Tagalog Test accuracy value: 34.6 - type: accuracy name: Bambara Test accuracy value: 12.0 - type: accuracy name: Lithuanian Test accuracy value: 43.5 - type: accuracy name: Galician Test accuracy value: 28.7 - type: accuracy name: Vietnamese Test accuracy value: 36.4 - type: accuracy name: Greek Test accuracy value: 32.5 - type: accuracy name: Catalan Test accuracy value: 25.7 - type: accuracy name: Czech Test accuracy value: 36.8 - type: accuracy name: Erzya Test accuracy value: 20.0 - type: accuracy name: Bhojpuri Test accuracy value: 27.3 - type: accuracy name: Thai Test accuracy value: 32.4 - type: accuracy name: Marathi Test accuracy value: 37.4 - type: accuracy name: Basque Test accuracy value: 38.3 - type: accuracy name: Slovak Test accuracy value: 37.2 - type: accuracy name: Kiche Test accuracy value: 17.2 - type: accuracy name: 
Yoruba Test accuracy value: 13.2 - type: accuracy name: Warlpiri Test accuracy value: 21.5 - type: accuracy name: Tamil Test accuracy value: 42.5 - type: accuracy name: Maltese Test accuracy value: 17.5 - type: accuracy name: Ancient Greek Test accuracy value: 37.4 - type: accuracy name: Icelandic Test accuracy value: 32.7 - type: accuracy name: Mbya Guarani Test accuracy value: 13.9 - type: accuracy name: Urdu Test accuracy value: 28.1 - type: accuracy name: Romanian Test accuracy value: 34.8 - type: accuracy name: Persian Test accuracy value: 36.2 - type: accuracy name: Apurina Test accuracy value: 21.9 - type: accuracy name: Japanese Test accuracy value: 26.3 - type: accuracy name: Hungarian Test accuracy value: 34.6 - type: accuracy name: Hindi Test accuracy value: 29.3 - type: accuracy name: Classical Chinese Test accuracy value: 30.0 - type: accuracy name: Komi Permyak Test accuracy value: 26.1 - type: accuracy name: Faroese Test accuracy value: 24.8 - type: accuracy name: Sanskrit Test accuracy value: 84.2 - type: accuracy name: Livvi Test accuracy value: 29.7 - type: accuracy name: Arabic Test accuracy value: 32.6 - type: accuracy name: Wolof Test accuracy value: 16.7 - type: accuracy name: Bulgarian Test accuracy value: 35.4 - type: accuracy name: Akuntsu Test accuracy value: 23.9 - type: accuracy name: Makurap Test accuracy value: 14.4 - type: accuracy name: Kangri Test accuracy value: 27.8 - type: accuracy name: Breton Test accuracy value: 27.6 - type: accuracy name: Telugu Test accuracy value: 50.6 - type: accuracy name: Cantonese Test accuracy value: 31.6 - type: accuracy name: Old Church Slavonic Test accuracy value: 43.2 - type: accuracy name: Karelian Test accuracy value: 34.1 - type: accuracy name: Upper Sorbian Test accuracy value: 28.5 - type: accuracy name: South Levantine Arabic Test accuracy value: 30.8 - type: accuracy name: Komi Zyrian Test accuracy value: 25.5 - type: accuracy name: Irish Test accuracy value: 20.8 - type: accuracy name: Nayini Test accuracy value: 29.5 - type: accuracy name: Munduruku Test accuracy value: 15.6 - type: accuracy name: Manx Test accuracy value: 15.9 - type: accuracy name: Skolt Sami Test accuracy value: 18.9 - type: accuracy name: Afrikaans Test accuracy value: 34.5 - type: accuracy name: Old Turkish Test accuracy value: 6.3 - type: accuracy name: Tupinamba Test accuracy value: 25.2 - type: accuracy name: Belarusian Test accuracy value: 39.3 - type: accuracy name: Serbian Test accuracy value: 33.7 - type: accuracy name: Moksha Test accuracy value: 21.8 - type: accuracy name: Western Armenian Test accuracy value: 38.3 - type: accuracy name: Scottish Gaelic Test accuracy value: 23.3 - type: accuracy name: Khunsari Test accuracy value: 29.7 - type: accuracy name: Hebrew Test accuracy value: 39.6 - type: accuracy name: Uyghur Test accuracy value: 50.1 - type: accuracy name: Chukchi Test accuracy value: 14.8 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Sanskrit This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sa") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sa") ```
wietsedv/xlm-roberta-base-ft-udpos28-ro
wietsedv
2022-02-25T09:59:16Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "ro", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - ro license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-ro results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 88.4 - type: accuracy name: Dutch Test accuracy value: 86.1 - type: accuracy name: German Test accuracy value: 87.3 - type: accuracy name: Italian Test accuracy value: 88.2 - type: accuracy name: French Test accuracy value: 91.3 - type: accuracy name: Spanish Test accuracy value: 91.1 - type: accuracy name: Russian Test accuracy value: 90.4 - type: accuracy name: Swedish Test accuracy value: 90.7 - type: accuracy name: Norwegian Test accuracy value: 85.0 - type: accuracy name: Danish Test accuracy value: 91.0 - type: accuracy name: Low Saxon Test accuracy value: 56.2 - type: accuracy name: Akkadian Test accuracy value: 41.8 - type: accuracy name: Armenian Test accuracy value: 88.4 - type: accuracy name: Welsh Test accuracy value: 71.7 - type: accuracy name: Old East Slavic Test accuracy value: 78.7 - type: accuracy name: Albanian Test accuracy value: 90.2 - type: accuracy name: Slovenian Test accuracy value: 80.3 - type: accuracy name: Guajajara Test accuracy value: 39.3 - type: accuracy name: Kurmanji Test accuracy value: 79.5 - type: accuracy name: Turkish Test accuracy value: 79.5 - type: accuracy name: Finnish Test accuracy value: 86.0 - type: accuracy name: Indonesian Test accuracy value: 84.2 - type: accuracy name: Ukrainian Test accuracy value: 89.7 - type: accuracy name: Polish Test accuracy value: 89.5 - type: accuracy name: Portuguese Test accuracy value: 90.3 - type: accuracy name: Kazakh Test accuracy value: 85.0 - type: accuracy name: Latin Test accuracy value: 81.8 - type: accuracy name: Old French Test accuracy value: 65.7 - type: accuracy name: Buryat Test accuracy value: 64.9 - type: accuracy name: Kaapor Test accuracy value: 27.1 - type: accuracy name: Korean Test accuracy value: 64.3 - type: accuracy name: Estonian Test accuracy value: 87.5 - type: accuracy name: Croatian Test accuracy value: 89.7 - type: accuracy name: Gothic Test accuracy value: 35.1 - type: accuracy name: Swiss German Test accuracy value: 55.5 - type: accuracy name: Assyrian Test accuracy value: 16.8 - type: accuracy name: North Sami Test accuracy value: 45.0 - type: accuracy name: Naija Test accuracy value: 43.8 - type: accuracy name: Latvian Test accuracy value: 89.5 - type: accuracy name: Chinese Test accuracy value: 54.9 - type: accuracy name: Tagalog Test accuracy value: 74.0 - type: accuracy name: Bambara Test accuracy value: 32.9 - type: accuracy name: Lithuanian Test accuracy value: 87.7 - type: accuracy name: Galician Test accuracy value: 89.9 - type: accuracy name: Vietnamese Test accuracy value: 66.2 - type: accuracy name: Greek Test accuracy value: 88.9 - type: accuracy name: Catalan Test accuracy value: 90.0 - type: accuracy name: Czech Test accuracy value: 89.8 - type: accuracy name: Erzya Test accuracy value: 51.5 - type: accuracy name: Bhojpuri Test accuracy value: 55.0 - type: accuracy name: Thai Test accuracy value: 64.9 - type: accuracy name: Marathi Test accuracy value: 87.1 - type: accuracy name: Basque Test accuracy value: 80.7 - type: accuracy name: Slovak Test accuracy value: 89.8 - type: accuracy name: Kiche Test accuracy value: 42.4 - type: accuracy name: 
Yoruba Test accuracy value: 30.3 - type: accuracy name: Warlpiri Test accuracy value: 46.2 - type: accuracy name: Tamil Test accuracy value: 82.5 - type: accuracy name: Maltese Test accuracy value: 38.3 - type: accuracy name: Ancient Greek Test accuracy value: 67.8 - type: accuracy name: Icelandic Test accuracy value: 85.1 - type: accuracy name: Mbya Guarani Test accuracy value: 34.4 - type: accuracy name: Urdu Test accuracy value: 63.4 - type: accuracy name: Romanian Test accuracy value: 96.8 - type: accuracy name: Persian Test accuracy value: 79.0 - type: accuracy name: Apurina Test accuracy value: 43.1 - type: accuracy name: Japanese Test accuracy value: 43.7 - type: accuracy name: Hungarian Test accuracy value: 79.9 - type: accuracy name: Hindi Test accuracy value: 70.6 - type: accuracy name: Classical Chinese Test accuracy value: 40.8 - type: accuracy name: Komi Permyak Test accuracy value: 57.2 - type: accuracy name: Faroese Test accuracy value: 80.9 - type: accuracy name: Sanskrit Test accuracy value: 40.4 - type: accuracy name: Livvi Test accuracy value: 66.9 - type: accuracy name: Arabic Test accuracy value: 83.5 - type: accuracy name: Wolof Test accuracy value: 43.1 - type: accuracy name: Bulgarian Test accuracy value: 91.2 - type: accuracy name: Akuntsu Test accuracy value: 40.6 - type: accuracy name: Makurap Test accuracy value: 20.5 - type: accuracy name: Kangri Test accuracy value: 53.7 - type: accuracy name: Breton Test accuracy value: 68.7 - type: accuracy name: Telugu Test accuracy value: 82.9 - type: accuracy name: Cantonese Test accuracy value: 57.0 - type: accuracy name: Old Church Slavonic Test accuracy value: 59.1 - type: accuracy name: Karelian Test accuracy value: 75.0 - type: accuracy name: Upper Sorbian Test accuracy value: 77.8 - type: accuracy name: South Levantine Arabic Test accuracy value: 71.2 - type: accuracy name: Komi Zyrian Test accuracy value: 47.0 - type: accuracy name: Irish Test accuracy value: 69.4 - type: accuracy name: Nayini Test accuracy value: 56.4 - type: accuracy name: Munduruku Test accuracy value: 29.2 - type: accuracy name: Manx Test accuracy value: 38.8 - type: accuracy name: Skolt Sami Test accuracy value: 43.7 - type: accuracy name: Afrikaans Test accuracy value: 88.2 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 44.5 - type: accuracy name: Belarusian Test accuracy value: 90.4 - type: accuracy name: Serbian Test accuracy value: 89.5 - type: accuracy name: Moksha Test accuracy value: 49.1 - type: accuracy name: Western Armenian Test accuracy value: 82.0 - type: accuracy name: Scottish Gaelic Test accuracy value: 63.1 - type: accuracy name: Khunsari Test accuracy value: 47.3 - type: accuracy name: Hebrew Test accuracy value: 88.5 - type: accuracy name: Uyghur Test accuracy value: 78.0 - type: accuracy name: Chukchi Test accuracy value: 37.5 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Romanian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ro") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-ro") ```
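Because XLM-R splits words into subword pieces, word-level tags have to be recovered from subword predictions. The sketch below assumes a fast tokenizer (so that `word_ids()` is available) and keeps the prediction of each word's first subword; the pre-tokenized Romanian sentence is an illustrative assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "wietsedv/xlm-roberta-base-ft-udpos28-ro"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

words = ["Acesta", "este", "un", "exemplu", "."]  # illustrative, pre-tokenized
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    predicted_ids = model(**inputs).logits.argmax(dim=-1)[0]

previous_word = None
for position, word_index in enumerate(inputs.word_ids()):
    # Skip special tokens (None) and continuation subwords of the same word.
    if word_index is None or word_index == previous_word:
        continue
    print(words[word_index], model.config.id2label[predicted_ids[position].item()])
    previous_word = word_index
```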
wietsedv/xlm-roberta-base-ft-udpos28-pcm
wietsedv
2022-02-25T09:59:11Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "pcm", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - pcm license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-pcm results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 77.2 - type: accuracy name: Dutch Test accuracy value: 75.2 - type: accuracy name: German Test accuracy value: 73.2 - type: accuracy name: Italian Test accuracy value: 68.9 - type: accuracy name: French Test accuracy value: 74.0 - type: accuracy name: Spanish Test accuracy value: 75.1 - type: accuracy name: Russian Test accuracy value: 70.3 - type: accuracy name: Swedish Test accuracy value: 78.9 - type: accuracy name: Norwegian Test accuracy value: 74.3 - type: accuracy name: Danish Test accuracy value: 73.4 - type: accuracy name: Low Saxon Test accuracy value: 37.9 - type: accuracy name: Akkadian Test accuracy value: 28.0 - type: accuracy name: Armenian Test accuracy value: 65.4 - type: accuracy name: Welsh Test accuracy value: 59.7 - type: accuracy name: Old East Slavic Test accuracy value: 61.0 - type: accuracy name: Albanian Test accuracy value: 66.1 - type: accuracy name: Slovenian Test accuracy value: 67.6 - type: accuracy name: Guajajara Test accuracy value: 16.1 - type: accuracy name: Kurmanji Test accuracy value: 54.8 - type: accuracy name: Turkish Test accuracy value: 58.2 - type: accuracy name: Finnish Test accuracy value: 67.4 - type: accuracy name: Indonesian Test accuracy value: 68.5 - type: accuracy name: Ukrainian Test accuracy value: 68.1 - type: accuracy name: Polish Test accuracy value: 68.8 - type: accuracy name: Portuguese Test accuracy value: 72.9 - type: accuracy name: Kazakh Test accuracy value: 60.1 - type: accuracy name: Latin Test accuracy value: 64.3 - type: accuracy name: Old French Test accuracy value: 51.1 - type: accuracy name: Buryat Test accuracy value: 38.9 - type: accuracy name: Kaapor Test accuracy value: 16.7 - type: accuracy name: Korean Test accuracy value: 52.4 - type: accuracy name: Estonian Test accuracy value: 68.3 - type: accuracy name: Croatian Test accuracy value: 73.0 - type: accuracy name: Gothic Test accuracy value: 21.4 - type: accuracy name: Swiss German Test accuracy value: 33.4 - type: accuracy name: Assyrian Test accuracy value: 0.0 - type: accuracy name: North Sami Test accuracy value: 24.3 - type: accuracy name: Naija Test accuracy value: 97.9 - type: accuracy name: Latvian Test accuracy value: 66.3 - type: accuracy name: Chinese Test accuracy value: 34.3 - type: accuracy name: Tagalog Test accuracy value: 49.9 - type: accuracy name: Bambara Test accuracy value: 16.7 - type: accuracy name: Lithuanian Test accuracy value: 65.7 - type: accuracy name: Galician Test accuracy value: 72.4 - type: accuracy name: Vietnamese Test accuracy value: 54.3 - type: accuracy name: Greek Test accuracy value: 73.3 - type: accuracy name: Catalan Test accuracy value: 73.6 - type: accuracy name: Czech Test accuracy value: 69.5 - type: accuracy name: Erzya Test accuracy value: 22.1 - type: accuracy name: Bhojpuri Test accuracy value: 36.6 - type: accuracy name: Thai Test accuracy value: 65.4 - type: accuracy name: Marathi Test accuracy value: 50.3 - type: accuracy name: Basque Test accuracy value: 58.5 - type: accuracy name: Slovak Test accuracy value: 70.4 - type: accuracy name: Kiche Test accuracy value: 8.0 - type: accuracy name: 
Yoruba Test accuracy value: 6.1 - type: accuracy name: Warlpiri Test accuracy value: 15.4 - type: accuracy name: Tamil Test accuracy value: 60.1 - type: accuracy name: Maltese Test accuracy value: 12.2 - type: accuracy name: Ancient Greek Test accuracy value: 45.8 - type: accuracy name: Icelandic Test accuracy value: 72.5 - type: accuracy name: Mbya Guarani Test accuracy value: 11.4 - type: accuracy name: Urdu Test accuracy value: 59.1 - type: accuracy name: Romanian Test accuracy value: 64.8 - type: accuracy name: Persian Test accuracy value: 67.2 - type: accuracy name: Apurina Test accuracy value: 15.5 - type: accuracy name: Japanese Test accuracy value: 26.1 - type: accuracy name: Hungarian Test accuracy value: 68.6 - type: accuracy name: Hindi Test accuracy value: 65.0 - type: accuracy name: Classical Chinese Test accuracy value: 30.4 - type: accuracy name: Komi Permyak Test accuracy value: 21.2 - type: accuracy name: Faroese Test accuracy value: 61.6 - type: accuracy name: Sanskrit Test accuracy value: 25.6 - type: accuracy name: Livvi Test accuracy value: 39.7 - type: accuracy name: Arabic Test accuracy value: 63.5 - type: accuracy name: Wolof Test accuracy value: 15.9 - type: accuracy name: Bulgarian Test accuracy value: 74.6 - type: accuracy name: Akuntsu Test accuracy value: 26.5 - type: accuracy name: Makurap Test accuracy value: 11.6 - type: accuracy name: Kangri Test accuracy value: 27.8 - type: accuracy name: Breton Test accuracy value: 46.6 - type: accuracy name: Telugu Test accuracy value: 59.4 - type: accuracy name: Cantonese Test accuracy value: 30.7 - type: accuracy name: Old Church Slavonic Test accuracy value: 36.7 - type: accuracy name: Karelian Test accuracy value: 45.9 - type: accuracy name: Upper Sorbian Test accuracy value: 49.3 - type: accuracy name: South Levantine Arabic Test accuracy value: 42.5 - type: accuracy name: Komi Zyrian Test accuracy value: 18.4 - type: accuracy name: Irish Test accuracy value: 48.3 - type: accuracy name: Nayini Test accuracy value: 24.4 - type: accuracy name: Munduruku Test accuracy value: 16.1 - type: accuracy name: Manx Test accuracy value: 14.7 - type: accuracy name: Skolt Sami Test accuracy value: 5.4 - type: accuracy name: Afrikaans Test accuracy value: 76.5 - type: accuracy name: Old Turkish Test accuracy value: 0.0 - type: accuracy name: Tupinamba Test accuracy value: 16.3 - type: accuracy name: Belarusian Test accuracy value: 70.7 - type: accuracy name: Serbian Test accuracy value: 74.8 - type: accuracy name: Moksha Test accuracy value: 24.1 - type: accuracy name: Western Armenian Test accuracy value: 59.8 - type: accuracy name: Scottish Gaelic Test accuracy value: 45.4 - type: accuracy name: Khunsari Test accuracy value: 21.6 - type: accuracy name: Hebrew Test accuracy value: 65.6 - type: accuracy name: Uyghur Test accuracy value: 55.0 - type: accuracy name: Chukchi Test accuracy value: 12.6 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Naija This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pcm") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-pcm") ```
wietsedv/xlm-roberta-base-ft-udpos28-orv
wietsedv
2022-02-25T09:59:10Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "orv", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - orv license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-orv results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 79.4 - type: accuracy name: Dutch Test accuracy value: 77.8 - type: accuracy name: German Test accuracy value: 79.3 - type: accuracy name: Italian Test accuracy value: 77.5 - type: accuracy name: French Test accuracy value: 75.2 - type: accuracy name: Spanish Test accuracy value: 77.2 - type: accuracy name: Russian Test accuracy value: 87.9 - type: accuracy name: Swedish Test accuracy value: 83.0 - type: accuracy name: Norwegian Test accuracy value: 78.6 - type: accuracy name: Danish Test accuracy value: 82.9 - type: accuracy name: Low Saxon Test accuracy value: 58.9 - type: accuracy name: Akkadian Test accuracy value: 41.8 - type: accuracy name: Armenian Test accuracy value: 82.7 - type: accuracy name: Welsh Test accuracy value: 64.3 - type: accuracy name: Old East Slavic Test accuracy value: 91.0 - type: accuracy name: Albanian Test accuracy value: 73.4 - type: accuracy name: Slovenian Test accuracy value: 73.8 - type: accuracy name: Guajajara Test accuracy value: 41.7 - type: accuracy name: Kurmanji Test accuracy value: 76.7 - type: accuracy name: Turkish Test accuracy value: 73.5 - type: accuracy name: Finnish Test accuracy value: 83.0 - type: accuracy name: Indonesian Test accuracy value: 78.9 - type: accuracy name: Ukrainian Test accuracy value: 86.7 - type: accuracy name: Polish Test accuracy value: 85.5 - type: accuracy name: Portuguese Test accuracy value: 79.5 - type: accuracy name: Kazakh Test accuracy value: 79.7 - type: accuracy name: Latin Test accuracy value: 80.9 - type: accuracy name: Old French Test accuracy value: 60.5 - type: accuracy name: Buryat Test accuracy value: 59.8 - type: accuracy name: Kaapor Test accuracy value: 27.1 - type: accuracy name: Korean Test accuracy value: 61.0 - type: accuracy name: Estonian Test accuracy value: 83.9 - type: accuracy name: Croatian Test accuracy value: 84.7 - type: accuracy name: Gothic Test accuracy value: 33.1 - type: accuracy name: Swiss German Test accuracy value: 53.5 - type: accuracy name: Assyrian Test accuracy value: 15.7 - type: accuracy name: North Sami Test accuracy value: 39.9 - type: accuracy name: Naija Test accuracy value: 41.9 - type: accuracy name: Latvian Test accuracy value: 85.7 - type: accuracy name: Chinese Test accuracy value: 42.7 - type: accuracy name: Tagalog Test accuracy value: 73.5 - type: accuracy name: Bambara Test accuracy value: 29.5 - type: accuracy name: Lithuanian Test accuracy value: 86.1 - type: accuracy name: Galician Test accuracy value: 77.7 - type: accuracy name: Vietnamese Test accuracy value: 64.8 - type: accuracy name: Greek Test accuracy value: 73.8 - type: accuracy name: Catalan Test accuracy value: 74.2 - type: accuracy name: Czech Test accuracy value: 85.0 - type: accuracy name: Erzya Test accuracy value: 46.1 - type: accuracy name: Bhojpuri Test accuracy value: 56.8 - type: accuracy name: Thai Test accuracy value: 60.6 - type: accuracy name: Marathi Test accuracy value: 84.0 - type: accuracy name: Basque Test accuracy value: 77.2 - type: accuracy name: Slovak Test accuracy value: 84.3 - type: accuracy name: Kiche Test accuracy value: 35.3 - type: accuracy name: 
Yoruba Test accuracy value: 29.9 - type: accuracy name: Warlpiri Test accuracy value: 33.6 - type: accuracy name: Tamil Test accuracy value: 84.3 - type: accuracy name: Maltese Test accuracy value: 32.0 - type: accuracy name: Ancient Greek Test accuracy value: 65.7 - type: accuracy name: Icelandic Test accuracy value: 81.6 - type: accuracy name: Mbya Guarani Test accuracy value: 33.2 - type: accuracy name: Urdu Test accuracy value: 66.2 - type: accuracy name: Romanian Test accuracy value: 80.9 - type: accuracy name: Persian Test accuracy value: 74.6 - type: accuracy name: Apurina Test accuracy value: 44.6 - type: accuracy name: Japanese Test accuracy value: 35.7 - type: accuracy name: Hungarian Test accuracy value: 73.3 - type: accuracy name: Hindi Test accuracy value: 75.3 - type: accuracy name: Classical Chinese Test accuracy value: 41.5 - type: accuracy name: Komi Permyak Test accuracy value: 49.0 - type: accuracy name: Faroese Test accuracy value: 78.3 - type: accuracy name: Sanskrit Test accuracy value: 43.3 - type: accuracy name: Livvi Test accuracy value: 70.2 - type: accuracy name: Arabic Test accuracy value: 79.8 - type: accuracy name: Wolof Test accuracy value: 39.8 - type: accuracy name: Bulgarian Test accuracy value: 85.8 - type: accuracy name: Akuntsu Test accuracy value: 36.5 - type: accuracy name: Makurap Test accuracy value: 14.4 - type: accuracy name: Kangri Test accuracy value: 52.0 - type: accuracy name: Breton Test accuracy value: 58.1 - type: accuracy name: Telugu Test accuracy value: 79.9 - type: accuracy name: Cantonese Test accuracy value: 50.8 - type: accuracy name: Old Church Slavonic Test accuracy value: 78.2 - type: accuracy name: Karelian Test accuracy value: 73.5 - type: accuracy name: Upper Sorbian Test accuracy value: 76.0 - type: accuracy name: South Levantine Arabic Test accuracy value: 70.0 - type: accuracy name: Komi Zyrian Test accuracy value: 43.1 - type: accuracy name: Irish Test accuracy value: 61.1 - type: accuracy name: Nayini Test accuracy value: 53.8 - type: accuracy name: Munduruku Test accuracy value: 26.4 - type: accuracy name: Manx Test accuracy value: 44.6 - type: accuracy name: Skolt Sami Test accuracy value: 45.2 - type: accuracy name: Afrikaans Test accuracy value: 76.9 - type: accuracy name: Old Turkish Test accuracy value: 2.7 - type: accuracy name: Tupinamba Test accuracy value: 39.0 - type: accuracy name: Belarusian Test accuracy value: 89.5 - type: accuracy name: Serbian Test accuracy value: 85.1 - type: accuracy name: Moksha Test accuracy value: 42.8 - type: accuracy name: Western Armenian Test accuracy value: 77.0 - type: accuracy name: Scottish Gaelic Test accuracy value: 51.6 - type: accuracy name: Khunsari Test accuracy value: 54.1 - type: accuracy name: Hebrew Test accuracy value: 85.4 - type: accuracy name: Uyghur Test accuracy value: 74.4 - type: accuracy name: Chukchi Test accuracy value: 34.5 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Old East Slavic This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-orv") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-orv") ```
wietsedv/xlm-roberta-base-ft-udpos28-no
wietsedv
2022-02-25T09:59:08Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "no", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - no license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-no results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 89.7 - type: accuracy name: Dutch Test accuracy value: 89.3 - type: accuracy name: German Test accuracy value: 87.8 - type: accuracy name: Italian Test accuracy value: 85.0 - type: accuracy name: French Test accuracy value: 83.9 - type: accuracy name: Spanish Test accuracy value: 88.4 - type: accuracy name: Russian Test accuracy value: 89.4 - type: accuracy name: Swedish Test accuracy value: 92.1 - type: accuracy name: Norwegian Test accuracy value: 97.1 - type: accuracy name: Danish Test accuracy value: 89.0 - type: accuracy name: Low Saxon Test accuracy value: 56.5 - type: accuracy name: Akkadian Test accuracy value: 32.3 - type: accuracy name: Armenian Test accuracy value: 86.2 - type: accuracy name: Welsh Test accuracy value: 67.9 - type: accuracy name: Old East Slavic Test accuracy value: 73.9 - type: accuracy name: Albanian Test accuracy value: 79.0 - type: accuracy name: Slovenian Test accuracy value: 78.9 - type: accuracy name: Guajajara Test accuracy value: 26.9 - type: accuracy name: Kurmanji Test accuracy value: 75.1 - type: accuracy name: Turkish Test accuracy value: 77.8 - type: accuracy name: Finnish Test accuracy value: 85.2 - type: accuracy name: Indonesian Test accuracy value: 85.9 - type: accuracy name: Ukrainian Test accuracy value: 87.6 - type: accuracy name: Polish Test accuracy value: 87.0 - type: accuracy name: Portuguese Test accuracy value: 88.0 - type: accuracy name: Kazakh Test accuracy value: 82.9 - type: accuracy name: Latin Test accuracy value: 78.9 - type: accuracy name: Old French Test accuracy value: 51.2 - type: accuracy name: Buryat Test accuracy value: 61.0 - type: accuracy name: Kaapor Test accuracy value: 13.8 - type: accuracy name: Korean Test accuracy value: 62.8 - type: accuracy name: Estonian Test accuracy value: 87.9 - type: accuracy name: Croatian Test accuracy value: 88.8 - type: accuracy name: Gothic Test accuracy value: 25.8 - type: accuracy name: Swiss German Test accuracy value: 44.0 - type: accuracy name: Assyrian Test accuracy value: 15.0 - type: accuracy name: North Sami Test accuracy value: 43.0 - type: accuracy name: Naija Test accuracy value: 41.5 - type: accuracy name: Latvian Test accuracy value: 85.2 - type: accuracy name: Chinese Test accuracy value: 46.6 - type: accuracy name: Tagalog Test accuracy value: 73.1 - type: accuracy name: Bambara Test accuracy value: 29.0 - type: accuracy name: Lithuanian Test accuracy value: 84.1 - type: accuracy name: Galician Test accuracy value: 84.9 - type: accuracy name: Vietnamese Test accuracy value: 66.4 - type: accuracy name: Greek Test accuracy value: 83.0 - type: accuracy name: Catalan Test accuracy value: 88.8 - type: accuracy name: Czech Test accuracy value: 87.3 - type: accuracy name: Erzya Test accuracy value: 50.3 - type: accuracy name: Bhojpuri Test accuracy value: 52.0 - type: accuracy name: Thai Test accuracy value: 65.6 - type: accuracy name: Marathi Test accuracy value: 89.0 - type: accuracy name: Basque Test accuracy value: 74.5 - type: accuracy name: Slovak Test accuracy value: 88.8 - type: accuracy name: Kiche Test accuracy value: 35.4 - type: accuracy name: 
Yoruba Test accuracy value: 28.2 - type: accuracy name: Warlpiri Test accuracy value: 39.3 - type: accuracy name: Tamil Test accuracy value: 83.5 - type: accuracy name: Maltese Test accuracy value: 30.4 - type: accuracy name: Ancient Greek Test accuracy value: 63.7 - type: accuracy name: Icelandic Test accuracy value: 84.3 - type: accuracy name: Mbya Guarani Test accuracy value: 32.9 - type: accuracy name: Urdu Test accuracy value: 69.4 - type: accuracy name: Romanian Test accuracy value: 83.8 - type: accuracy name: Persian Test accuracy value: 78.6 - type: accuracy name: Apurina Test accuracy value: 45.4 - type: accuracy name: Japanese Test accuracy value: 33.2 - type: accuracy name: Hungarian Test accuracy value: 84.5 - type: accuracy name: Hindi Test accuracy value: 74.9 - type: accuracy name: Classical Chinese Test accuracy value: 31.3 - type: accuracy name: Komi Permyak Test accuracy value: 50.9 - type: accuracy name: Faroese Test accuracy value: 80.8 - type: accuracy name: Sanskrit Test accuracy value: 35.6 - type: accuracy name: Livvi Test accuracy value: 67.6 - type: accuracy name: Arabic Test accuracy value: 80.4 - type: accuracy name: Wolof Test accuracy value: 35.5 - type: accuracy name: Bulgarian Test accuracy value: 90.7 - type: accuracy name: Akuntsu Test accuracy value: 32.9 - type: accuracy name: Makurap Test accuracy value: 17.8 - type: accuracy name: Kangri Test accuracy value: 48.1 - type: accuracy name: Breton Test accuracy value: 61.9 - type: accuracy name: Telugu Test accuracy value: 85.3 - type: accuracy name: Cantonese Test accuracy value: 50.1 - type: accuracy name: Old Church Slavonic Test accuracy value: 47.8 - type: accuracy name: Karelian Test accuracy value: 71.8 - type: accuracy name: Upper Sorbian Test accuracy value: 78.4 - type: accuracy name: South Levantine Arabic Test accuracy value: 67.3 - type: accuracy name: Komi Zyrian Test accuracy value: 44.4 - type: accuracy name: Irish Test accuracy value: 69.9 - type: accuracy name: Nayini Test accuracy value: 41.0 - type: accuracy name: Munduruku Test accuracy value: 21.6 - type: accuracy name: Manx Test accuracy value: 35.0 - type: accuracy name: Skolt Sami Test accuracy value: 38.9 - type: accuracy name: Afrikaans Test accuracy value: 86.7 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 40.4 - type: accuracy name: Belarusian Test accuracy value: 88.2 - type: accuracy name: Serbian Test accuracy value: 89.9 - type: accuracy name: Moksha Test accuracy value: 47.4 - type: accuracy name: Western Armenian Test accuracy value: 78.4 - type: accuracy name: Scottish Gaelic Test accuracy value: 58.3 - type: accuracy name: Khunsari Test accuracy value: 43.2 - type: accuracy name: Hebrew Test accuracy value: 89.6 - type: accuracy name: Uyghur Test accuracy value: 76.5 - type: accuracy name: Chukchi Test accuracy value: 37.9 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Norwegian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-no") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-no") ```
wietsedv/xlm-roberta-base-ft-udpos28-mr
wietsedv
2022-02-25T09:59:04Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "mr", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - mr license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-mr results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 67.4 - type: accuracy name: Dutch Test accuracy value: 61.5 - type: accuracy name: German Test accuracy value: 66.9 - type: accuracy name: Italian Test accuracy value: 64.8 - type: accuracy name: French Test accuracy value: 61.7 - type: accuracy name: Spanish Test accuracy value: 60.1 - type: accuracy name: Russian Test accuracy value: 68.1 - type: accuracy name: Swedish Test accuracy value: 68.4 - type: accuracy name: Norwegian Test accuracy value: 64.1 - type: accuracy name: Danish Test accuracy value: 66.4 - type: accuracy name: Low Saxon Test accuracy value: 51.7 - type: accuracy name: Akkadian Test accuracy value: 23.7 - type: accuracy name: Armenian Test accuracy value: 74.4 - type: accuracy name: Welsh Test accuracy value: 50.1 - type: accuracy name: Old East Slavic Test accuracy value: 57.8 - type: accuracy name: Albanian Test accuracy value: 61.9 - type: accuracy name: Slovenian Test accuracy value: 60.1 - type: accuracy name: Guajajara Test accuracy value: 20.5 - type: accuracy name: Kurmanji Test accuracy value: 60.0 - type: accuracy name: Turkish Test accuracy value: 71.8 - type: accuracy name: Finnish Test accuracy value: 74.5 - type: accuracy name: Indonesian Test accuracy value: 59.0 - type: accuracy name: Ukrainian Test accuracy value: 67.1 - type: accuracy name: Polish Test accuracy value: 65.0 - type: accuracy name: Portuguese Test accuracy value: 66.7 - type: accuracy name: Kazakh Test accuracy value: 73.8 - type: accuracy name: Latin Test accuracy value: 66.2 - type: accuracy name: Old French Test accuracy value: 48.6 - type: accuracy name: Buryat Test accuracy value: 57.0 - type: accuracy name: Kaapor Test accuracy value: 19.2 - type: accuracy name: Korean Test accuracy value: 59.7 - type: accuracy name: Estonian Test accuracy value: 75.4 - type: accuracy name: Croatian Test accuracy value: 63.8 - type: accuracy name: Gothic Test accuracy value: 20.0 - type: accuracy name: Swiss German Test accuracy value: 46.8 - type: accuracy name: Assyrian Test accuracy value: 16.1 - type: accuracy name: North Sami Test accuracy value: 37.1 - type: accuracy name: Naija Test accuracy value: 37.9 - type: accuracy name: Latvian Test accuracy value: 75.6 - type: accuracy name: Chinese Test accuracy value: 49.7 - type: accuracy name: Tagalog Test accuracy value: 55.1 - type: accuracy name: Bambara Test accuracy value: 28.9 - type: accuracy name: Lithuanian Test accuracy value: 75.9 - type: accuracy name: Galician Test accuracy value: 65.5 - type: accuracy name: Vietnamese Test accuracy value: 61.0 - type: accuracy name: Greek Test accuracy value: 70.4 - type: accuracy name: Catalan Test accuracy value: 57.9 - type: accuracy name: Czech Test accuracy value: 64.9 - type: accuracy name: Erzya Test accuracy value: 47.7 - type: accuracy name: Bhojpuri Test accuracy value: 41.9 - type: accuracy name: Thai Test accuracy value: 44.1 - type: accuracy name: Marathi Test accuracy value: 89.0 - type: accuracy name: Basque Test accuracy value: 71.8 - type: accuracy name: Slovak Test accuracy value: 61.3 - type: accuracy name: Kiche Test accuracy value: 25.7 - type: accuracy name: 
Yoruba Test accuracy value: 22.8 - type: accuracy name: Warlpiri Test accuracy value: 42.9 - type: accuracy name: Tamil Test accuracy value: 73.5 - type: accuracy name: Maltese Test accuracy value: 26.7 - type: accuracy name: Ancient Greek Test accuracy value: 63.5 - type: accuracy name: Icelandic Test accuracy value: 64.0 - type: accuracy name: Mbya Guarani Test accuracy value: 29.7 - type: accuracy name: Urdu Test accuracy value: 50.3 - type: accuracy name: Romanian Test accuracy value: 63.3 - type: accuracy name: Persian Test accuracy value: 61.0 - type: accuracy name: Apurina Test accuracy value: 38.4 - type: accuracy name: Japanese Test accuracy value: 40.5 - type: accuracy name: Hungarian Test accuracy value: 69.4 - type: accuracy name: Hindi Test accuracy value: 52.7 - type: accuracy name: Classical Chinese Test accuracy value: 32.4 - type: accuracy name: Komi Permyak Test accuracy value: 50.1 - type: accuracy name: Faroese Test accuracy value: 58.0 - type: accuracy name: Sanskrit Test accuracy value: 34.1 - type: accuracy name: Livvi Test accuracy value: 65.3 - type: accuracy name: Arabic Test accuracy value: 55.9 - type: accuracy name: Wolof Test accuracy value: 27.8 - type: accuracy name: Bulgarian Test accuracy value: 63.2 - type: accuracy name: Akuntsu Test accuracy value: 23.1 - type: accuracy name: Makurap Test accuracy value: 17.1 - type: accuracy name: Kangri Test accuracy value: 48.8 - type: accuracy name: Breton Test accuracy value: 50.8 - type: accuracy name: Telugu Test accuracy value: 82.0 - type: accuracy name: Cantonese Test accuracy value: 52.5 - type: accuracy name: Old Church Slavonic Test accuracy value: 42.8 - type: accuracy name: Karelian Test accuracy value: 61.8 - type: accuracy name: Upper Sorbian Test accuracy value: 54.1 - type: accuracy name: South Levantine Arabic Test accuracy value: 55.8 - type: accuracy name: Komi Zyrian Test accuracy value: 47.0 - type: accuracy name: Irish Test accuracy value: 50.1 - type: accuracy name: Nayini Test accuracy value: 48.7 - type: accuracy name: Munduruku Test accuracy value: 18.6 - type: accuracy name: Manx Test accuracy value: 31.1 - type: accuracy name: Skolt Sami Test accuracy value: 40.8 - type: accuracy name: Afrikaans Test accuracy value: 66.4 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 29.9 - type: accuracy name: Belarusian Test accuracy value: 65.4 - type: accuracy name: Serbian Test accuracy value: 62.6 - type: accuracy name: Moksha Test accuracy value: 46.8 - type: accuracy name: Western Armenian Test accuracy value: 70.6 - type: accuracy name: Scottish Gaelic Test accuracy value: 47.4 - type: accuracy name: Khunsari Test accuracy value: 45.9 - type: accuracy name: Hebrew Test accuracy value: 77.1 - type: accuracy name: Uyghur Test accuracy value: 73.2 - type: accuracy name: Chukchi Test accuracy value: 33.5 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Marathi This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr") ```
wietsedv/xlm-roberta-base-ft-udpos28-la
wietsedv
2022-02-25T09:58:58Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "la", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - la license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-la results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 81.5 - type: accuracy name: Dutch Test accuracy value: 79.6 - type: accuracy name: German Test accuracy value: 78.2 - type: accuracy name: Italian Test accuracy value: 78.0 - type: accuracy name: French Test accuracy value: 78.1 - type: accuracy name: Spanish Test accuracy value: 79.8 - type: accuracy name: Russian Test accuracy value: 89.8 - type: accuracy name: Swedish Test accuracy value: 86.0 - type: accuracy name: Norwegian Test accuracy value: 81.5 - type: accuracy name: Danish Test accuracy value: 85.7 - type: accuracy name: Low Saxon Test accuracy value: 56.6 - type: accuracy name: Akkadian Test accuracy value: 44.7 - type: accuracy name: Armenian Test accuracy value: 86.4 - type: accuracy name: Welsh Test accuracy value: 65.1 - type: accuracy name: Old East Slavic Test accuracy value: 79.8 - type: accuracy name: Albanian Test accuracy value: 74.9 - type: accuracy name: Slovenian Test accuracy value: 77.4 - type: accuracy name: Guajajara Test accuracy value: 35.8 - type: accuracy name: Kurmanji Test accuracy value: 77.7 - type: accuracy name: Turkish Test accuracy value: 76.9 - type: accuracy name: Finnish Test accuracy value: 84.9 - type: accuracy name: Indonesian Test accuracy value: 82.0 - type: accuracy name: Ukrainian Test accuracy value: 87.8 - type: accuracy name: Polish Test accuracy value: 88.0 - type: accuracy name: Portuguese Test accuracy value: 82.3 - type: accuracy name: Kazakh Test accuracy value: 83.2 - type: accuracy name: Latin Test accuracy value: 92.9 - type: accuracy name: Old French Test accuracy value: 61.2 - type: accuracy name: Buryat Test accuracy value: 64.7 - type: accuracy name: Kaapor Test accuracy value: 34.2 - type: accuracy name: Korean Test accuracy value: 63.0 - type: accuracy name: Estonian Test accuracy value: 85.5 - type: accuracy name: Croatian Test accuracy value: 86.3 - type: accuracy name: Gothic Test accuracy value: 36.5 - type: accuracy name: Swiss German Test accuracy value: 47.8 - type: accuracy name: Assyrian Test accuracy value: 15.5 - type: accuracy name: North Sami Test accuracy value: 41.4 - type: accuracy name: Naija Test accuracy value: 41.9 - type: accuracy name: Latvian Test accuracy value: 89.1 - type: accuracy name: Chinese Test accuracy value: 44.3 - type: accuracy name: Tagalog Test accuracy value: 73.7 - type: accuracy name: Bambara Test accuracy value: 27.9 - type: accuracy name: Lithuanian Test accuracy value: 88.3 - type: accuracy name: Galician Test accuracy value: 81.7 - type: accuracy name: Vietnamese Test accuracy value: 68.0 - type: accuracy name: Greek Test accuracy value: 74.9 - type: accuracy name: Catalan Test accuracy value: 76.2 - type: accuracy name: Czech Test accuracy value: 86.3 - type: accuracy name: Erzya Test accuracy value: 50.8 - type: accuracy name: Bhojpuri Test accuracy value: 52.5 - type: accuracy name: Thai Test accuracy value: 61.6 - type: accuracy name: Marathi Test accuracy value: 88.3 - type: accuracy name: Basque Test accuracy value: 79.0 - type: accuracy name: Slovak Test accuracy value: 85.9 - type: accuracy name: Kiche Test accuracy value: 39.3 - type: accuracy name: 
Yoruba Test accuracy value: 29.9 - type: accuracy name: Warlpiri Test accuracy value: 40.9 - type: accuracy name: Tamil Test accuracy value: 85.7 - type: accuracy name: Maltese Test accuracy value: 32.8 - type: accuracy name: Ancient Greek Test accuracy value: 70.5 - type: accuracy name: Icelandic Test accuracy value: 81.6 - type: accuracy name: Mbya Guarani Test accuracy value: 33.1 - type: accuracy name: Urdu Test accuracy value: 61.3 - type: accuracy name: Romanian Test accuracy value: 83.1 - type: accuracy name: Persian Test accuracy value: 75.7 - type: accuracy name: Apurina Test accuracy value: 43.5 - type: accuracy name: Japanese Test accuracy value: 36.5 - type: accuracy name: Hungarian Test accuracy value: 74.5 - type: accuracy name: Hindi Test accuracy value: 67.0 - type: accuracy name: Classical Chinese Test accuracy value: 38.2 - type: accuracy name: Komi Permyak Test accuracy value: 52.2 - type: accuracy name: Faroese Test accuracy value: 75.6 - type: accuracy name: Sanskrit Test accuracy value: 43.5 - type: accuracy name: Livvi Test accuracy value: 66.1 - type: accuracy name: Arabic Test accuracy value: 81.3 - type: accuracy name: Wolof Test accuracy value: 39.1 - type: accuracy name: Bulgarian Test accuracy value: 87.7 - type: accuracy name: Akuntsu Test accuracy value: 35.5 - type: accuracy name: Makurap Test accuracy value: 28.8 - type: accuracy name: Kangri Test accuracy value: 49.8 - type: accuracy name: Breton Test accuracy value: 59.8 - type: accuracy name: Telugu Test accuracy value: 84.3 - type: accuracy name: Cantonese Test accuracy value: 50.3 - type: accuracy name: Old Church Slavonic Test accuracy value: 55.7 - type: accuracy name: Karelian Test accuracy value: 73.0 - type: accuracy name: Upper Sorbian Test accuracy value: 76.0 - type: accuracy name: South Levantine Arabic Test accuracy value: 68.8 - type: accuracy name: Komi Zyrian Test accuracy value: 46.3 - type: accuracy name: Irish Test accuracy value: 64.1 - type: accuracy name: Nayini Test accuracy value: 44.9 - type: accuracy name: Munduruku Test accuracy value: 24.1 - type: accuracy name: Manx Test accuracy value: 39.3 - type: accuracy name: Skolt Sami Test accuracy value: 43.5 - type: accuracy name: Afrikaans Test accuracy value: 74.8 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 45.2 - type: accuracy name: Belarusian Test accuracy value: 89.1 - type: accuracy name: Serbian Test accuracy value: 87.2 - type: accuracy name: Moksha Test accuracy value: 47.3 - type: accuracy name: Western Armenian Test accuracy value: 81.6 - type: accuracy name: Scottish Gaelic Test accuracy value: 55.3 - type: accuracy name: Khunsari Test accuracy value: 43.2 - type: accuracy name: Hebrew Test accuracy value: 89.6 - type: accuracy name: Uyghur Test accuracy value: 76.8 - type: accuracy name: Chukchi Test accuracy value: 36.3 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Latin This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-la") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-la") ```
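The usage snippet in the Latin card above stops after loading the tokenizer and model. As a minimal sketch (the Latin sentence is just an arbitrary example, not something from the card), the same checkpoint can be run end to end with the `token-classification` pipeline:

```python
from transformers import pipeline

# One tag is returned per subword piece; word-level tags can be recovered
# from the start/end character offsets in each result dict.
tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-la")
for token in tagger("Gallia est omnis divisa in partes tres."):
    print(token["word"], token["entity"], round(token["score"], 3))
```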
wietsedv/xlm-roberta-base-ft-udpos28-hy
wietsedv
2022-02-25T09:58:47Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "hy", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - hy license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-hy results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 84.7 - type: accuracy name: Dutch Test accuracy value: 85.3 - type: accuracy name: German Test accuracy value: 84.1 - type: accuracy name: Italian Test accuracy value: 82.9 - type: accuracy name: French Test accuracy value: 82.6 - type: accuracy name: Spanish Test accuracy value: 83.2 - type: accuracy name: Russian Test accuracy value: 92.1 - type: accuracy name: Swedish Test accuracy value: 87.5 - type: accuracy name: Norwegian Test accuracy value: 82.5 - type: accuracy name: Danish Test accuracy value: 86.6 - type: accuracy name: Low Saxon Test accuracy value: 40.1 - type: accuracy name: Akkadian Test accuracy value: 7.0 - type: accuracy name: Armenian Test accuracy value: 97.0 - type: accuracy name: Welsh Test accuracy value: 65.3 - type: accuracy name: Old East Slavic Test accuracy value: 73.6 - type: accuracy name: Albanian Test accuracy value: 75.8 - type: accuracy name: Slovenian Test accuracy value: 80.8 - type: accuracy name: Guajajara Test accuracy value: 14.8 - type: accuracy name: Kurmanji Test accuracy value: 77.9 - type: accuracy name: Turkish Test accuracy value: 79.3 - type: accuracy name: Finnish Test accuracy value: 86.3 - type: accuracy name: Indonesian Test accuracy value: 80.5 - type: accuracy name: Ukrainian Test accuracy value: 91.0 - type: accuracy name: Polish Test accuracy value: 86.3 - type: accuracy name: Portuguese Test accuracy value: 84.6 - type: accuracy name: Kazakh Test accuracy value: 86.3 - type: accuracy name: Latin Test accuracy value: 79.8 - type: accuracy name: Old French Test accuracy value: 47.9 - type: accuracy name: Buryat Test accuracy value: 59.5 - type: accuracy name: Kaapor Test accuracy value: 4.6 - type: accuracy name: Korean Test accuracy value: 64.1 - type: accuracy name: Estonian Test accuracy value: 86.1 - type: accuracy name: Croatian Test accuracy value: 88.6 - type: accuracy name: Gothic Test accuracy value: 6.5 - type: accuracy name: Swiss German Test accuracy value: 43.7 - type: accuracy name: Assyrian Test accuracy value: 14.6 - type: accuracy name: North Sami Test accuracy value: 23.7 - type: accuracy name: Naija Test accuracy value: 36.1 - type: accuracy name: Latvian Test accuracy value: 90.0 - type: accuracy name: Chinese Test accuracy value: 43.5 - type: accuracy name: Tagalog Test accuracy value: 71.8 - type: accuracy name: Bambara Test accuracy value: 17.2 - type: accuracy name: Lithuanian Test accuracy value: 89.0 - type: accuracy name: Galician Test accuracy value: 83.6 - type: accuracy name: Vietnamese Test accuracy value: 66.4 - type: accuracy name: Greek Test accuracy value: 86.9 - type: accuracy name: Catalan Test accuracy value: 82.3 - type: accuracy name: Czech Test accuracy value: 88.7 - type: accuracy name: Erzya Test accuracy value: 40.9 - type: accuracy name: Bhojpuri Test accuracy value: 53.6 - type: accuracy name: Thai Test accuracy value: 67.5 - type: accuracy name: Marathi Test accuracy value: 83.4 - type: accuracy name: Basque Test accuracy value: 79.0 - type: accuracy name: Slovak Test accuracy value: 89.5 - type: accuracy name: Kiche Test accuracy value: 19.8 - type: accuracy name: Yoruba 
Test accuracy value: 15.4 - type: accuracy name: Warlpiri Test accuracy value: 25.5 - type: accuracy name: Tamil Test accuracy value: 86.9 - type: accuracy name: Maltese Test accuracy value: 14.7 - type: accuracy name: Ancient Greek Test accuracy value: 67.4 - type: accuracy name: Icelandic Test accuracy value: 82.2 - type: accuracy name: Mbya Guarani Test accuracy value: 22.8 - type: accuracy name: Urdu Test accuracy value: 70.6 - type: accuracy name: Romanian Test accuracy value: 82.4 - type: accuracy name: Persian Test accuracy value: 79.2 - type: accuracy name: Apurina Test accuracy value: 25.2 - type: accuracy name: Japanese Test accuracy value: 30.3 - type: accuracy name: Hungarian Test accuracy value: 85.7 - type: accuracy name: Hindi Test accuracy value: 75.7 - type: accuracy name: Classical Chinese Test accuracy value: 26.3 - type: accuracy name: Komi Permyak Test accuracy value: 38.3 - type: accuracy name: Faroese Test accuracy value: 76.5 - type: accuracy name: Sanskrit Test accuracy value: 23.7 - type: accuracy name: Livvi Test accuracy value: 58.1 - type: accuracy name: Arabic Test accuracy value: 78.6 - type: accuracy name: Wolof Test accuracy value: 16.3 - type: accuracy name: Bulgarian Test accuracy value: 90.3 - type: accuracy name: Akuntsu Test accuracy value: 11.6 - type: accuracy name: Makurap Test accuracy value: 1.4 - type: accuracy name: Kangri Test accuracy value: 51.3 - type: accuracy name: Breton Test accuracy value: 65.5 - type: accuracy name: Telugu Test accuracy value: 85.6 - type: accuracy name: Cantonese Test accuracy value: 48.2 - type: accuracy name: Old Church Slavonic Test accuracy value: 44.4 - type: accuracy name: Karelian Test accuracy value: 67.7 - type: accuracy name: Upper Sorbian Test accuracy value: 69.5 - type: accuracy name: South Levantine Arabic Test accuracy value: 69.6 - type: accuracy name: Komi Zyrian Test accuracy value: 33.0 - type: accuracy name: Irish Test accuracy value: 62.4 - type: accuracy name: Nayini Test accuracy value: 48.7 - type: accuracy name: Munduruku Test accuracy value: 7.6 - type: accuracy name: Manx Test accuracy value: 19.6 - type: accuracy name: Skolt Sami Test accuracy value: 26.8 - type: accuracy name: Afrikaans Test accuracy value: 83.9 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 20.9 - type: accuracy name: Belarusian Test accuracy value: 91.9 - type: accuracy name: Serbian Test accuracy value: 89.7 - type: accuracy name: Moksha Test accuracy value: 40.7 - type: accuracy name: Western Armenian Test accuracy value: 84.5 - type: accuracy name: Scottish Gaelic Test accuracy value: 56.9 - type: accuracy name: Khunsari Test accuracy value: 43.2 - type: accuracy name: Hebrew Test accuracy value: 91.7 - type: accuracy name: Uyghur Test accuracy value: 78.1 - type: accuracy name: Chukchi Test accuracy value: 33.2 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Armenian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hy") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hy") ```
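The Armenian card likewise only loads the checkpoint. A hand-rolled forward pass is sketched below; the example sentence ("Yerevan is the capital of Armenia") is an arbitrary choice made for illustration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hy")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hy")

inputs = tokenizer("Երևանը Հայաստանի մայրաքաղաքն է։", return_tensors="pt")
with torch.no_grad():
    predictions = model(**inputs).logits.argmax(dim=-1)[0]

# Print each subword piece (including the <s>/</s> specials) with its predicted label.
for token, pred in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist()), predictions):
    print(token, model.config.id2label[pred.item()])
```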
wietsedv/xlm-roberta-base-ft-udpos28-hr
wietsedv
2022-02-25T09:58:44Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "hr", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - hr license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-hr results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 83.7 - type: accuracy name: Dutch Test accuracy value: 83.7 - type: accuracy name: German Test accuracy value: 83.2 - type: accuracy name: Italian Test accuracy value: 83.2 - type: accuracy name: French Test accuracy value: 84.2 - type: accuracy name: Spanish Test accuracy value: 87.8 - type: accuracy name: Russian Test accuracy value: 91.4 - type: accuracy name: Swedish Test accuracy value: 85.4 - type: accuracy name: Norwegian Test accuracy value: 79.0 - type: accuracy name: Danish Test accuracy value: 83.8 - type: accuracy name: Low Saxon Test accuracy value: 43.5 - type: accuracy name: Akkadian Test accuracy value: 32.5 - type: accuracy name: Armenian Test accuracy value: 84.7 - type: accuracy name: Welsh Test accuracy value: 67.9 - type: accuracy name: Old East Slavic Test accuracy value: 76.8 - type: accuracy name: Albanian Test accuracy value: 75.2 - type: accuracy name: Slovenian Test accuracy value: 87.0 - type: accuracy name: Guajajara Test accuracy value: 28.3 - type: accuracy name: Kurmanji Test accuracy value: 78.5 - type: accuracy name: Turkish Test accuracy value: 75.9 - type: accuracy name: Finnish Test accuracy value: 83.2 - type: accuracy name: Indonesian Test accuracy value: 81.3 - type: accuracy name: Ukrainian Test accuracy value: 93.2 - type: accuracy name: Polish Test accuracy value: 92.3 - type: accuracy name: Portuguese Test accuracy value: 84.6 - type: accuracy name: Kazakh Test accuracy value: 79.4 - type: accuracy name: Latin Test accuracy value: 77.4 - type: accuracy name: Old French Test accuracy value: 54.3 - type: accuracy name: Buryat Test accuracy value: 61.1 - type: accuracy name: Kaapor Test accuracy value: 20.0 - type: accuracy name: Korean Test accuracy value: 60.7 - type: accuracy name: Estonian Test accuracy value: 85.7 - type: accuracy name: Croatian Test accuracy value: 98.3 - type: accuracy name: Gothic Test accuracy value: 16.5 - type: accuracy name: Swiss German Test accuracy value: 44.8 - type: accuracy name: Assyrian Test accuracy value: 15.9 - type: accuracy name: North Sami Test accuracy value: 35.3 - type: accuracy name: Naija Test accuracy value: 39.6 - type: accuracy name: Latvian Test accuracy value: 86.5 - type: accuracy name: Chinese Test accuracy value: 41.2 - type: accuracy name: Tagalog Test accuracy value: 70.9 - type: accuracy name: Bambara Test accuracy value: 28.2 - type: accuracy name: Lithuanian Test accuracy value: 86.1 - type: accuracy name: Galician Test accuracy value: 86.0 - type: accuracy name: Vietnamese Test accuracy value: 66.5 - type: accuracy name: Greek Test accuracy value: 85.8 - type: accuracy name: Catalan Test accuracy value: 85.5 - type: accuracy name: Czech Test accuracy value: 94.8 - type: accuracy name: Erzya Test accuracy value: 47.2 - type: accuracy name: Bhojpuri Test accuracy value: 49.2 - type: accuracy name: Thai Test accuracy value: 63.4 - type: accuracy name: Marathi Test accuracy value: 87.1 - type: accuracy name: Basque Test accuracy value: 75.0 - type: accuracy name: Slovak Test accuracy value: 95.0 - type: accuracy name: Kiche Test accuracy value: 35.8 - type: accuracy name: 
Yoruba Test accuracy value: 28.5 - type: accuracy name: Warlpiri Test accuracy value: 41.3 - type: accuracy name: Tamil Test accuracy value: 84.8 - type: accuracy name: Maltese Test accuracy value: 23.7 - type: accuracy name: Ancient Greek Test accuracy value: 62.1 - type: accuracy name: Icelandic Test accuracy value: 79.9 - type: accuracy name: Mbya Guarani Test accuracy value: 31.9 - type: accuracy name: Urdu Test accuracy value: 65.0 - type: accuracy name: Romanian Test accuracy value: 82.5 - type: accuracy name: Persian Test accuracy value: 79.4 - type: accuracy name: Apurina Test accuracy value: 38.4 - type: accuracy name: Japanese Test accuracy value: 30.1 - type: accuracy name: Hungarian Test accuracy value: 83.8 - type: accuracy name: Hindi Test accuracy value: 67.8 - type: accuracy name: Classical Chinese Test accuracy value: 27.0 - type: accuracy name: Komi Permyak Test accuracy value: 44.9 - type: accuracy name: Faroese Test accuracy value: 77.3 - type: accuracy name: Sanskrit Test accuracy value: 35.6 - type: accuracy name: Livvi Test accuracy value: 65.5 - type: accuracy name: Arabic Test accuracy value: 82.3 - type: accuracy name: Wolof Test accuracy value: 32.2 - type: accuracy name: Bulgarian Test accuracy value: 92.6 - type: accuracy name: Akuntsu Test accuracy value: 37.0 - type: accuracy name: Makurap Test accuracy value: 17.8 - type: accuracy name: Kangri Test accuracy value: 47.9 - type: accuracy name: Breton Test accuracy value: 62.2 - type: accuracy name: Telugu Test accuracy value: 82.4 - type: accuracy name: Cantonese Test accuracy value: 45.6 - type: accuracy name: Old Church Slavonic Test accuracy value: 48.9 - type: accuracy name: Karelian Test accuracy value: 71.7 - type: accuracy name: Upper Sorbian Test accuracy value: 79.4 - type: accuracy name: South Levantine Arabic Test accuracy value: 68.9 - type: accuracy name: Komi Zyrian Test accuracy value: 39.6 - type: accuracy name: Irish Test accuracy value: 65.4 - type: accuracy name: Nayini Test accuracy value: 42.3 - type: accuracy name: Munduruku Test accuracy value: 28.8 - type: accuracy name: Manx Test accuracy value: 35.7 - type: accuracy name: Skolt Sami Test accuracy value: 33.7 - type: accuracy name: Afrikaans Test accuracy value: 79.8 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 33.1 - type: accuracy name: Belarusian Test accuracy value: 91.6 - type: accuracy name: Serbian Test accuracy value: 97.5 - type: accuracy name: Moksha Test accuracy value: 45.7 - type: accuracy name: Western Armenian Test accuracy value: 77.7 - type: accuracy name: Scottish Gaelic Test accuracy value: 57.7 - type: accuracy name: Khunsari Test accuracy value: 36.5 - type: accuracy name: Hebrew Test accuracy value: 85.4 - type: accuracy name: Uyghur Test accuracy value: 72.2 - type: accuracy name: Chukchi Test accuracy value: 35.4 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Croatian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hr") ```
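For the Croatian checkpoint, an illustrative pipeline call (the sentence, "Zagreb is the capital of Croatia", is an arbitrary example) collects piece/tag pairs in one expression:

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-hr")
pairs = [(t["word"], t["entity"]) for t in tagger("Zagreb je glavni grad Hrvatske.")]
print(pairs)
```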
wietsedv/xlm-roberta-base-ft-udpos28-gl
wietsedv
2022-02-25T09:58:36Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "gl", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - gl license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-gl results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 86.5 - type: accuracy name: Dutch Test accuracy value: 87.6 - type: accuracy name: German Test accuracy value: 83.3 - type: accuracy name: Italian Test accuracy value: 88.6 - type: accuracy name: French Test accuracy value: 88.3 - type: accuracy name: Spanish Test accuracy value: 86.6 - type: accuracy name: Russian Test accuracy value: 89.2 - type: accuracy name: Swedish Test accuracy value: 87.7 - type: accuracy name: Norwegian Test accuracy value: 83.2 - type: accuracy name: Danish Test accuracy value: 87.8 - type: accuracy name: Low Saxon Test accuracy value: 53.1 - type: accuracy name: Akkadian Test accuracy value: 30.7 - type: accuracy name: Armenian Test accuracy value: 84.7 - type: accuracy name: Welsh Test accuracy value: 67.1 - type: accuracy name: Old East Slavic Test accuracy value: 73.7 - type: accuracy name: Albanian Test accuracy value: 79.7 - type: accuracy name: Slovenian Test accuracy value: 78.4 - type: accuracy name: Guajajara Test accuracy value: 25.8 - type: accuracy name: Kurmanji Test accuracy value: 79.4 - type: accuracy name: Turkish Test accuracy value: 76.8 - type: accuracy name: Finnish Test accuracy value: 84.4 - type: accuracy name: Indonesian Test accuracy value: 83.9 - type: accuracy name: Ukrainian Test accuracy value: 86.6 - type: accuracy name: Polish Test accuracy value: 86.8 - type: accuracy name: Portuguese Test accuracy value: 90.9 - type: accuracy name: Kazakh Test accuracy value: 81.1 - type: accuracy name: Latin Test accuracy value: 80.0 - type: accuracy name: Old French Test accuracy value: 64.0 - type: accuracy name: Buryat Test accuracy value: 58.0 - type: accuracy name: Kaapor Test accuracy value: 18.8 - type: accuracy name: Korean Test accuracy value: 62.5 - type: accuracy name: Estonian Test accuracy value: 85.3 - type: accuracy name: Croatian Test accuracy value: 88.3 - type: accuracy name: Gothic Test accuracy value: 22.4 - type: accuracy name: Swiss German Test accuracy value: 47.9 - type: accuracy name: Assyrian Test accuracy value: 14.6 - type: accuracy name: North Sami Test accuracy value: 32.1 - type: accuracy name: Naija Test accuracy value: 41.1 - type: accuracy name: Latvian Test accuracy value: 86.5 - type: accuracy name: Chinese Test accuracy value: 32.8 - type: accuracy name: Tagalog Test accuracy value: 71.9 - type: accuracy name: Bambara Test accuracy value: 28.8 - type: accuracy name: Lithuanian Test accuracy value: 85.4 - type: accuracy name: Galician Test accuracy value: 93.8 - type: accuracy name: Vietnamese Test accuracy value: 63.8 - type: accuracy name: Greek Test accuracy value: 87.6 - type: accuracy name: Catalan Test accuracy value: 87.4 - type: accuracy name: Czech Test accuracy value: 87.6 - type: accuracy name: Erzya Test accuracy value: 42.6 - type: accuracy name: Bhojpuri Test accuracy value: 52.0 - type: accuracy name: Thai Test accuracy value: 49.3 - type: accuracy name: Marathi Test accuracy value: 80.4 - type: accuracy name: Basque Test accuracy value: 75.8 - type: accuracy name: Slovak Test accuracy value: 87.6 - type: accuracy name: Kiche Test accuracy value: 31.8 - type: accuracy name: 
Yoruba Test accuracy value: 21.5 - type: accuracy name: Warlpiri Test accuracy value: 34.4 - type: accuracy name: Tamil Test accuracy value: 81.6 - type: accuracy name: Maltese Test accuracy value: 25.2 - type: accuracy name: Ancient Greek Test accuracy value: 59.4 - type: accuracy name: Icelandic Test accuracy value: 82.0 - type: accuracy name: Mbya Guarani Test accuracy value: 29.2 - type: accuracy name: Urdu Test accuracy value: 64.6 - type: accuracy name: Romanian Test accuracy value: 84.5 - type: accuracy name: Persian Test accuracy value: 78.9 - type: accuracy name: Apurina Test accuracy value: 32.8 - type: accuracy name: Japanese Test accuracy value: 20.0 - type: accuracy name: Hungarian Test accuracy value: 83.0 - type: accuracy name: Hindi Test accuracy value: 71.8 - type: accuracy name: Classical Chinese Test accuracy value: 14.3 - type: accuracy name: Komi Permyak Test accuracy value: 42.7 - type: accuracy name: Faroese Test accuracy value: 76.8 - type: accuracy name: Sanskrit Test accuracy value: 21.0 - type: accuracy name: Livvi Test accuracy value: 62.4 - type: accuracy name: Arabic Test accuracy value: 82.1 - type: accuracy name: Wolof Test accuracy value: 33.2 - type: accuracy name: Bulgarian Test accuracy value: 89.5 - type: accuracy name: Akuntsu Test accuracy value: 24.4 - type: accuracy name: Makurap Test accuracy value: 16.4 - type: accuracy name: Kangri Test accuracy value: 43.6 - type: accuracy name: Breton Test accuracy value: 66.2 - type: accuracy name: Telugu Test accuracy value: 79.6 - type: accuracy name: Cantonese Test accuracy value: 37.0 - type: accuracy name: Old Church Slavonic Test accuracy value: 49.5 - type: accuracy name: Karelian Test accuracy value: 69.5 - type: accuracy name: Upper Sorbian Test accuracy value: 73.2 - type: accuracy name: South Levantine Arabic Test accuracy value: 65.1 - type: accuracy name: Komi Zyrian Test accuracy value: 36.2 - type: accuracy name: Irish Test accuracy value: 69.2 - type: accuracy name: Nayini Test accuracy value: 43.6 - type: accuracy name: Munduruku Test accuracy value: 19.7 - type: accuracy name: Manx Test accuracy value: 33.4 - type: accuracy name: Skolt Sami Test accuracy value: 30.3 - type: accuracy name: Afrikaans Test accuracy value: 83.3 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 26.9 - type: accuracy name: Belarusian Test accuracy value: 87.9 - type: accuracy name: Serbian Test accuracy value: 89.8 - type: accuracy name: Moksha Test accuracy value: 38.8 - type: accuracy name: Western Armenian Test accuracy value: 78.1 - type: accuracy name: Scottish Gaelic Test accuracy value: 58.7 - type: accuracy name: Khunsari Test accuracy value: 35.1 - type: accuracy name: Hebrew Test accuracy value: 90.6 - type: accuracy name: Uyghur Test accuracy value: 70.7 - type: accuracy name: Chukchi Test accuracy value: 28.7 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Galician This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gl") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gl") ```
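The objects loaded in the Galician card's snippet can also be handed straight to a pipeline rather than used by hand; a sketch with an arbitrarily chosen Galician sentence:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gl")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-gl")

# Reuse the already-loaded model and tokenizer inside a pipeline.
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
for token in tagger("O galego é unha lingua románica."):
    print(token["word"], token["entity"])
```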
wietsedv/xlm-roberta-base-ft-udpos28-fro
wietsedv
2022-02-25T09:58:31Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "fro", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - fro license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-fro results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 73.4 - type: accuracy name: Dutch Test accuracy value: 73.1 - type: accuracy name: German Test accuracy value: 70.7 - type: accuracy name: Italian Test accuracy value: 72.6 - type: accuracy name: French Test accuracy value: 79.3 - type: accuracy name: Spanish Test accuracy value: 78.0 - type: accuracy name: Russian Test accuracy value: 68.8 - type: accuracy name: Swedish Test accuracy value: 76.8 - type: accuracy name: Norwegian Test accuracy value: 69.6 - type: accuracy name: Danish Test accuracy value: 74.2 - type: accuracy name: Low Saxon Test accuracy value: 40.3 - type: accuracy name: Akkadian Test accuracy value: 38.3 - type: accuracy name: Armenian Test accuracy value: 64.7 - type: accuracy name: Welsh Test accuracy value: 56.3 - type: accuracy name: Old East Slavic Test accuracy value: 67.5 - type: accuracy name: Albanian Test accuracy value: 66.5 - type: accuracy name: Slovenian Test accuracy value: 64.2 - type: accuracy name: Guajajara Test accuracy value: 15.0 - type: accuracy name: Kurmanji Test accuracy value: 59.9 - type: accuracy name: Turkish Test accuracy value: 57.2 - type: accuracy name: Finnish Test accuracy value: 66.3 - type: accuracy name: Indonesian Test accuracy value: 66.9 - type: accuracy name: Ukrainian Test accuracy value: 66.7 - type: accuracy name: Polish Test accuracy value: 67.3 - type: accuracy name: Portuguese Test accuracy value: 73.1 - type: accuracy name: Kazakh Test accuracy value: 58.5 - type: accuracy name: Latin Test accuracy value: 65.3 - type: accuracy name: Old French Test accuracy value: 93.3 - type: accuracy name: Buryat Test accuracy value: 43.2 - type: accuracy name: Kaapor Test accuracy value: 25.8 - type: accuracy name: Korean Test accuracy value: 50.3 - type: accuracy name: Estonian Test accuracy value: 66.1 - type: accuracy name: Croatian Test accuracy value: 72.0 - type: accuracy name: Gothic Test accuracy value: 38.1 - type: accuracy name: Swiss German Test accuracy value: 34.6 - type: accuracy name: Assyrian Test accuracy value: 8.2 - type: accuracy name: North Sami Test accuracy value: 23.0 - type: accuracy name: Naija Test accuracy value: 40.4 - type: accuracy name: Latvian Test accuracy value: 65.2 - type: accuracy name: Chinese Test accuracy value: 36.4 - type: accuracy name: Tagalog Test accuracy value: 53.3 - type: accuracy name: Bambara Test accuracy value: 13.4 - type: accuracy name: Lithuanian Test accuracy value: 64.1 - type: accuracy name: Galician Test accuracy value: 71.6 - type: accuracy name: Vietnamese Test accuracy value: 46.7 - type: accuracy name: Greek Test accuracy value: 72.9 - type: accuracy name: Catalan Test accuracy value: 76.9 - type: accuracy name: Czech Test accuracy value: 68.8 - type: accuracy name: Erzya Test accuracy value: 25.4 - type: accuracy name: Bhojpuri Test accuracy value: 41.2 - type: accuracy name: Thai Test accuracy value: 52.2 - type: accuracy name: Marathi Test accuracy value: 51.5 - type: accuracy name: Basque Test accuracy value: 59.6 - type: accuracy name: Slovak Test accuracy value: 70.7 - type: accuracy name: Kiche Test accuracy value: 19.7 - type: accuracy name: 
Yoruba Test accuracy value: 18.3 - type: accuracy name: Warlpiri Test accuracy value: 15.8 - type: accuracy name: Tamil Test accuracy value: 62.0 - type: accuracy name: Maltese Test accuracy value: 28.1 - type: accuracy name: Ancient Greek Test accuracy value: 56.3 - type: accuracy name: Icelandic Test accuracy value: 70.6 - type: accuracy name: Mbya Guarani Test accuracy value: 16.8 - type: accuracy name: Urdu Test accuracy value: 54.2 - type: accuracy name: Romanian Test accuracy value: 69.1 - type: accuracy name: Persian Test accuracy value: 65.4 - type: accuracy name: Apurina Test accuracy value: 24.5 - type: accuracy name: Japanese Test accuracy value: 31.0 - type: accuracy name: Hungarian Test accuracy value: 62.5 - type: accuracy name: Hindi Test accuracy value: 58.3 - type: accuracy name: Classical Chinese Test accuracy value: 41.9 - type: accuracy name: Komi Permyak Test accuracy value: 30.3 - type: accuracy name: Faroese Test accuracy value: 62.5 - type: accuracy name: Sanskrit Test accuracy value: 37.8 - type: accuracy name: Livvi Test accuracy value: 40.2 - type: accuracy name: Arabic Test accuracy value: 66.2 - type: accuracy name: Wolof Test accuracy value: 26.8 - type: accuracy name: Bulgarian Test accuracy value: 72.5 - type: accuracy name: Akuntsu Test accuracy value: 24.2 - type: accuracy name: Makurap Test accuracy value: 19.2 - type: accuracy name: Kangri Test accuracy value: 36.4 - type: accuracy name: Breton Test accuracy value: 47.3 - type: accuracy name: Telugu Test accuracy value: 58.4 - type: accuracy name: Cantonese Test accuracy value: 33.5 - type: accuracy name: Old Church Slavonic Test accuracy value: 57.3 - type: accuracy name: Karelian Test accuracy value: 49.4 - type: accuracy name: Upper Sorbian Test accuracy value: 52.3 - type: accuracy name: South Levantine Arabic Test accuracy value: 48.3 - type: accuracy name: Komi Zyrian Test accuracy value: 26.6 - type: accuracy name: Irish Test accuracy value: 46.7 - type: accuracy name: Nayini Test accuracy value: 41.0 - type: accuracy name: Munduruku Test accuracy value: 15.6 - type: accuracy name: Manx Test accuracy value: 16.1 - type: accuracy name: Skolt Sami Test accuracy value: 20.0 - type: accuracy name: Afrikaans Test accuracy value: 77.0 - type: accuracy name: Old Turkish Test accuracy value: 2.7 - type: accuracy name: Tupinamba Test accuracy value: 23.5 - type: accuracy name: Belarusian Test accuracy value: 67.8 - type: accuracy name: Serbian Test accuracy value: 74.1 - type: accuracy name: Moksha Test accuracy value: 27.3 - type: accuracy name: Western Armenian Test accuracy value: 61.6 - type: accuracy name: Scottish Gaelic Test accuracy value: 42.8 - type: accuracy name: Khunsari Test accuracy value: 32.4 - type: accuracy name: Hebrew Test accuracy value: 62.5 - type: accuracy name: Uyghur Test accuracy value: 55.0 - type: accuracy name: Chukchi Test accuracy value: 20.1 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Old French This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fro") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fro") ```
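For the Old French checkpoint the call is identical; the sketch below tags the opening verse of the Chanson de Roland (chosen here only as sample input) and prints the raw pipeline output:

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-fro")
print(tagger("Carles li reis, nostre emperere magnes"))
```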
wietsedv/xlm-roberta-base-ft-udpos28-fr
wietsedv
2022-02-25T09:58:30Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "fr", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - fr license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-fr results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 87.6 - type: accuracy name: Dutch Test accuracy value: 89.0 - type: accuracy name: German Test accuracy value: 85.5 - type: accuracy name: Italian Test accuracy value: 91.7 - type: accuracy name: French Test accuracy value: 97.1 - type: accuracy name: Spanish Test accuracy value: 93.4 - type: accuracy name: Russian Test accuracy value: 91.4 - type: accuracy name: Swedish Test accuracy value: 89.6 - type: accuracy name: Norwegian Test accuracy value: 84.3 - type: accuracy name: Danish Test accuracy value: 90.2 - type: accuracy name: Low Saxon Test accuracy value: 32.4 - type: accuracy name: Akkadian Test accuracy value: 24.5 - type: accuracy name: Armenian Test accuracy value: 87.2 - type: accuracy name: Welsh Test accuracy value: 69.2 - type: accuracy name: Old East Slavic Test accuracy value: 71.5 - type: accuracy name: Albanian Test accuracy value: 78.3 - type: accuracy name: Slovenian Test accuracy value: 80.6 - type: accuracy name: Guajajara Test accuracy value: 20.3 - type: accuracy name: Kurmanji Test accuracy value: 78.9 - type: accuracy name: Turkish Test accuracy value: 77.9 - type: accuracy name: Finnish Test accuracy value: 86.5 - type: accuracy name: Indonesian Test accuracy value: 84.8 - type: accuracy name: Ukrainian Test accuracy value: 88.9 - type: accuracy name: Polish Test accuracy value: 88.1 - type: accuracy name: Portuguese Test accuracy value: 92.3 - type: accuracy name: Kazakh Test accuracy value: 82.9 - type: accuracy name: Latin Test accuracy value: 79.6 - type: accuracy name: Old French Test accuracy value: 68.2 - type: accuracy name: Buryat Test accuracy value: 53.6 - type: accuracy name: Kaapor Test accuracy value: 15.0 - type: accuracy name: Korean Test accuracy value: 64.3 - type: accuracy name: Estonian Test accuracy value: 87.5 - type: accuracy name: Croatian Test accuracy value: 89.5 - type: accuracy name: Gothic Test accuracy value: 11.6 - type: accuracy name: Swiss German Test accuracy value: 39.5 - type: accuracy name: Assyrian Test accuracy value: 14.8 - type: accuracy name: North Sami Test accuracy value: 27.0 - type: accuracy name: Naija Test accuracy value: 36.9 - type: accuracy name: Latvian Test accuracy value: 87.7 - type: accuracy name: Chinese Test accuracy value: 44.1 - type: accuracy name: Tagalog Test accuracy value: 72.8 - type: accuracy name: Bambara Test accuracy value: 24.7 - type: accuracy name: Lithuanian Test accuracy value: 86.9 - type: accuracy name: Galician Test accuracy value: 91.6 - type: accuracy name: Vietnamese Test accuracy value: 67.0 - type: accuracy name: Greek Test accuracy value: 88.0 - type: accuracy name: Catalan Test accuracy value: 92.5 - type: accuracy name: Czech Test accuracy value: 89.7 - type: accuracy name: Erzya Test accuracy value: 41.2 - type: accuracy name: Bhojpuri Test accuracy value: 48.9 - type: accuracy name: Thai Test accuracy value: 56.3 - type: accuracy name: Marathi Test accuracy value: 83.4 - type: accuracy name: Basque Test accuracy value: 75.9 - type: accuracy name: Slovak Test accuracy value: 91.1 - type: accuracy name: Kiche Test accuracy value: 32.5 - type: accuracy name: 
Yoruba Test accuracy value: 19.4 - type: accuracy name: Warlpiri Test accuracy value: 26.3 - type: accuracy name: Tamil Test accuracy value: 83.5 - type: accuracy name: Maltese Test accuracy value: 17.4 - type: accuracy name: Ancient Greek Test accuracy value: 60.2 - type: accuracy name: Icelandic Test accuracy value: 83.2 - type: accuracy name: Mbya Guarani Test accuracy value: 26.1 - type: accuracy name: Urdu Test accuracy value: 67.5 - type: accuracy name: Romanian Test accuracy value: 87.1 - type: accuracy name: Persian Test accuracy value: 78.6 - type: accuracy name: Apurina Test accuracy value: 26.1 - type: accuracy name: Japanese Test accuracy value: 32.3 - type: accuracy name: Hungarian Test accuracy value: 86.3 - type: accuracy name: Hindi Test accuracy value: 73.7 - type: accuracy name: Classical Chinese Test accuracy value: 28.4 - type: accuracy name: Komi Permyak Test accuracy value: 35.0 - type: accuracy name: Faroese Test accuracy value: 75.7 - type: accuracy name: Sanskrit Test accuracy value: 17.9 - type: accuracy name: Livvi Test accuracy value: 53.2 - type: accuracy name: Arabic Test accuracy value: 83.1 - type: accuracy name: Wolof Test accuracy value: 24.6 - type: accuracy name: Bulgarian Test accuracy value: 90.9 - type: accuracy name: Akuntsu Test accuracy value: 35.2 - type: accuracy name: Makurap Test accuracy value: 13.0 - type: accuracy name: Kangri Test accuracy value: 43.0 - type: accuracy name: Breton Test accuracy value: 67.7 - type: accuracy name: Telugu Test accuracy value: 83.6 - type: accuracy name: Cantonese Test accuracy value: 51.6 - type: accuracy name: Old Church Slavonic Test accuracy value: 43.3 - type: accuracy name: Karelian Test accuracy value: 67.3 - type: accuracy name: Upper Sorbian Test accuracy value: 65.1 - type: accuracy name: South Levantine Arabic Test accuracy value: 69.3 - type: accuracy name: Komi Zyrian Test accuracy value: 29.5 - type: accuracy name: Irish Test accuracy value: 69.4 - type: accuracy name: Nayini Test accuracy value: 48.7 - type: accuracy name: Munduruku Test accuracy value: 19.9 - type: accuracy name: Manx Test accuracy value: 27.6 - type: accuracy name: Skolt Sami Test accuracy value: 26.9 - type: accuracy name: Afrikaans Test accuracy value: 84.9 - type: accuracy name: Old Turkish Test accuracy value: 38.0 - type: accuracy name: Tupinamba Test accuracy value: 22.8 - type: accuracy name: Belarusian Test accuracy value: 89.5 - type: accuracy name: Serbian Test accuracy value: 90.8 - type: accuracy name: Moksha Test accuracy value: 39.0 - type: accuracy name: Western Armenian Test accuracy value: 76.8 - type: accuracy name: Scottish Gaelic Test accuracy value: 60.0 - type: accuracy name: Khunsari Test accuracy value: 35.1 - type: accuracy name: Hebrew Test accuracy value: 94.8 - type: accuracy name: Uyghur Test accuracy value: 75.2 - type: accuracy name: Chukchi Test accuracy value: 30.9 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: French This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fr") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fr") ```
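A quick way to see which labels the French head predicts, sketched here for illustration (the French sentence is arbitrary), is to read them off the pipeline's model config before tagging:

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-fr")
print(sorted(set(tagger.model.config.id2label.values())))  # tag inventory of the classification head

for token in tagger("Le chat dort sur le canapé."):
    print(token["word"], token["entity"])
```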
wietsedv/xlm-roberta-base-ft-udpos28-et
wietsedv
2022-02-25T09:58:22Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "et", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - et license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-et results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 82.3 - type: accuracy name: Dutch Test accuracy value: 80.9 - type: accuracy name: German Test accuracy value: 80.4 - type: accuracy name: Italian Test accuracy value: 78.0 - type: accuracy name: French Test accuracy value: 75.6 - type: accuracy name: Spanish Test accuracy value: 75.4 - type: accuracy name: Russian Test accuracy value: 88.2 - type: accuracy name: Swedish Test accuracy value: 89.1 - type: accuracy name: Norwegian Test accuracy value: 83.2 - type: accuracy name: Danish Test accuracy value: 87.0 - type: accuracy name: Low Saxon Test accuracy value: 52.2 - type: accuracy name: Akkadian Test accuracy value: 37.9 - type: accuracy name: Armenian Test accuracy value: 87.7 - type: accuracy name: Welsh Test accuracy value: 61.5 - type: accuracy name: Old East Slavic Test accuracy value: 74.6 - type: accuracy name: Albanian Test accuracy value: 74.0 - type: accuracy name: Slovenian Test accuracy value: 77.3 - type: accuracy name: Guajajara Test accuracy value: 30.7 - type: accuracy name: Kurmanji Test accuracy value: 76.7 - type: accuracy name: Turkish Test accuracy value: 79.3 - type: accuracy name: Finnish Test accuracy value: 90.5 - type: accuracy name: Indonesian Test accuracy value: 84.1 - type: accuracy name: Ukrainian Test accuracy value: 86.9 - type: accuracy name: Polish Test accuracy value: 84.4 - type: accuracy name: Portuguese Test accuracy value: 79.6 - type: accuracy name: Kazakh Test accuracy value: 83.0 - type: accuracy name: Latin Test accuracy value: 78.5 - type: accuracy name: Old French Test accuracy value: 50.0 - type: accuracy name: Buryat Test accuracy value: 64.6 - type: accuracy name: Kaapor Test accuracy value: 21.2 - type: accuracy name: Korean Test accuracy value: 62.9 - type: accuracy name: Estonian Test accuracy value: 96.8 - type: accuracy name: Croatian Test accuracy value: 87.0 - type: accuracy name: Gothic Test accuracy value: 24.7 - type: accuracy name: Swiss German Test accuracy value: 40.7 - type: accuracy name: Assyrian Test accuracy value: 20.1 - type: accuracy name: North Sami Test accuracy value: 46.7 - type: accuracy name: Naija Test accuracy value: 41.8 - type: accuracy name: Latvian Test accuracy value: 87.9 - type: accuracy name: Chinese Test accuracy value: 52.1 - type: accuracy name: Tagalog Test accuracy value: 65.9 - type: accuracy name: Bambara Test accuracy value: 27.9 - type: accuracy name: Lithuanian Test accuracy value: 86.0 - type: accuracy name: Galician Test accuracy value: 74.4 - type: accuracy name: Vietnamese Test accuracy value: 63.7 - type: accuracy name: Greek Test accuracy value: 77.4 - type: accuracy name: Catalan Test accuracy value: 73.4 - type: accuracy name: Czech Test accuracy value: 87.4 - type: accuracy name: Erzya Test accuracy value: 53.1 - type: accuracy name: Bhojpuri Test accuracy value: 52.4 - type: accuracy name: Thai Test accuracy value: 62.6 - type: accuracy name: Marathi Test accuracy value: 88.3 - type: accuracy name: Basque Test accuracy value: 77.1 - type: accuracy name: Slovak Test accuracy value: 87.0 - type: accuracy name: Kiche Test accuracy value: 37.8 - type: accuracy name: 
Yoruba Test accuracy value: 26.7 - type: accuracy name: Warlpiri Test accuracy value: 42.1 - type: accuracy name: Tamil Test accuracy value: 85.4 - type: accuracy name: Maltese Test accuracy value: 30.9 - type: accuracy name: Ancient Greek Test accuracy value: 65.9 - type: accuracy name: Icelandic Test accuracy value: 82.9 - type: accuracy name: Mbya Guarani Test accuracy value: 30.6 - type: accuracy name: Urdu Test accuracy value: 67.0 - type: accuracy name: Romanian Test accuracy value: 78.5 - type: accuracy name: Persian Test accuracy value: 73.9 - type: accuracy name: Apurina Test accuracy value: 47.9 - type: accuracy name: Japanese Test accuracy value: 38.9 - type: accuracy name: Hungarian Test accuracy value: 83.2 - type: accuracy name: Hindi Test accuracy value: 71.6 - type: accuracy name: Classical Chinese Test accuracy value: 35.4 - type: accuracy name: Komi Permyak Test accuracy value: 53.2 - type: accuracy name: Faroese Test accuracy value: 76.4 - type: accuracy name: Sanskrit Test accuracy value: 38.8 - type: accuracy name: Livvi Test accuracy value: 71.2 - type: accuracy name: Arabic Test accuracy value: 76.3 - type: accuracy name: Wolof Test accuracy value: 35.3 - type: accuracy name: Bulgarian Test accuracy value: 85.8 - type: accuracy name: Akuntsu Test accuracy value: 37.5 - type: accuracy name: Makurap Test accuracy value: 15.8 - type: accuracy name: Kangri Test accuracy value: 51.7 - type: accuracy name: Breton Test accuracy value: 60.1 - type: accuracy name: Telugu Test accuracy value: 84.2 - type: accuracy name: Cantonese Test accuracy value: 58.3 - type: accuracy name: Old Church Slavonic Test accuracy value: 51.8 - type: accuracy name: Karelian Test accuracy value: 75.7 - type: accuracy name: Upper Sorbian Test accuracy value: 77.3 - type: accuracy name: South Levantine Arabic Test accuracy value: 68.8 - type: accuracy name: Komi Zyrian Test accuracy value: 46.6 - type: accuracy name: Irish Test accuracy value: 60.5 - type: accuracy name: Nayini Test accuracy value: 42.3 - type: accuracy name: Munduruku Test accuracy value: 27.1 - type: accuracy name: Manx Test accuracy value: 35.3 - type: accuracy name: Skolt Sami Test accuracy value: 40.7 - type: accuracy name: Afrikaans Test accuracy value: 77.5 - type: accuracy name: Old Turkish Test accuracy value: 46.6 - type: accuracy name: Tupinamba Test accuracy value: 46.5 - type: accuracy name: Belarusian Test accuracy value: 87.1 - type: accuracy name: Serbian Test accuracy value: 86.9 - type: accuracy name: Moksha Test accuracy value: 48.3 - type: accuracy name: Western Armenian Test accuracy value: 80.6 - type: accuracy name: Scottish Gaelic Test accuracy value: 51.5 - type: accuracy name: Khunsari Test accuracy value: 40.5 - type: accuracy name: Hebrew Test accuracy value: 89.6 - type: accuracy name: Uyghur Test accuracy value: 77.1 - type: accuracy name: Chukchi Test accuracy value: 38.9 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Estonian This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-et") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-et") ```
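Likewise for Estonian, a minimal sketch (the sentence, "Tallinn is the capital of Estonia", is an arbitrary example) that keeps only the predicted tag sequence:

```python
from transformers import pipeline

tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-et")
predicted_tags = [t["entity"] for t in tagger("Tallinn on Eesti pealinn.")]
print(predicted_tags)
```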
wietsedv/xlm-roberta-base-ft-udpos28-es
wietsedv
2022-02-25T09:58:20Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "part-of-speech", "es", "dataset:universal_dependencies", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - es license: apache-2.0 library_name: transformers tags: - part-of-speech - token-classification datasets: - universal_dependencies metrics: - accuracy model-index: - name: xlm-roberta-base-ft-udpos28-es results: - task: type: token-classification name: Part-of-Speech Tagging dataset: type: universal_dependencies name: Universal Dependencies v2.8 metrics: - type: accuracy name: English Test accuracy value: 88.3 - type: accuracy name: Dutch Test accuracy value: 88.6 - type: accuracy name: German Test accuracy value: 83.9 - type: accuracy name: Italian Test accuracy value: 90.7 - type: accuracy name: French Test accuracy value: 91.0 - type: accuracy name: Spanish Test accuracy value: 95.5 - type: accuracy name: Russian Test accuracy value: 89.9 - type: accuracy name: Swedish Test accuracy value: 89.8 - type: accuracy name: Norwegian Test accuracy value: 84.7 - type: accuracy name: Danish Test accuracy value: 90.8 - type: accuracy name: Low Saxon Test accuracy value: 43.6 - type: accuracy name: Akkadian Test accuracy value: 20.7 - type: accuracy name: Armenian Test accuracy value: 84.7 - type: accuracy name: Welsh Test accuracy value: 66.5 - type: accuracy name: Old East Slavic Test accuracy value: 74.0 - type: accuracy name: Albanian Test accuracy value: 79.3 - type: accuracy name: Slovenian Test accuracy value: 79.6 - type: accuracy name: Guajajara Test accuracy value: 21.0 - type: accuracy name: Kurmanji Test accuracy value: 77.6 - type: accuracy name: Turkish Test accuracy value: 77.2 - type: accuracy name: Finnish Test accuracy value: 85.5 - type: accuracy name: Indonesian Test accuracy value: 86.3 - type: accuracy name: Ukrainian Test accuracy value: 87.3 - type: accuracy name: Polish Test accuracy value: 87.6 - type: accuracy name: Portuguese Test accuracy value: 92.9 - type: accuracy name: Kazakh Test accuracy value: 80.9 - type: accuracy name: Latin Test accuracy value: 78.9 - type: accuracy name: Old French Test accuracy value: 59.6 - type: accuracy name: Buryat Test accuracy value: 57.6 - type: accuracy name: Kaapor Test accuracy value: 15.0 - type: accuracy name: Korean Test accuracy value: 61.9 - type: accuracy name: Estonian Test accuracy value: 88.3 - type: accuracy name: Croatian Test accuracy value: 88.9 - type: accuracy name: Gothic Test accuracy value: 16.0 - type: accuracy name: Swiss German Test accuracy value: 36.9 - type: accuracy name: Assyrian Test accuracy value: 15.9 - type: accuracy name: North Sami Test accuracy value: 29.4 - type: accuracy name: Naija Test accuracy value: 38.2 - type: accuracy name: Latvian Test accuracy value: 85.6 - type: accuracy name: Chinese Test accuracy value: 37.8 - type: accuracy name: Tagalog Test accuracy value: 74.4 - type: accuracy name: Bambara Test accuracy value: 25.9 - type: accuracy name: Lithuanian Test accuracy value: 84.7 - type: accuracy name: Galician Test accuracy value: 89.7 - type: accuracy name: Vietnamese Test accuracy value: 65.7 - type: accuracy name: Greek Test accuracy value: 86.9 - type: accuracy name: Catalan Test accuracy value: 95.7 - type: accuracy name: Czech Test accuracy value: 89.5 - type: accuracy name: Erzya Test accuracy value: 41.4 - type: accuracy name: Bhojpuri Test accuracy value: 47.5 - type: accuracy name: Thai Test accuracy value: 52.3 - type: accuracy name: Marathi Test accuracy value: 85.3 - type: accuracy name: Basque Test accuracy value: 74.9 - type: accuracy name: Slovak Test accuracy value: 90.2 - type: accuracy name: Kiche Test accuracy value: 28.2 - type: accuracy name: 
Yoruba Test accuracy value: 21.6 - type: accuracy name: Warlpiri Test accuracy value: 26.3 - type: accuracy name: Tamil Test accuracy value: 81.8 - type: accuracy name: Maltese Test accuracy value: 18.6 - type: accuracy name: Ancient Greek Test accuracy value: 60.6 - type: accuracy name: Icelandic Test accuracy value: 83.5 - type: accuracy name: Mbya Guarani Test accuracy value: 25.6 - type: accuracy name: Urdu Test accuracy value: 65.0 - type: accuracy name: Romanian Test accuracy value: 84.6 - type: accuracy name: Persian Test accuracy value: 78.2 - type: accuracy name: Apurina Test accuracy value: 25.0 - type: accuracy name: Japanese Test accuracy value: 23.8 - type: accuracy name: Hungarian Test accuracy value: 86.8 - type: accuracy name: Hindi Test accuracy value: 69.0 - type: accuracy name: Classical Chinese Test accuracy value: 29.7 - type: accuracy name: Komi Permyak Test accuracy value: 44.8 - type: accuracy name: Faroese Test accuracy value: 76.1 - type: accuracy name: Sanskrit Test accuracy value: 24.0 - type: accuracy name: Livvi Test accuracy value: 58.5 - type: accuracy name: Arabic Test accuracy value: 79.4 - type: accuracy name: Wolof Test accuracy value: 25.1 - type: accuracy name: Bulgarian Test accuracy value: 90.3 - type: accuracy name: Akuntsu Test accuracy value: 24.2 - type: accuracy name: Makurap Test accuracy value: 8.2 - type: accuracy name: Kangri Test accuracy value: 43.1 - type: accuracy name: Breton Test accuracy value: 64.1 - type: accuracy name: Telugu Test accuracy value: 84.0 - type: accuracy name: Cantonese Test accuracy value: 48.2 - type: accuracy name: Old Church Slavonic Test accuracy value: 52.8 - type: accuracy name: Karelian Test accuracy value: 68.6 - type: accuracy name: Upper Sorbian Test accuracy value: 69.8 - type: accuracy name: South Levantine Arabic Test accuracy value: 65.9 - type: accuracy name: Komi Zyrian Test accuracy value: 33.8 - type: accuracy name: Irish Test accuracy value: 68.5 - type: accuracy name: Nayini Test accuracy value: 34.6 - type: accuracy name: Munduruku Test accuracy value: 11.6 - type: accuracy name: Manx Test accuracy value: 28.7 - type: accuracy name: Skolt Sami Test accuracy value: 27.7 - type: accuracy name: Afrikaans Test accuracy value: 86.3 - type: accuracy name: Old Turkish Test accuracy value: 37.1 - type: accuracy name: Tupinamba Test accuracy value: 23.5 - type: accuracy name: Belarusian Test accuracy value: 87.2 - type: accuracy name: Serbian Test accuracy value: 90.6 - type: accuracy name: Moksha Test accuracy value: 37.5 - type: accuracy name: Western Armenian Test accuracy value: 77.5 - type: accuracy name: Scottish Gaelic Test accuracy value: 55.8 - type: accuracy name: Khunsari Test accuracy value: 36.5 - type: accuracy name: Hebrew Test accuracy value: 92.7 - type: accuracy name: Uyghur Test accuracy value: 73.9 - type: accuracy name: Chukchi Test accuracy value: 33.2 --- # XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Spanish This model is part of our paper called: - Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. ## Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-es") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-es") ```
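And for the Spanish checkpoint, a sketch (arbitrary example sentence) that summarizes the output as a tag histogram instead of printing every piece:

```python
from collections import Counter
from transformers import pipeline

tagger = pipeline("token-classification", model="wietsedv/xlm-roberta-base-ft-udpos28-es")
print(Counter(t["entity"] for t in tagger("El gato duerme en el sofá.")))
```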
anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-6
anas-awadalla
2022-02-25T09:45:24Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-32-finetuned-squad-seed-6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-few-shot-k-32-finetuned-squad-seed-6

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
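The auto-generated card above gives no inference example. A minimal sketch with the question-answering pipeline (the question and context are invented toy inputs) would be:

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-6")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.")
print(result["answer"], result["score"])
```

The same call works for the seed-4 and seed-2 variants listed below; only the model identifier changes.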
anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-4
anas-awadalla
2022-02-25T09:28:29Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-32-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-few-shot-k-32-finetuned-squad-seed-4

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
anas-awadalla/roberta-base-few-shot-k-32-finetuned-squad-seed-2
anas-awadalla
2022-02-25T09:11:30Z
4
0
transformers
[ "transformers", "pytorch", "roberta", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-few-shot-k-32-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# roberta-base-few-shot-k-32-finetuned-squad-seed-2

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
anantoj/wav2vec2-xls-r-300m-adult-child-cls
anantoj
2022-02-25T07:47:57Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: wav2vec2-xls-r-300m-adult-child-cls
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m-adult-child-cls

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1770
- Accuracy: 0.9404
- F1: 0.9440

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.25          | 1.0   | 383  | 0.2516          | 0.9077   | 0.9106 |
| 0.2052        | 2.0   | 766  | 0.2138          | 0.9321   | 0.9353 |
| 0.1901        | 3.0   | 1149 | 0.1770          | 0.9404   | 0.9440 |
| 0.2255        | 4.0   | 1532 | 0.1794          | 0.9404   | 0.9440 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
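## How to use (sketch)

A minimal, untested sketch using the `transformers` audio-classification pipeline; the file path is a placeholder, and decoding an audio file this way requires `ffmpeg` (or pass a preloaded waveform instead).

```python
from transformers import pipeline

# Hypothetical quick-start; the model id comes from this card.
classifier = pipeline(
    "audio-classification",
    model="anantoj/wav2vec2-xls-r-300m-adult-child-cls",
)

# "speech.wav" is a placeholder path to a 16 kHz mono recording.
predictions = classifier("speech.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```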
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
anas-awadalla
2022-02-25T06:39:41Z
5
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
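## How to use (sketch)

A minimal, untested sketch for this checkpoint with the `transformers` question-answering pipeline; the inputs are invented for illustration, and the same call works for the other bert-base-uncased few-shot SQuAD checkpoints in this series with the model id swapped.

```python
from transformers import pipeline

# Hypothetical quick-start; the model id comes from this card.
qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8",
)

# Invented example inputs, for illustration only.
print(qa(
    question="Which base model was fine-tuned?",
    context="This checkpoint was produced by fine-tuning bert-base-uncased on a small sample of SQuAD.",
))
```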
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2
anas-awadalla
2022-02-25T05:48:11Z
13
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
BigSalmon/GPTNeo350MInformalToFormalLincoln3
BigSalmon
2022-02-25T05:04:02Z
20
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main

```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
```

```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```

```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```

```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```

```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
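Putting the loading code and prompt format together, a sampling call might look like the following untested sketch; the prompt is an invented example in the informal-to-formal format shown above, and the decoding settings are illustrative rather than recommended by the author.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln3")

# Invented prompt following the "informal english" format from the card.
prompt = (
    "informal english: the movie was way too long, but the ending made it worth it.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```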
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8
anas-awadalla
2022-02-25T04:58:07Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
anas-awadalla
2022-02-25T04:42:31Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2
anas-awadalla
2022-02-25T04:11:20Z
6
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
hfl/chinese-pert-large
hfl
2022-02-25T04:09:23Z
61
10
transformers
[ "transformers", "pytorch", "tf", "bert", "feature-extraction", "zh", "license:cc-by-nc-sa-4.0", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language:
- zh
license: "cc-by-nc-sa-4.0"
---

# Please use 'Bert' related functions to load this model!

Under construction...

Please visit our GitHub repo for more information: https://github.com/ymcui/PERT
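Following the note above that BERT classes should be used, a minimal, untested feature-extraction sketch (the example sentence is arbitrary) could look like this:

```python
import torch
from transformers import BertTokenizer, BertModel

# Load PERT with the BERT classes, as instructed in the card.
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-pert-large")
model = BertModel.from_pretrained("hfl/chinese-pert-large")

inputs = tokenizer("欢迎使用 PERT 模型", return_tensors="pt")  # arbitrary example sentence
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```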
ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000
ASCCCCCCCC
2022-02-25T03:38:48Z
20
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-amazon_zh_20000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-amazon_zh_20000

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3516
- Accuracy: 0.414

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4343        | 1.0   | 1250 | 1.3516          | 0.414    |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
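## How to use (sketch)

A minimal, untested sketch with the `transformers` text-classification pipeline; the review text is invented, and the label names and meaning depend on the (unspecified) fine-tuning data.

```python
from transformers import pipeline

# Hypothetical quick-start; the model id comes from this card.
classifier = pipeline(
    "text-classification",
    model="ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000",
)

# Invented review text, for illustration only.
print(classifier("这个产品质量很好，物流也很快。"))
```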
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8
anas-awadalla
2022-02-25T03:25:26Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
anas-awadalla
2022-02-25T03:10:43Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
anas-awadalla
2022-02-25T02:55:57Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
shields/wav2vec2-base-20sec-timit-and-dementiabank
shields
2022-02-25T02:39:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-20sec-timit-and-dementiabank
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-20sec-timit-and-dementiabank

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4338
- Wer: 0.2313

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6839        | 2.53  | 500  | 2.7287          | 1.0    |
| 0.8708        | 5.05  | 1000 | 0.5004          | 0.3490 |
| 0.2879        | 7.58  | 1500 | 0.4411          | 0.2872 |
| 0.1877        | 10.1  | 2000 | 0.4359          | 0.2594 |
| 0.1617        | 12.63 | 2500 | 0.4404          | 0.2492 |
| 0.1295        | 15.15 | 3000 | 0.4356          | 0.2418 |
| 0.1146        | 17.68 | 3500 | 0.4338          | 0.2313 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
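## How to use (sketch)

A minimal, untested sketch with the `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder for a 16 kHz mono recording.

```python
from transformers import pipeline

# Hypothetical quick-start; the model id comes from this card.
asr = pipeline(
    "automatic-speech-recognition",
    model="shields/wav2vec2-base-20sec-timit-and-dementiabank",
)

# "sample.wav" is a placeholder path.
print(asr("sample.wav")["text"])
```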
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0
anas-awadalla
2022-02-25T02:26:29Z
6
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
anas-awadalla
2022-02-25T02:11:47Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200

### Training results

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3