| column | type |
|:--|:--|
| modelId | string |
| author | string |
| last_modified | timestamp[us, tz=UTC] |
| downloads | int64 |
| likes | int64 |
| library_name | string |
| tags | sequence |
| pipeline_tag | string |
| createdAt | timestamp[us, tz=UTC] |
| card | string |
9pinus/macbert-base-chinese-medicine-recognition
9pinus
2022-03-02T09:20:41Z
33
5
transformers
[ "transformers", "pytorch", "bert", "token-classification", "Token Classification", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - Token Classification language: - zh --- ## Model description This model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset. ## Intended use ```python >>> from transformers import (AutoModelForTokenClassification, AutoTokenizer) >>> from transformers import pipeline >>> hub_model_id = "9pinus/macbert-base-chinese-medicine-recognition" >>> model = AutoModelForTokenClassification.from_pretrained(hub_model_id) >>> tokenizer = AutoTokenizer.from_pretrained(hub_model_id) >>> classifier = pipeline('ner', model=model, tokenizer=tokenizer) >>> result = classifier("如果病情较重,可适当口服甲硝唑片、环酯红霉素片、吲哚美辛片等药物进行抗感染镇痛。") >>> for item in result: >>> if item['entity'] == 1 or item['entity'] == 2: >>> print(item) {'entity': 1, 'score': 0.99999595, 'index': 13, 'word': '甲', 'start': 12, 'end': 13} {'entity': 2, 'score': 0.9999957, 'index': 14, 'word': '硝', 'start': 13, 'end': 14} {'entity': 2, 'score': 0.99999166, 'index': 15, 'word': '唑', 'start': 14, 'end': 15} {'entity': 2, 'score': 0.99898833, 'index': 16, 'word': '片', 'start': 15, 'end': 16} {'entity': 1, 'score': 0.9999864, 'index': 18, 'word': '环', 'start': 17, 'end': 18} {'entity': 2, 'score': 0.99999404, 'index': 19, 'word': '酯', 'start': 18, 'end': 19} {'entity': 2, 'score': 0.99999475, 'index': 20, 'word': '红', 'start': 19, 'end': 20} {'entity': 2, 'score': 0.9999964, 'index': 21, 'word': '霉', 'start': 20, 'end': 21} {'entity': 2, 'score': 0.9999951, 'index': 22, 'word': '素', 'start': 21, 'end': 22} {'entity': 2, 'score': 0.9990088, 'index': 23, 'word': '片', 'start': 22, 'end': 23} {'entity': 1, 'score': 0.9999975, 'index': 25, 'word': '吲', 'start': 24, 'end': 25} {'entity': 2, 'score': 0.9999957, 'index': 26, 'word': '哚', 'start': 25, 'end': 26} {'entity': 2, 'score': 0.9999945, 'index': 27, 'word': '美', 'start': 26, 'end': 27} {'entity': 2, 'score': 0.9999933, 'index': 28, 'word': '辛', 'start': 27, 'end': 28} {'entity': 2, 'score': 0.99949837, 'index': 29, 'word': '片', 'start': 28, 'end': 29} ``` ## Training and evaluation data ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingartists/pink-floyd
huggingartists
2022-03-02T09:18:41Z
3
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/pink-floyd", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/pink-floyd tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/6b5c50912d99c3cf0eabfec5f427c452.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Pink Floyd</div> <a href="https://genius.com/artists/pink-floyd"> <div style="text-align: center; font-size: 14px;">@pink-floyd</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Pink Floyd. The dataset is available [here](https://huggingface.co/datasets/huggingartists/pink-floyd) and can be loaded with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/pink-floyd") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3j9osgks/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Pink Floyd's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1wlqpngf) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1wlqpngf/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/pink-floyd') generator("I am", num_return_sequences=5) ``` Or with the Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/pink-floyd") model = AutoModelWithLMHead.from_pretrained("huggingartists/pink-floyd") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the lyrics present in the training data further affect the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
Akash7897/distilbert-base-uncased-finetuned-cola
Akash7897
2022-03-02T08:29:47Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.522211073949747 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0789 - Matthews Correlation: 0.5222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.1472 | 1.0 | 535 | 0.8407 | 0.4915 | | 0.1365 | 2.0 | 1070 | 0.9236 | 0.4990 | | 0.1194 | 3.0 | 1605 | 0.8753 | 0.4953 | | 0.1313 | 4.0 | 2140 | 0.9684 | 0.5013 | | 0.0895 | 5.0 | 2675 | 1.0789 | 0.5222 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
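The card above for Akash7897/distilbert-base-uncased-finetuned-cola lists only training details; a minimal inference sketch (assuming the checkpoint keeps the Trainer's default LABEL_0/LABEL_1 label names) could look like:

```python
from transformers import pipeline

# Load the fine-tuned CoLA checkpoint. Label names default to LABEL_0/LABEL_1
# unless id2label was customized at training time (assumption).
classifier = pipeline(
    "text-classification",
    model="Akash7897/distilbert-base-uncased-finetuned-cola",
)

# CoLA is a grammatical-acceptability task: one label and score per sentence.
print(classifier("The book was written by the author."))
print(classifier("The book wrote by author the."))
```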
Theivaprakasham/layoutlmv2-finetuned-sroie
Theivaprakasham
2022-03-02T08:12:26Z
21
2
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "dataset:sroie", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - sroie model-index: - name: layoutlmv2-finetuned-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-sroie This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the sroie dataset. It achieves the following results on the evaluation set: - Loss: 0.0291 - Address Precision: 0.9341 - Address Recall: 0.9395 - Address F1: 0.9368 - Address Number: 347 - Company Precision: 0.9570 - Company Recall: 0.9625 - Company F1: 0.9598 - Company Number: 347 - Date Precision: 0.9885 - Date Recall: 0.9885 - Date F1: 0.9885 - Date Number: 347 - Total Precision: 0.9253 - Total Recall: 0.9280 - Total F1: 0.9266 - Total Number: 347 - Overall Precision: 0.9512 - Overall Recall: 0.9546 - Overall F1: 0.9529 - Overall Accuracy: 0.9961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Address Precision | Address Recall | Address F1 | Address Number | Company Precision | Company Recall | Company F1 | Company Number | Date Precision | Date Recall | Date F1 | Date Number | Total Precision | Total Recall | Total F1 | Total Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------------:|:--------------:|:----------:|:--------------:|:--------------:|:-----------:|:-------:|:-----------:|:---------------:|:------------:|:--------:|:------------:|:-----------------:|:--------------:|:----------:|:----------------:| | No log | 0.05 | 157 | 0.8162 | 0.3670 | 0.7233 | 0.4869 | 347 | 0.0617 | 0.0144 | 0.0234 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.3346 | 0.1844 | 0.2378 | 0.9342 | | No log | 1.05 | 314 | 0.3490 | 0.8564 | 0.8934 | 0.8745 | 347 | 0.8610 | 0.9280 | 0.8932 | 347 | 0.7297 | 0.8559 | 0.7878 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.8128 | 0.6693 | 0.7341 | 0.9826 | | No log | 2.05 | 471 | 0.1845 | 0.7970 | 0.9049 | 0.8475 | 347 | 0.9211 | 0.9424 | 0.9316 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.8978 | 0.7089 | 0.7923 | 0.9835 | | 0.7027 | 3.05 | 628 | 0.1194 | 0.9040 | 0.9222 | 0.9130 | 347 | 0.8880 | 0.9135 | 0.9006 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.0 | 0.0 | 0.0 | 347 | 0.9263 | 0.7061 | 0.8013 | 0.9853 | | 0.7027 | 4.05 | 785 | 0.0762 | 0.9397 | 0.9424 | 0.9410 | 347 | 0.8889 | 0.9222 | 0.9052 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.7740 | 0.9078 | 0.8355 | 347 | 0.8926 | 0.9402 | 0.9158 | 0.9928 | | 0.7027 | 5.05 | 942 | 0.0564 | 0.9282 | 0.9308 | 0.9295 | 347 | 0.9296 | 0.9510 | 0.9402 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.7801 | 0.8588 | 0.8176 | 347 | 0.9036 | 0.9323 | 0.9177 | 0.9946 | | 0.0935 | 6.05 | 1099 | 0.0548 | 0.9222 | 0.9222 | 0.9222 | 347 
| 0.6975 | 0.7378 | 0.7171 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.8608 | 0.8732 | 0.8670 | 347 | 0.8648 | 0.8804 | 0.8725 | 0.9921 | | 0.0935 | 7.05 | 1256 | 0.0410 | 0.92 | 0.9280 | 0.9240 | 347 | 0.9486 | 0.9568 | 0.9527 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9091 | 0.9222 | 0.9156 | 347 | 0.9414 | 0.9488 | 0.9451 | 0.9961 | | 0.0935 | 8.05 | 1413 | 0.0369 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9569 | 0.9597 | 0.9583 | 347 | 0.9772 | 0.9885 | 0.9828 | 347 | 0.9143 | 0.9222 | 0.9182 | 347 | 0.9463 | 0.9524 | 0.9494 | 0.9960 | | 0.038 | 9.05 | 1570 | 0.0343 | 0.9282 | 0.9308 | 0.9295 | 347 | 0.9624 | 0.9597 | 0.9610 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9206 | 0.9020 | 0.9112 | 347 | 0.9500 | 0.9452 | 0.9476 | 0.9958 | | 0.038 | 10.05 | 1727 | 0.0317 | 0.9395 | 0.9395 | 0.9395 | 347 | 0.9598 | 0.9625 | 0.9612 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9280 | 0.9280 | 0.9280 | 347 | 0.9539 | 0.9546 | 0.9543 | 0.9963 | | 0.038 | 11.05 | 1884 | 0.0312 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9514 | 0.9597 | 0.9555 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9226 | 0.9280 | 0.9253 | 347 | 0.9498 | 0.9539 | 0.9518 | 0.9960 | | 0.0236 | 12.05 | 2041 | 0.0318 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9043 | 0.8991 | 0.9017 | 347 | 0.9467 | 0.9474 | 0.9471 | 0.9956 | | 0.0236 | 13.05 | 2198 | 0.0291 | 0.9337 | 0.9337 | 0.9337 | 347 | 0.9598 | 0.9625 | 0.9612 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9164 | 0.9164 | 0.9164 | 347 | 0.9496 | 0.9503 | 0.9499 | 0.9960 | | 0.0236 | 14.05 | 2355 | 0.0300 | 0.9286 | 0.9366 | 0.9326 | 347 | 0.9459 | 0.9568 | 0.9513 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9275 | 0.9222 | 0.9249 | 347 | 0.9476 | 0.9510 | 0.9493 | 0.9959 | | 0.0178 | 15.05 | 2512 | 0.0307 | 0.9366 | 0.9366 | 0.9366 | 347 | 0.9513 | 0.9568 | 0.9540 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9275 | 0.9222 | 0.9249 | 347 | 0.9510 | 0.9510 | 0.9510 | 0.9959 | | 0.0178 | 16.05 | 2669 | 0.0300 | 0.9312 | 0.9366 | 0.9339 | 347 | 0.9543 | 0.9625 | 0.9584 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9171 | 0.9251 | 0.9211 | 347 | 0.9477 | 0.9532 | 0.9504 | 0.9959 | | 0.0178 | 17.05 | 2826 | 0.0292 | 0.9368 | 0.9395 | 0.9381 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9519 | 0.9546 | 0.9532 | 0.9961 | | 0.0178 | 18.05 | 2983 | 0.0291 | 0.9341 | 0.9395 | 0.9368 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9512 | 0.9546 | 0.9529 | 0.9961 | | 0.0149 | 19.01 | 3000 | 0.0291 | 0.9341 | 0.9395 | 0.9368 | 347 | 0.9570 | 0.9625 | 0.9598 | 347 | 0.9885 | 0.9885 | 0.9885 | 347 | 0.9253 | 0.9280 | 0.9266 | 347 | 0.9512 | 0.9546 | 0.9529 | 0.9961 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.8.0+cu101 - Datasets 1.18.4.dev0 - Tokenizers 0.11.6
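The SROIE card above documents only training. A hedged usage sketch for Theivaprakasham/layoutlmv2-finetuned-sroie follows; it assumes a hypothetical scanned receipt `receipt.png` and that detectron2 and pytesseract are installed, since the base LayoutLMv2 processor runs OCR on the image by default:

```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# The base processor applies Tesseract OCR to the receipt image by default
# (requires pytesseract; the model itself requires detectron2).
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "Theivaprakasham/layoutlmv2-finetuned-sroie"
)

image = Image.open("receipt.png").convert("RGB")  # hypothetical scanned receipt
encoding = processor(image, return_tensors="pt", truncation=True)
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()

# Map each word-piece token to its predicted SROIE field label.
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred])
```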
cnu/distilbert-base-uncased-finetuned-cola
cnu
2022-03-02T07:30:35Z
4
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5474713423103301 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8651 - Matthews Correlation: 0.5475 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5233 | 1.0 | 535 | 0.5353 | 0.4004 | | 0.3497 | 2.0 | 1070 | 0.5165 | 0.5076 | | 0.2386 | 3.0 | 1605 | 0.6661 | 0.5161 | | 0.1745 | 4.0 | 2140 | 0.7730 | 0.5406 | | 0.1268 | 5.0 | 2675 | 0.8651 | 0.5475 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2 - Datasets 1.18.3 - Tokenizers 0.11.6
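For cnu/distilbert-base-uncased-finetuned-cola the card again stops at training metrics; a sketch of direct (non-pipeline) scoring, assuming the default LABEL_0/LABEL_1 label mapping, might look like:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cnu/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a single sentence for grammatical acceptability (CoLA).
inputs = tokenizer("They was very happy about the results.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()
print({model.config.id2label[i]: round(float(p), 4) for i, p in enumerate(probs)})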
anan0329/wav2vec2-base-timit-demo-colab
anan0329
2022-03-02T07:25:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
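No usage example is given for anan0329/wav2vec2-base-timit-demo-colab; a minimal ASR sketch, assuming a hypothetical 16 kHz mono recording `speech.wav` and ffmpeg available for decoding, could be:

```python
from transformers import pipeline

# wav2vec2-base expects 16 kHz audio; file inputs are decoded and resampled
# by the pipeline via ffmpeg.
asr = pipeline(
    "automatic-speech-recognition",
    model="anan0329/wav2vec2-base-timit-demo-colab",
)
print(asr("speech.wav")["text"])
```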
armageddon/electra-base-squad2-covid-qa-deepset
armageddon
2022-03-02T06:38:05Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:covid_qa_deepset", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: cc-by-4.0 tags: - generated_from_trainer datasets: - covid_qa_deepset model-index: - name: electra-base-squad2-covid-qa-deepset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-squad2-covid-qa-deepset This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on the covid_qa_deepset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.6
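The card for armageddon/electra-base-squad2-covid-qa-deepset omits usage; an extractive-QA sketch with a placeholder question and context might look like:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="armageddon/electra-base-squad2-covid-qa-deepset",
)

# Illustrative placeholder context; the model extracts an answer span from it.
result = qa(
    question="What virus causes COVID-19?",
    context="COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2).",
)
print(result["answer"], result["score"])
```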
csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01
csukuangfj
2022-03-02T06:00:09Z
0
0
k2
[ "k2", "icefall", "transducer", "aishell", "ASR", "stateless transducer", "PyTorch", "en", "dataset:aishell", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: "en" tags: - icefall - k2 - transducer - aishell - ASR - stateless transducer - PyTorch license: "apache-2.0" datasets: - aishell metrics: - WER --- # Introduction This repo contains pre-trained model using <https://github.com/k2-fsa/icefall/pull/219>. It is trained on [AIShell](https://www.openslr.org/33/) dataset using modified transducer from [optimized_transducer](https://github.com/csukuangfj/optimized_transducer). ## How to clone this repo ``` sudo apt-get install git-lfs git clone https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01 cd icefall-aishell-transducer-stateless-modified-2022-03-01 git lfs pull ``` **Catuion**: You have to run `git lfs pull`. Otherwise, you will be SAD later. The model in this repo is trained using the commit `TODO`. You can use ``` git clone https://github.com/k2-fsa/icefall cd icefall git checkout TODO ``` to download `icefall`. You can find the model information by visiting <https://github.com/k2-fsa/icefall/blob/TODO/egs/aishell/ASR/transducer_stateless_modified/train.py#L232>. In short, the encoder is a Conformer model with 8 heads, 12 encoder layers, 512-dim attention, 2048-dim feedforward; the decoder contains a 512-dim embedding layer and a Conv1d with kernel size 2. The decoder architecture is modified from [Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419). A Conv1d layer is placed right after the input embedding layer. ----- ## Description This repo provides pre-trained transducer Conformer model for the AIShell dataset using [icefall][icefall]. There are no RNNs in the decoder. The decoder is stateless and contains only an embedding layer and a Conv1d. The commands for training are: ```bash cd egs/aishell/ASR ./prepare.sh --stop-stage 6 export CUDA_VISIBLE_DEVICES="0,1,2" ./transducer_stateless_modified/train.py \ --world-size 3 \ --num-epochs 90 \ --start-epoch 0 \ --exp-dir transducer_stateless_modified/exp-4 \ --max-duration 250 \ --lr-factor 2.0 \ --context-size 2 \ --modified-transducer-prob 0.25 ``` The tensorboard training log can be found at <https://tensorboard.dev/experiment/C27M8YxRQCa1t2XglTqlWg> The commands for decoding are ```bash # greedy search for epoch in 64; do for avg in 33; do ./transducer_stateless_modified-2/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_modified/exp-4 \ --max-duration 100 \ --context-size 2 \ --decoding-method greedy_search \ --max-sym-per-frame 1 done done # modified beam search for epoch in 64; do for avg in 33; do ./transducer_stateless_modified/decode.py \ --epoch $epoch \ --avg $avg \ --exp-dir transducer_stateless_modified/exp-4 \ --max-duration 100 \ --context-size 2 \ --decoding-method modified_beam_search \ --beam-size 4 done done ``` You can find the decoding log for the above command in this repo (in the folder [log][log]). 
The WER for the test dataset is: | | test |comment | |------------------------|------|----------------------------------------------------------------| | greedy search | 5.22 |--epoch 64, --avg 33, --max-duration 100, --max-sym-per-frame 1 | | modified beam search | 5.02 |--epoch 64, --avg 33, --max-duration 100 --beam-size 4 | # File description - [log][log], this directory contains the decoding log and decoding results - [test_wavs][test_wavs], this directory contains wave files for testing the pre-trained model - [data][data], this directory contains files generated by [prepare.sh][prepare] - [exp][exp], this directory contains only one file: `pretrained.pt` `exp/pretrained.pt` is generated by the following command: ```bash epoch=64 avg=33 ./transducer_stateless_modified/export.py \ --exp-dir ./transducer_stateless_modified/exp-4 \ --lang-dir ./data/lang_char \ --epoch $epoch \ --avg $avg ``` **HINT**: To use `pretrained.pt` to compute the WER for the `test` dataset, just do the following: ```bash cp icefall-aishell-transducer-stateless-modified-2022-03-01/exp/pretrained.pt \ /path/to/icefall/egs/aishell/ASR/transducer_stateless_modified/exp/epoch-999.pt ``` and pass `--epoch 999 --avg 1` to `transducer_stateless_modified/decode.py`. [icefall]: https://github.com/k2-fsa/icefall [prepare]: https://github.com/k2-fsa/icefall/blob/master/egs/aishell/ASR/prepare.sh [exp]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/exp [data]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/data [test_wavs]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/test_wavs [log]: https://huggingface.co/csukuangfj/icefall-aishell-transducer-stateless-modified-2022-03-01/tree/main/log
BigSalmon/GPTNeo350MInformalToFormalLincoln6
BigSalmon
2022-03-02T02:29:46Z
24
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
Trained on this model: https://huggingface.co/xhyi/PT_GPTNEO350_ATG/tree/main ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln6") model = AutoModelForCausalLM.from_pretrained("BigSalmon/GPTNeo350MInformalToFormalLincoln6") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel. Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle. Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. 
https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ```
BigSalmon/InformalToFormalLincoln24
BigSalmon
2022-03-02T01:11:55Z
13
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln24") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln24") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel. Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle. Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ```
BigSalmon/InformalToFormalLincoln22
BigSalmon
2022-03-01T22:38:59Z
10
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln22") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln22") ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` - declining viewership facing the nba. - does not have to be this way. - in fact, many solutions exist. - the four point line would surely draw in eyes. Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership. *** - ``` ``` infill: chrome extensions [MASK] accomplish everyday tasks. Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks. infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices. infill: ``` ``` Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel. Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle. Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? 
https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ```
Kevincp560/bart-large-finetuned-pubmed
Kevincp560
2022-03-01T18:35:04Z
7
1
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:pub_med_summarization_dataset", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - pub_med_summarization_dataset metrics: - rouge model-index: - name: bart-large-finetuned-pubmed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pub_med_summarization_dataset type: pub_med_summarization_dataset args: document metrics: - name: Rouge1 type: rouge value: 10.946 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.8135 - Rouge1: 10.946 - Rouge2: 5.0933 - Rougel: 9.5608 - Rougelsum: 10.4259 - Gen Len: 19.0495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:| | 2.0861 | 1.0 | 4000 | 1.8909 | 8.7344 | 3.6919 | 7.8804 | 8.3305 | 20.0 | | 1.8996 | 2.0 | 8000 | 1.8261 | 10.2124 | 4.6212 | 8.9842 | 9.7417 | 17.632 | | 1.7459 | 3.0 | 12000 | 1.8160 | 9.4933 | 4.4117 | 8.3977 | 9.0758 | 16.4775 | | 1.6258 | 4.0 | 16000 | 1.8136 | 10.8248 | 5.0335 | 9.4286 | 10.3123 | 18.724 | | 1.5214 | 5.0 | 20000 | 1.8135 | 10.946 | 5.0933 | 9.5608 | 10.4259 | 19.0495 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
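Kevincp560/bart-large-finetuned-pubmed is a summarization checkpoint but its card shows no inference code; a hedged sketch, with a placeholder article body, could be:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Kevincp560/bart-large-finetuned-pubmed",
)

# Placeholder PubMed-style text; real inputs would be full article bodies.
article = (
    "Background: We studied the effect of a low-sodium diet on blood pressure "
    "in 200 adults over 12 weeks. Methods: Participants were randomized to "
    "intervention or control. Results: Mean systolic blood pressure fell by "
    "6 mmHg in the intervention arm."
)
print(summarizer(article, max_length=128, min_length=32, do_sample=False)[0]["summary_text"])
```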
davanstrien/vit_flyswot_test
davanstrien
2022-03-01T18:28:19Z
70
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:image_folder", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - image_folder metrics: - f1 model-index: - name: vit_flyswot_test results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: F1 type: f1 value: 0.849172221610369 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_flyswot_test This model is a fine-tuned version of [](https://huggingface.co/) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.4777 - F1: 0.8492 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 666 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 52 | 1.2007 | 0.3533 | | No log | 2.0 | 104 | 1.0037 | 0.5525 | | No log | 3.0 | 156 | 0.8301 | 0.6318 | | No log | 4.0 | 208 | 0.7224 | 0.6946 | | No log | 5.0 | 260 | 0.7298 | 0.7145 | | No log | 6.0 | 312 | 0.6328 | 0.7729 | | No log | 7.0 | 364 | 0.6010 | 0.7992 | | No log | 8.0 | 416 | 0.5174 | 0.8364 | | No log | 9.0 | 468 | 0.5084 | 0.8479 | | 0.6372 | 10.0 | 520 | 0.4777 | 0.8492 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
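davanstrien/vit_flyswot_test likewise lacks an inference example; a sketch assuming a hypothetical local image `page.jpg` could be (the card does not document the label set, so the returned labels come from the checkpoint's own mapping):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="davanstrien/vit_flyswot_test",
)

# "page.jpg" is a hypothetical input file; labels come from the checkpoint's
# id2label mapping, which the card does not document.
print(classifier("page.jpg"))
```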
SuperAI2-Machima/mt5-small-thai-qg-v2
SuperAI2-Machima
2022-03-01T14:53:52Z
26
2
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "question-generation", "dataset:NSC2018", "dataset:wiki-documents-nsc", "dataset:ThaiQACorpus-DevelopmentDataset", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: - question-generation language: - thai - th datasets: - NSC2018 - wiki-documents-nsc - ThaiQACorpus-DevelopmentDataset widget: - text: "โรงเรียนบ้านขุนด่าน ตั้งอยู่ที่ขุนด่าน จ.นครนายก </s>" example_title: "Example 01" - text: "พลเอก ประยุทธ์ จันทร์โอชา (เกิด 21 มีนาคม พ.ศ. 2497) ชื่อเล่น ตู่ เป็นนักการเมืองและอดีตนายทหารบกชาวไทย </s>" example_title: "Example 02" - text: "วันที่ 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น </s>" example_title: "Example 03" - text: "กรุงเทพมหานคร เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. </s>" example_title: "Example 04" license: mit --- [SuperAI Engineer Season 2](https://superai.aiat.or.th/) , [Machima](https://machchima.superai.me/) [Google's mT5](https://github.com/google-research/multilingual-t5) , [Pollawat](https://huggingface.co/Pollawat/mt5-small-thai-qg) ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = T5ForConditionalGeneration.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2').to(device) tokenizer = T5Tokenizer.from_pretrained('SuperAI2-Machima/mt5-small-thai-qg-v2') source_text = 'บุกยึดไม้เถื่อน อดีต ส.ส.บุรีรัมย์ เตรียมสร้างคฤหาสน์ทรงไทย 1 กันยายน 2550 12:00 น. ตำรวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่า 80 แผ่น' print('Predicted Summary Text : ') tokenized_text = tokenizer.encode(source_text, return_tensors="pt").to(device) summary_ids = model.generate(tokenized_text, num_beams=4, no_repeat_ngram_size=2, max_length=50, early_stopping=True) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(output) #Predicted Summary Text : #answer: 80 แผ่น question: ตํารวจภูธรจ.บุรีรัมย์บุกตรวจยึดไม้แปรรูปหวงห้ามกว่ากี่แผ่น ```
ali2066/correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
ali2066
2022-03-01T14:45:44Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3343 - Precision: 0.1651 - Recall: 0.3039 - F1: 0.2140 - Accuracy: 0.8493 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.4801 | 0.0352 | 0.0591 | 0.0441 | 0.7521 | | No log | 2.0 | 60 | 0.3795 | 0.0355 | 0.0795 | 0.0491 | 0.8020 | | No log | 3.0 | 90 | 0.3359 | 0.0591 | 0.1294 | 0.0812 | 0.8334 | | No log | 4.0 | 120 | 0.3205 | 0.0785 | 0.1534 | 0.1039 | 0.8486 | | No log | 5.0 | 150 | 0.3144 | 0.0853 | 0.1571 | 0.1105 | 0.8516 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
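The ali2066 token-classification cards in this batch (this one and the variants that follow) report only training metrics; a usage sketch for this checkpoint, noting that the label set is undocumented, might be:

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ali2066/correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47",
    aggregation_strategy="simple",  # merge word pieces into spans
)

# Illustrative input; entity names depend on the checkpoint's id2label mapping,
# which the card does not document.
print(tagger("The proposed policy was criticized by several editorial boards."))
```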
ali2066/correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
ali2066
2022-03-01T14:43:43Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1206 - Precision: 0.0637 - Recall: 0.0080 - F1: 0.0141 - Accuracy: 0.9707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 15 | 0.1222 | 0.12 | 0.0139 | 0.0249 | 0.9736 | | No log | 2.0 | 30 | 0.1159 | 0.12 | 0.0139 | 0.0249 | 0.9736 | | No log | 3.0 | 45 | 0.1082 | 0.12 | 0.0139 | 0.0249 | 0.9736 | | No log | 4.0 | 60 | 0.1042 | 0.12 | 0.0139 | 0.0249 | 0.9736 | | No log | 5.0 | 75 | 0.1029 | 0.12 | 0.0139 | 0.0249 | 0.9736 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
ali2066
2022-03-01T14:42:27Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3097 - Precision: 0.2769 - Recall: 0.4391 - F1: 0.3396 - Accuracy: 0.8878 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 11 | 0.4573 | 0.0094 | 0.0027 | 0.0042 | 0.7702 | | No log | 2.0 | 22 | 0.3660 | 0.1706 | 0.3253 | 0.2239 | 0.8516 | | No log | 3.0 | 33 | 0.3096 | 0.2339 | 0.408 | 0.2974 | 0.8827 | | No log | 4.0 | 44 | 0.2868 | 0.2963 | 0.4693 | 0.3633 | 0.8928 | | No log | 5.0 | 55 | 0.2798 | 0.3141 | 0.48 | 0.3797 | 0.8960 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
ali2066
2022-03-01T14:39:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2876 - Precision: 0.2345 - Recall: 0.4281 - F1: 0.3030 - Accuracy: 0.8728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.3907 | 0.0433 | 0.0824 | 0.0568 | 0.7626 | | No log | 2.0 | 60 | 0.3046 | 0.2302 | 0.4095 | 0.2947 | 0.8598 | | No log | 3.0 | 90 | 0.2945 | 0.2084 | 0.4095 | 0.2762 | 0.8668 | | No log | 4.0 | 120 | 0.2687 | 0.2847 | 0.4607 | 0.3519 | 0.8761 | | No log | 5.0 | 150 | 0.2643 | 0.2779 | 0.4444 | 0.3420 | 0.8788 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
ali2066
2022-03-01T14:33:46Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2663 - Precision: 0.3644 - Recall: 0.4985 - F1: 0.4210 - Accuracy: 0.8997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 11 | 0.5174 | 0.0120 | 0.0061 | 0.0081 | 0.6997 | | No log | 2.0 | 22 | 0.4029 | 0.1145 | 0.3098 | 0.1672 | 0.8265 | | No log | 3.0 | 33 | 0.3604 | 0.2539 | 0.4448 | 0.3233 | 0.8632 | | No log | 4.0 | 44 | 0.3449 | 0.2992 | 0.4755 | 0.3673 | 0.8704 | | No log | 5.0 | 55 | 0.3403 | 0.3340 | 0.4816 | 0.3945 | 0.8760 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39
ali2066
2022-03-01T14:11:40Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5867 - Precision: 0.0119 - Recall: 0.0116 - F1: 0.0118 - Accuracy: 0.6976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 10 | 0.5730 | 0.0952 | 0.0270 | 0.0421 | 0.7381 | | No log | 2.0 | 20 | 0.5755 | 0.0213 | 0.0135 | 0.0165 | 0.7388 | | No log | 3.0 | 30 | 0.5635 | 0.0196 | 0.0135 | 0.016 | 0.7416 | | No log | 4.0 | 40 | 0.5549 | 0.0392 | 0.0270 | 0.032 | 0.7429 | | No log | 5.0 | 50 | 0.5530 | 0.0357 | 0.0270 | 0.0308 | 0.7438 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39
ali2066
2022-03-01T14:05:57Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_02_39 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2903 - Precision: 0.2440 - Recall: 0.4465 - F1: 0.3155 - Accuracy: 0.8706 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.4378 | 0.0463 | 0.1136 | 0.0658 | 0.7742 | | No log | 2.0 | 60 | 0.3739 | 0.1472 | 0.3756 | 0.2115 | 0.8284 | | No log | 3.0 | 90 | 0.3422 | 0.1865 | 0.4330 | 0.2607 | 0.8374 | | No log | 4.0 | 120 | 0.3286 | 0.2243 | 0.4833 | 0.3064 | 0.8438 | | No log | 5.0 | 150 | 0.3239 | 0.2356 | 0.4809 | 0.3163 | 0.8490 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_40_24
ali2066
2022-03-01T13:41:28Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_40_24 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-14_40_24 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3067 - Precision: 0.2871 - Recall: 0.4433 - F1: 0.3485 - Accuracy: 0.8906 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 11 | 0.4768 | 0.0 | 0.0 | 0.0 | 0.7546 | | No log | 2.0 | 22 | 0.3665 | 0.1610 | 0.3211 | 0.2145 | 0.8487 | | No log | 3.0 | 33 | 0.3010 | 0.1994 | 0.3690 | 0.2589 | 0.8868 | | No log | 4.0 | 44 | 0.2748 | 0.2839 | 0.4479 | 0.3475 | 0.9037 | | No log | 5.0 | 55 | 0.2670 | 0.3104 | 0.4704 | 0.3740 | 0.9083 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35
ali2066
2022-03-01T13:39:36Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-14_37_35 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3190 - Precision: 0.1194 - Recall: 0.2563 - F1: 0.1629 - Accuracy: 0.8546 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.4963 | 0.0223 | 0.0562 | 0.0319 | 0.7461 | | No log | 2.0 | 60 | 0.4089 | 0.0617 | 0.1359 | 0.0849 | 0.8093 | | No log | 3.0 | 90 | 0.3919 | 0.1053 | 0.2101 | 0.1403 | 0.8219 | | No log | 4.0 | 120 | 0.3787 | 0.1202 | 0.2482 | 0.1619 | 0.8270 | | No log | 5.0 | 150 | 0.3745 | 0.1171 | 0.2391 | 0.1572 | 0.8311 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
coastalcph/fairlex-fscs-minilm
coastalcph
2022-03-01T13:36:58Z
14
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "legal", "fairlex", "de", "fr", "it", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - de - fr - it pipeline_tag: fill-mask license: cc-by-nc-sa-4.0 tags: - legal - fairlex widget: - text: "Aus seinem damaligen strafbaren Verhalten resultierte eine Forderung der Nachlassverwaltung eines <mask>, worüber eine aussergerichtliche Vereinbarung über Fr. 500'000." - text: " Elle avait pour but social les <mask> dans le domaine des changes, en particulier l'exploitation d'une plateforme internet." - text: "Il Pretore ha accolto la petizione con sentenza 16 luglio 2015, accordando all'attore l'importo <mask>, con interessi di mora a partire dalla notifica del precetto esecutivo, e ha rigettato in tale misura l'opposizione interposta a quest'ultimo." --- # FairLex: A multilingual benchmark for evaluating fairness in legal text processing We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. --- Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. --- ## Pre-training details For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC). We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads. We warm-start all models from the public MiniLMv2 (Wang et al., 2021), using the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC). ## Models list | Model name | Training corpora | Language | |-----------------------------------|------------------|--------------------| | `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` | | `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` | | `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] | | `coastalcph/fairlex-cail-minilm` | CAIL | `zh` | ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-fscs-minilm") model = AutoModel.from_pretrained("coastalcph/fairlex-fscs-minilm") ``` ## Evaluation on downstream tasks Consider the experiments in the article: _Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._ ## Author - Publication ``` @inproceedings{chalkidis-2022-fairlex, author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders}, title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics}, year={2022}, address={Dublin, Ireland} } ``` Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
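As an illustrative addition to the card's loading snippet (not from the original card): a fill-mask sketch using a shortened version of the card's French widget example. The repo id is the one under which this record is published; the top-5 predictions shown are simply the pipeline default.

```python
from transformers import pipeline

# Hedged sketch: fill the <mask> token (XLM-R style) in a French legal sentence.
fill = pipeline("fill-mask", model="coastalcph/fairlex-fscs-minilm")
text = "Elle avait pour but social les <mask> dans le domaine des changes."
for pred in fill(text):
    print(pred["token_str"], round(pred["score"], 3))
```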
ali2066/distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58
ali2066
2022-03-01T13:33:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2572 - Precision: 0.3363 - Recall: 0.5110 - F1: 0.4057 - Accuracy: 0.8931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.3976 | 0.1405 | 0.3058 | 0.1925 | 0.7921 | | No log | 2.0 | 60 | 0.3511 | 0.2360 | 0.4038 | 0.2979 | 0.8260 | | No log | 3.0 | 90 | 0.3595 | 0.1863 | 0.3827 | 0.2506 | 0.8211 | | No log | 4.0 | 120 | 0.3591 | 0.2144 | 0.4288 | 0.2859 | 0.8299 | | No log | 5.0 | 150 | 0.3605 | 0.1989 | 0.4212 | 0.2702 | 0.8343 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25
ali2066
2022-03-01T13:24:47Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2698 - Precision: 0.3321 - Recall: 0.5265 - F1: 0.4073 - Accuracy: 0.8942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.3314 | 0.1627 | 0.3746 | 0.2269 | 0.8419 | | No log | 2.0 | 60 | 0.2957 | 0.2887 | 0.4841 | 0.3617 | 0.8592 | | No log | 3.0 | 90 | 0.2905 | 0.2429 | 0.5141 | 0.3299 | 0.8651 | | No log | 4.0 | 120 | 0.2759 | 0.3137 | 0.5565 | 0.4013 | 0.8787 | | No log | 5.0 | 150 | 0.2977 | 0.3116 | 0.5565 | 0.3995 | 0.8796 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
spy24/autonlp-US-to-UK-604417040
spy24
2022-03-01T13:16:47Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autonlp", "unk", "dataset:spy24/autonlp-data-US-to-UK", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - spy24/autonlp-data-US-to-UK co2_eq_emissions: 3.3271667948644614 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 604417040 - CO2 Emissions (in grams): 3.3271667948644614 ## Validation Metrics - Loss: 1.919085144996643 - Rouge1: 39.2808 - Rouge2: 4.905 - RougeL: 39.113 - RougeLsum: 39.1463 - Gen Len: 3.4611 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-US-to-UK-604417040 ```
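Not in the original card: besides the Inference API call above, the checkpoint can presumably also be run locally. The sketch below assumes the generic text2text-generation pipeline works for this T5-based AutoNLP model; the input sentence is made up.

```python
from transformers import pipeline

# Assumption: the AutoNLP seq2seq checkpoint loads with the generic text2text pipeline.
converter = pipeline("text2text-generation", model="spy24/autonlp-US-to-UK-604417040")
print(converter("I love the color of this aluminum airplane.")[0]["generated_text"])
```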
nickmuchi/vit-finetuned-cats-dogs
nickmuchi
2022-03-01T13:15:13Z
132
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy widget: - src: https://cdn.pixabay.com/photo/2021/09/19/12/19/animal-6637774_1280.jpg example_title: Dog - src: https://cdn.pixabay.com/photo/2017/02/20/18/03/cat-2083492_1280.jpg example_title: Cat model-index: - name: vit-finetuned-cats-dogs results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9971014261245728 --- # vit-finetuned-cats-dogs Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### cat ![cat](images/cat.jpg) #### dog ![dog](images/dog.jpg)
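An illustrative local-inference sketch (not from the original card), reusing one of the widget image URLs above. It assumes the repo ships the usual ViT image processor configuration and that Pillow plus an internet connection are available.

```python
from transformers import pipeline

# Hedged sketch: classify one of the card's widget images by URL.
classifier = pipeline("image-classification", model="nickmuchi/vit-finetuned-cats-dogs")
url = "https://cdn.pixabay.com/photo/2017/02/20/18/03/cat-2083492_1280.jpg"
for pred in classifier(url):
    print(pred["label"], round(pred["score"], 4))
```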
coastalcph/fairlex-cail-minilm
coastalcph
2022-03-01T13:12:22Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "legal", "fairlex", "zh", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: zh pipeline_tag: fill-mask license: cc-by-nc-sa-4.0 tags: - legal - fairlex widget: - text: "上述事实,被告人在庭审过程中亦无异议,且有<mask>的陈述,现场辨认笔录及照片,被告人的前科刑事判决书,释放证明材料,抓获经过,被告人的供述及身份证明等证据证实,足以认定。" --- # FairLex: A multilingual benchmark for evaluating fairness in legal text processing We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP. --- Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland. --- ## Pre-training details For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC). We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads. We warm-start all models from the public MiniLMv2 (Wang et al., 2021), using the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS) and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC). ## Models list | Model name | Training corpora | Language | |-----------------------------------|------------------|--------------------| | `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` | | `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` | | `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] | | `coastalcph/fairlex-cail-minilm` | CAIL | `zh` | ## Load Pretrained Model ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-cail-minilm") model = AutoModel.from_pretrained("coastalcph/fairlex-cail-minilm") ``` ## Evaluation on downstream tasks Consider the experiments in the article: _Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. Fairlex: A multilingual benchmark for evaluating fairness in legal text processing. 
In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._ ## Author - Publication ``` @inproceedings{chalkidis-2022-fairlex, author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders}, title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing}, booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics}, year={2022}, address={Dublin, Ireland} } ``` Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io) | Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
ali2066/twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11
ali2066
2022-03-01T13:03:25Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4118 - Accuracy: 0.8446 - F1: 0.8968 - Precision: 0.8740 - Recall: 0.9207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 390 | 0.3532 | 0.8451 | 0.8990 | 0.8997 | 0.8983 | | 0.4111 | 2.0 | 780 | 0.3381 | 0.8561 | 0.9080 | 0.8913 | 0.9253 | | 0.3031 | 3.0 | 1170 | 0.3490 | 0.8537 | 0.9034 | 0.9152 | 0.8919 | | 0.2408 | 4.0 | 1560 | 0.3562 | 0.8671 | 0.9148 | 0.9 | 0.9300 | | 0.2408 | 5.0 | 1950 | 0.3725 | 0.8659 | 0.9131 | 0.9074 | 0.9189 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
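For illustration only (not part of the original card): a minimal sequence-classification sketch. The repo id comes from this record; the input sentence is made up, and the returned label names are whatever the fine-tuned head's `id2label` mapping contains, which the card does not document.

```python
from transformers import pipeline

# Hypothetical input; labels come from the model's own id2label mapping.
clf = pipeline(
    "text-classification",
    model="ali2066/twitter_RoBERTa_base_sentence_itr0_1e-05_all_01_03_2022-13_53_11",
)
print(clf("This point is backed by two independent studies."))
```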
asini/wav2vec2-timit-demo
asini
2022-03-01T10:37:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-timit-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-timit-demo This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4847 - Wer: 0.3462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.487 | 4.0 | 500 | 1.3466 | 1.0153 | | 0.6134 | 8.0 | 1000 | 0.4807 | 0.4538 | | 0.2214 | 12.0 | 1500 | 0.4684 | 0.3984 | | 0.1233 | 16.0 | 2000 | 0.5070 | 0.3779 | | 0.0847 | 20.0 | 2500 | 0.4965 | 0.3705 | | 0.0611 | 24.0 | 3000 | 0.4881 | 0.3535 | | 0.0464 | 28.0 | 3500 | 0.4847 | 0.3462 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.10.3
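Not part of the original card: a minimal transcription sketch with the ASR pipeline. `sample.wav` is a placeholder for a local 16 kHz recording, and decoding audio from disk requires ffmpeg to be installed.

```python
from transformers import pipeline

# Hedged sketch: transcribe a local audio file (placeholder path).
asr = pipeline("automatic-speech-recognition", model="asini/wav2vec2-timit-demo")
print(asr("sample.wav")["text"])
```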
huggingtweets/berniesanders-dril
huggingtweets
2022-03-01T10:13:41Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wint & Bernie Sanders</div> <div style="text-align: center; font-size: 14px;">@berniesanders-dril</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wint & Bernie Sanders. | Data | wint | Bernie Sanders | | --- | --- | --- | | Tweets downloaded | 3229 | 3250 | | Retweets | 473 | 429 | | Short tweets | 300 | 10 | | Tweets kept | 2456 | 2811 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/yw6378l1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-dril's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pydufi9/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/berniesanders-dril') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/berniesanders-cnn-dril
huggingtweets
2022-03-01T09:43:27Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/berniesanders-cnn-dril/1646127802129/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1097820307388334080/9ddg5F6v_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Bernie Sanders & wint & CNN</div> <div style="text-align: center; font-size: 14px;">@berniesanders-cnn-dril</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Bernie Sanders & wint & CNN. | Data | Bernie Sanders | wint | CNN | | --- | --- | --- | --- | | Tweets downloaded | 3250 | 3229 | 3250 | | Retweets | 429 | 473 | 30 | | Short tweets | 10 | 300 | 6 | | Tweets kept | 2811 | 2456 | 3214 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yapgpjj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @berniesanders-cnn-dril's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1hmm651a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/berniesanders-cnn-dril') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
inovex/multi2convai-corona-en-bert
inovex
2022-03-01T09:20:04Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification - pytorch - transformers widget: - text: "Do I need to wear a mask?" license: mit language: en --- # Multi2ConvAI-Corona: finetuned Bert for English This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-en-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-en-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
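As an illustrative extension of the loading snippet above (not from the original card): turning the classifier's logits into an intent label. The label names are not listed in the card, so the sketch simply reads them back from the model's own `id2label` config; the question is the card's widget example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "inovex/multi2convai-corona-en-bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The widget question from the card; the intent label set is defined by the checkpoint.
inputs = tokenizer("Do I need to wear a mask?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```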
inovex/multi2convai-corona-de-bert
inovex
2022-03-01T09:18:20Z
4
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification - pytorch - transformers widget: - text: "Muss ich eine Maske tragen?" license: mit language: de --- # Multi2ConvAI-Corona: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Corona (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-corona-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-corona-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
hfl/chinese-roberta-wwm-ext-large
hfl
2022-03-01T09:15:16Z
5,610
196
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - zh tags: - bert license: "apache-2.0" --- # Please use 'Bert' related functions to load this model! ## Chinese BERT with Whole Word Masking To further accelerate Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**. **[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)** Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu This repository is developed based on: https://github.com/google-research/bert You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese MacBERT: https://github.com/ymcui/MacBERT - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find the technical report or resources useful, please cite the following technical report in your paper. - Primary: https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ``` - Secondary: https://arxiv.org/abs/1906.08101 ``` @article{chinese-bert-wwm, title={Pre-Training with Whole Word Masking for Chinese BERT}, author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping}, journal={arXiv preprint arXiv:1906.08101}, year={2019} } ```
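To make the note at the top of the card concrete (an added sketch, not from the original card): loading the checkpoint with the BERT classes rather than the RoBERTa ones. The input sentence is illustrative only.

```python
from transformers import BertTokenizer, BertModel

# Per the card, use the BERT classes even though the repo name says "roberta".
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

inputs = tokenizer("使用整词掩码训练的中文预训练模型", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1024) for the large model
```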
inovex/multi2convai-quality-it-mbert
inovex
2022-03-01T09:02:26Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Avviare il programma" license: mit language: it --- # Multi2ConvAI-Quality: finetuned MBert for Italian This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: Italian (it) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-it-mbert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-it-mbert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-quality-fr-mbert
inovex
2022-03-01T09:01:51Z
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "fr", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Lancer le programme" license: mit language: fr --- # Multi2ConvAI-Quality: finetuned MBert for French This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: French (fr) - model type: finetuned MBert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-fr-mbert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-fr-mbert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-quality-de-bert
inovex
2022-03-01T09:00:15Z
5
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Starte das Programm" license: mit language: de --- # Multi2ConvAI-Quality: finetuned Bert for German This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: finetuned Bert ## How to run Requires: - Huggingface transformers ### Run with Huggingface Transformers ````python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("inovex/multi2convai-quality-de-bert") model = AutoModelForSequenceClassification.from_pretrained("inovex/multi2convai-quality-de-bert") ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
gzomer/clip-multilingual
gzomer
2022-03-01T08:50:45Z
0
0
null
[ "clip", "vision", "text", "multilingual", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - clip - vision - text language: multilingual license: mit --- # MultiLingual CLIP Multilingual CLIP is a pre-trained model which can be used for multilingual semantic search and zero-shot image classification in 100 languages. # Model Architecture Multilingual CLIP was built on top of the [OpenAI CLIP](https://github.com/openai/CLIP) model. I have used the same Vision encoder (ResNet 50x4), but replaced their text encoder (Transformer) with a Multilingual Text Encoder ([XLM-Roberta](https://huggingface.co/xlm-roberta-large)) and a configurable number of projection heads, as seen below: ![Model Architecture](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/858/046/datas/gallery.jpg) The model was trained in a distributed fashion on 16 Habana Gaudi Accelerators and with mixed precision in two phases (using the COCO dataset for phase 1 and Google Conceptual Captions for phase 2). The training pipeline was built using PyTorch, PyTorch Lightning, and Distributed Data Parallel. # Datasets Three datasets have been used for building the model. COCO Captions was used for training phase 1 and Google Conceptual Captions was used for training phase 2. The Unsplash dataset was used for testing and inference. ## COCO Captions COCO (Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset. The COCO Captions dataset has around 85,000 image and caption pairs. Run the following to download the dataset: ```bash ./download_coco.sh ``` This dataset was used for the first pre-training phase. ## Google Conceptual Captions Conceptual Captions is a dataset consisting of ~3.3 million images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. Download the dataset's urls/captions from [here](https://storage.cloud.google.com/gcc-data/Train/GCC-training.tsv?_ga=2.191230122.-1896153081.1529438250) and save it to `datasets/googlecc/googlecc.tsv`. The full dataset has over 3 million images, but you can select a subset by loading the `googlecc.tsv` file and saving only the number of rows you want (I have used 1 million images for training). Then run the following commands to download each image in the `googlecc.tsv` file: ```bash npm install node download_build_googlecc.js ``` This dataset was used for the second pre-training phase. ## Unsplash This dataset was used as the test set during inference. Run `python3.8 download_unsplash.py` to download the dataset. 
# Training ![Training phase 1](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/858/047/datas/gallery.jpg) ![Training phase 2](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/858/048/datas/gallery.jpg) ## Setup Create two Habana instances ([AWS EC2 DL1](https://aws.amazon.com/ec2/instance-types/dl1/)) using [Habana® Deep Learning Base AMI (Ubuntu 20.04)](https://aws.amazon.com/marketplace/pp/prodview-fw46rwuxrtfse) Create the PyTorch docker container running: ```bash docker run --name pytorch -td --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.2.0/ubuntu20.04/habanalabs/pytorch-installer-1.10.0:1.2.0-585 ``` Enter the docker image by running: ``` docker exec -it pytorch /bin/bash ``` #### Setup password-less ssh between all connected servers 1. Configure password-less ssh between all nodes: Do the following in all the nodes' docker sessions: ```bash mkdir ~/.ssh cd ~/.ssh ssh-keygen -t rsa -b 4096 ``` Copy id_rsa.pub contents from every node's docker to every other node's docker's ~/.ssh/authorized_keys (all public keys need to be in all hosts' authorized_keys): ```bash cat id_rsa.pub > authorized_keys vi authorized_keys ``` Copy the contents from inside to other systems. Paste all hosts' public keys in all hosts' “authorized_keys” file. 2. On each system: Add all hosts (including itself) to known_hosts. The IP addresses used below are just for illustration: ```bash ssh-keyscan -p 3022 -H $IP1 >> ~/.ssh/known_hosts ssh-keyscan -p 3022 -H $IP2 >> ~/.ssh/known_hosts ``` 3. Change Docker SSH port to 3022 ```bash sed -i 's/#Port 22/Port 3022/g' /etc/ssh/sshd_config sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config service ssh restart ``` [Allow all TCP](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) traffic between the nodes on AWS Clone the git repo: ```bash git clone https://github.com/gzomer/clip-multilingual ``` Create environment: ```bash python3.8 -m venv .env ``` Install requirements: ```bash python3.8 -r requirements.txt ``` Activate environment ```bash source .env/bin/activate ``` ## Training params Learning rate: 1e-3 Batch size: 64 Phase 1 - Epochs: 100 Phase 2 - Epochs: 15 ## Train script arguments ``` --dataset-num-workers Number of workers (default: 8) --dataset-type Dataset type (coco or googlecc) (default: coco) --dataset-dir Dataset dir (default: ./datasets/coco/) --dataset-subset-size Load only a subset of the dataset (useful for debugging) --dataset-train-split Dataset train split (default: 0.8) --train-device Type of device to use (default: hpu) --distributed-num-nodes Number of nodes (machines) (default: 2) --distributed-parallel-devices Number of parallel devices per node (default: 8) --distributed-master-address Master node IP address --distributed-master-port Master node port (default: 12345) --distributed-bucket-cap-mb DDP bucket cap MB (default: 200) --checkpoint-dir Model checkpoint dir (default: ./models) --checkpoint-save-every-n Save every n epochs (default: 1) --checkpoint-load-vision-path Load vision encoder checkpoint --checkpoint-load-text-path Load text encoder checkpoint --model-visual-name Which visual model to use (default: RN50x4) --model-textual-name Which textual model to use (default: xlm-roberta-base) --hyperparam-num-layers Number of layers (default: 3) --hyperparam-lr Model learning 
rate (default: 0.001) --hyperparam-epochs Max epochs (default: 100) --hyperparam-precision Precision (default: 16) --hyperparam-batch-size Batch size (default: 64) --wandb-project W&B project name (default: clip) --wandb-enabled W&B is enabled? (default: True) ``` ## Habana Gaudi - 8 accelerators ### Phase 1 training ```bash python3.8 train.py --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 1 ``` ### Phase 2 training ```bash python3.8 train.py --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 1 --hyperparam-epochs 15 --checkpoint-load-text-path /home/models/text-last.ckpt --checkpoint-load-vision-path /home/models/vision-last.ckpt --checkpoint-dir ./models_phase2 ``` ## Habana Gaudi - 16 accelerators (multi-server training) Change the master IP address based on your instances (use local IP, not public IP). ### Phase 1 training ```bash NODE_RANK=0 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 ``` ```bash NODE_RANK=1 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 ``` ### Phase 2 training ```bash NODE_RANK=0 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 --hyperparam-epochs 10 --checkpoint-load-text-path /home/models/text-last.ckpt --checkpoint-load-vision-path /home/models/vision-last.ckpt --checkpoint-dir ./models_phase2 ``` ```bash NODE_RANK=1 python3.8 train.py --distributed-master-address 172.31.86.231 --train-device hpu --distributed-parallel-devices 8 --distributed-num-nodes 2 --hyperparam-epochs 15 --checkpoint-load-text-path /home/models/text-last.ckpt --checkpoint-load-vision-path /home/models/vision-last.ckpt --checkpoint-dir ./models_phase2 ``` ## Other devices If you don't have access to a Habana Gaudi accelerator yet, you can also train on CPU/GPU, although it will be way slower. To train on CPU, just pass `--train-device=cpu` and on GPU `--train-device=cuda` to the `train.py` script. # Inference ## Loading pre-trained model from Hugging Face HUB ```python from models import create_and_load_from_hub model = create_and_load_from_hub() ``` ## Loading model from local checkpoint ```python from models import MultiLingualCLIP, load_model text_checkpoint_path = '/path/to/text model checkpoint' vision_checkpoint_path = '/path/to/vision model checkpoint' model = MultiLingualCLIP(num_layers=3) load_model(model, vision_checkpoint_path, text_checkpoint_path) ``` ## Generate embeddings Run the following (after downloading Unplash dataset): `python3.8 ./generate_embeddings.py` ## Searching images ```python import numpy as np from search import MultiLingualSearch images_embeddings = np.load('/path/to/images_embeddings') images_data = [...] # List of image info for each row of the embeddings. For instance, it could be a list of urls, filepaths, ids. 
# They will be returned when calling the search function.

semantic_search = MultiLingualSearch(model, images_embeddings, images_data)

results = semantic_search.search('विद्यालय में') # Means "at school"
print(results)
```

```json
[{"image": "https://images.unsplash.com/photo-1557804506-669a67965ba0?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHwxM3x8bWVldGluZ3N8ZW58MHx8fHwxNjQ1NjA2MjQz&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.2461608648300171},
 {"image": "https://images.unsplash.com/photo-1558403194-611308249627?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHwyMXx8cGVvcGxlJTIwd29ya2luZ3xlbnwwfHx8fDE2NDU2MDMyMjE&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.16881239414215088},
 {"image": "https://images.unsplash.com/photo-1531497865144-0464ef8fb9a9?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHw4Nnx8cGVvcGxlJTIwd29ya2luZ3xlbnwwfHx8fDE2NDU2MDY5ODc&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.14744874835014343},
 {"image": "https://images.unsplash.com/photo-1561089489-f13d5e730d72?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHw5MHx8ZWR1Y2F0aW9ufGVufDB8fHx8MTY0NTYwNjk1Nw&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.095176100730896},
 {"image": "https://images.unsplash.com/photo-1580582932707-520aed937b7b?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwyNDg3OTV8MHwxfHNlYXJjaHwxMnx8ZWR1Y2F0aW9ufGVufDB8fHx8MTY0NTYwMzIwMA&ixlib=rb-1.2.1&q=80&w=400", "prob": 0.05218643322587013}]
```
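For intuition on what the returned `prob` values mean: they behave like a softmax over text–image similarity scores. The snippet below is a minimal, self-contained sketch of that idea in plain NumPy; the embeddings and URLs are made up, and the real `MultiLingualSearch` implementation may normalize or score differently.

```python
import numpy as np

def rank_images(text_embedding, image_embeddings, image_urls):
    """Rank images by cosine similarity to a text embedding, softmax-normalized.

    A toy illustration of the scoring idea; not the repository's actual code.
    """
    # L2-normalize so the dot product equals cosine similarity
    text = text_embedding / np.linalg.norm(text_embedding)
    images = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)

    sims = images @ text                        # one similarity score per image
    probs = np.exp(sims) / np.exp(sims).sum()   # softmax over all candidate images

    order = np.argsort(-probs)
    return [{"image": image_urls[i], "prob": float(probs[i])} for i in order]

# Made-up data, just to show the shapes involved (640 matches RN50x4's embedding size)
rng = np.random.default_rng(0)
text_emb = rng.normal(size=640)
image_embs = rng.normal(size=(5, 640))
urls = [f"https://example.com/img_{i}.jpg" for i in range(5)]

print(rank_images(text_emb, image_embs, urls))
```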
armageddon/distilbert-base-uncased-squad2-covid-qa-deepset
armageddon
2022-03-01T08:32:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:covid_qa_deepset", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - covid_qa_deepset model-index: - name: distilbert-base-uncased-squad2-covid-qa-deepset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-squad2-covid-qa-deepset This model is a fine-tuned version of [twmkn9/distilbert-base-uncased-squad2](https://huggingface.co/twmkn9/distilbert-base-uncased-squad2) on the covid_qa_deepset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.6
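The card above does not include a usage snippet. As a rough sketch (not from the model authors), extractive question answering with the 🤗 `pipeline` API could look like this; the question and context are invented examples:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="armageddon/distilbert-base-uncased-squad2-covid-qa-deepset",
)

# Invented example context and question, purely for illustration
context = (
    "Coronaviruses are a large family of viruses. COVID-19 is caused by "
    "SARS-CoV-2, which was first identified in December 2019."
)
result = qa(question="Which virus causes COVID-19?", context=context)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'SARS-CoV-2'}
```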
aasem/wav2vec2-xls-r-300m-Urdu
aasem
2022-03-01T08:28:25Z
5
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- ur
license: mit
library_name: transformers
tags:
- audio
- automatic-speech-recognition
- speech
datasets:
- common_voice
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-Urdu
  results:
  - task:
      type: automatic-speech-recognition
    dataset:
      name: common_voice
      type: common_voice
      args: ur
    metrics:
    - type: wer
      value: 0.2459
    - type: cer
      value: 0.0691
---

Finetuning of [Facebook's 300M model](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice 8.0 Urdu dataset.
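No usage example is given in the card. A minimal transcription sketch with the 🤗 `pipeline` API might look like the following; the audio path is a placeholder, and 16 kHz mono audio is assumed (as is typical for wav2vec 2.0 models):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="aasem/wav2vec2-xls-r-300m-Urdu",
)

# Placeholder path; any 16 kHz mono recording of Urdu speech
result = asr("path/to/urdu_sample.wav")
print(result["text"])
```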
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
ali2066
2022-03-01T04:37:52Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4208 - Accuracy: 0.8283 - F1: 0.8915 - Precision: 0.8487 - Recall: 0.9389 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 390 | 0.4443 | 0.7768 | 0.8589 | 0.8072 | 0.9176 | | 0.4532 | 2.0 | 780 | 0.4603 | 0.8098 | 0.8791 | 0.8302 | 0.9341 | | 0.2608 | 3.0 | 1170 | 0.5284 | 0.8061 | 0.8713 | 0.8567 | 0.8863 | | 0.1577 | 4.0 | 1560 | 0.6398 | 0.8085 | 0.8749 | 0.8472 | 0.9044 | | 0.1577 | 5.0 | 1950 | 0.7089 | 0.8085 | 0.8741 | 0.8516 | 0.8979 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
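Since the card does not show how to call the model, here is a hedged sketch using the 🤗 `pipeline` API; the input sentence is an invented example, and the meaning of the returned labels is not documented in the card:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03",
)

# Invented example sentence; label names depend on the (undocumented) training data
print(classifier("The staff was friendly and the results were explained clearly."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```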
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
ali2066
2022-03-01T03:51:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2899 - Precision: 0.3170 - Recall: 0.5261 - F1: 0.3956 - Accuracy: 0.8799 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 | | No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 | | No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 | | No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 | | No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
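As with the card above, no usage snippet is provided. A minimal token-classification sketch with the 🤗 `pipeline` API could look like this; the sentence is an invented example and the label set is not documented in the card:

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27",
    aggregation_strategy="simple",  # group sub-word tokens into entity spans
)

# Invented example sentence, purely for illustration
for entity in tagger("The meeting with Dr. Smith is scheduled in Berlin next Monday."):
    print(entity)
```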
SoLID/sgd-t5-tod
SoLID
2022-03-01T02:58:46Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "dialogue", "eng", "license:afl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - eng thumbnail: "https://townsquare.media/site/88/files/2020/06/C_Charlotte_RGB_7484.jpg" tags: - dialogue license: afl-3.0 datasets: - schema guided dialogue metrics: - exactness --- Hyperparameters: 1 epoch, max_len_dict including domain classification task, and 1e-5 learning rate
Ayham/albert_roberta_summarization_cnn_dailymail
Ayham
2022-03-01T01:54:22Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer datasets: - cnn_dailymail model-index: - name: albert_roberta_new_summarization_cnn_dailymail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert_roberta_new_summarization_cnn_dailymail This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
armageddon/roberta-large-squad2-covid-qa-deepset
armageddon
2022-03-01T01:48:21Z
13
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:covid_qa_deepset", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - covid_qa_deepset model-index: - name: covid_qa_analysis_roberta-large-squad2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # covid_qa_analysis_roberta-large-squad2 This model is a fine-tuned version of [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) on the covid_qa_deepset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
armageddon/roberta-base-squad2-covid-qa-deepset
armageddon
2022-02-28T22:34:27Z
19
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:covid_qa_deepset", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: cc-by-4.0 tags: - generated_from_trainer datasets: - covid_qa_deepset model-index: - name: covid_qa_analysis_roberta-base-squad2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # covid_qa_analysis_roberta-base-squad2 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the covid_qa_deepset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.0+cu102 - Datasets 1.18.3 - Tokenizers 0.11.6
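As an alternative to the `pipeline` helper, the answer span can also be extracted by hand from the start/end logits. This is a generic sketch for any extractive-QA checkpoint, with an invented question and context:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_id = "armageddon/roberta-base-squad2-covid-qa-deepset"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

# Invented example inputs, purely for illustration
question = "How is the virus primarily transmitted?"
context = (
    "The virus spreads primarily through respiratory droplets produced "
    "when an infected person coughs or sneezes."
)

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode that span
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```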
Msp/classifier
Msp
2022-02-28T22:02:26Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04Z
--- license: apache-2.0 ---
PhilSad/GPTJ2B-SCP
PhilSad
2022-02-28T20:45:35Z
8
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:04Z
GPT-J 6B fine-tuned on SCP articles.

Very experimental.
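No usage example is provided. A minimal text-generation sketch might look like the following; it assumes a CUDA GPU with roughly 12 GB of free memory for fp16 weights, and the SCP-style prompt is an invented example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PhilSad/GPTJ2B-SCP"  # described in the card as a GPT-J 6B fine-tune

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Invented SCP-style prompt, purely for illustration
prompt = "Item #: SCP-9999\n\nObject Class: Euclid\n\nSpecial Containment Procedures:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

output = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```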
Kevincp560/bart-large-cnn-finetuned-pubmed
Kevincp560
2022-02-28T19:04:22Z
5
2
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:pub_med_summarization_dataset", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
--- license: mit tags: - generated_from_trainer datasets: - pub_med_summarization_dataset metrics: - rouge model-index: - name: bart-large-cnn-finetuned-pubmed results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: pub_med_summarization_dataset type: pub_med_summarization_dataset args: document metrics: - name: Rouge1 type: rouge value: 40.4866 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-cnn-finetuned-pubmed This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the pub_med_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 1.8416 - Rouge1: 40.4866 - Rouge2: 16.7472 - Rougel: 24.9831 - Rougelsum: 36.4002 - Gen Len: 142.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 1.932 | 1.0 | 4000 | 1.8110 | 38.1151 | 15.2255 | 23.4286 | 34.2521 | 141.8905 | | 1.7001 | 2.0 | 8000 | 1.7790 | 39.8217 | 16.3042 | 24.649 | 35.831 | 142.0 | | 1.5 | 3.0 | 12000 | 1.7971 | 40.6108 | 17.0446 | 25.1977 | 36.5556 | 141.9865 | | 1.3316 | 4.0 | 16000 | 1.8106 | 40.0466 | 16.4851 | 24.7094 | 36.0998 | 141.9335 | | 1.1996 | 5.0 | 20000 | 1.8416 | 40.4866 | 16.7472 | 24.9831 | 36.4002 | 142.0 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
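The card reports ROUGE scores but no usage snippet. A hedged summarization sketch with the 🤗 `pipeline` API could look like this; the article text is a placeholder, and inputs longer than the model's 1024-token limit are truncated:

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Kevincp560/bart-large-cnn-finetuned-pubmed",
)

# Placeholder text; real PubMed articles are much longer
article = (
    "Background: ... Methods: ... Results: ... Conclusions: ..."
)
summary = summarizer(article, max_length=142, min_length=56, truncation=True)
print(summary[0]["summary_text"])
```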
akhaliq/YOLOP
akhaliq
2022-02-28T16:56:50Z
0
0
null
[ "object-detection", "arxiv:2108.11250", "arxiv:1612.07695", "arxiv:1606.02147", "region:us" ]
object-detection
2022-03-02T23:29:05Z
--- tags: - object-detection --- <div align="left"> ## You Only Look Once for Panoptic ​ Driving Perception > [**You Only Look at Once for Panoptic driving Perception**](https://arxiv.org/abs/2108.11250) > > by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm) > > *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))* --- ### The Illustration of YOLOP ![yolop](pictures/yolop.png) ### Contributions * We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the `BDD100K `dataset. * We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization. ### Results #### Traffic Object Detection Result | Model | Recall(%) | mAP50(%) | Speed(fps) | | -------------- | --------- | -------- | ---------- | | `Multinet` | 81.3 | 60.2 | 8.6 | | `DLT-Net` | 89.4 | 68.4 | 9.3 | | `Faster R-CNN` | 77.2 | 55.6 | 5.3 | | `YOLOv5s` | 86.8 | 77.2 | 82 | | `YOLOP(ours)` | 89.2 | 76.5 | 41 | #### Drivable Area Segmentation Result | Model | mIOU(%) | Speed(fps) | | ------------- | ------- | ---------- | | `Multinet` | 71.6 | 8.6 | | `DLT-Net` | 71.3 | 9.3 | | `PSPNet` | 89.6 | 11.1 | | `YOLOP(ours)` | 91.5 | 41 | #### Lane Detection Result: | Model | mIOU(%) | IOU(%) | | ------------- | ------- | ------ | | `ENet` | 34.12 | 14.64 | | `SCNN` | 35.79 | 15.84 | | `ENet-SAD` | 36.56 | 16.02 | | `YOLOP(ours)` | 70.50 | 26.20 | #### Ablation Studies 1: End-to-end v.s. Step-by-step: | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | | --------------- | --------- | ----- | ------- | ----------- | ------ | | `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 | | `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 | | `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 | | `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 | | `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | #### Ablation Studies 2: Multi-task v.s. 
Single task: | Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) | | --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- | | `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 | | `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 | | `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 | | `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 | **Notes**: - The works we has use for reference including `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf?utm_campaign=affiliate-ir-Optimise%20media%28%20South%20East%20Asia%29%20Pte.%20ltd._156_-99_national_R_all_ACQ_cpa_en&utm_content=&utm_source=%20388939),[code](https://github.com/MarvinTeichmann/MultiNet)),`DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)),`Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf),[code](https://github.com/ShaoqingRen/faster_rcnn)),`YOLOv5s`([code](https://github.com/ultralytics/yolov5)) ,`PSPNet`([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf),[code](https://github.com/hszhao/PSPNet)) ,`ENet`([paper](https://arxiv.org/pdf/1606.02147.pdf),[code](https://github.com/osmr/imgclsmob)) `SCNN`([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322),[code](https://github.com/XingangPan/SCNN)) `SAD-ENet`([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf),[code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful works. - In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others. --- ### Visualization #### Traffic Object Detection Result ![detect result](pictures/detect.png) #### Drivable Area Segmentation Result ![](pictures/da.png) #### Lane Detection Result ![](pictures/ll.png) **Notes**: - The visualization of lane detection result has been post processed by quadratic fitting. 
--- ### Project Structure ```python ├─inference │ ├─images # inference images │ ├─output # inference result ├─lib │ ├─config/default # configuration of training and validation │ ├─core │ │ ├─activations.py # activation function │ │ ├─evaluate.py # calculation of metric │ │ ├─function.py # training and validation of model │ │ ├─general.py #calculation of metric、nms、conversion of data-format、visualization │ │ ├─loss.py # loss function │ │ ├─postprocess.py # postprocess(refine da-seg and ll-seg, unrelated to paper) │ ├─dataset │ │ ├─AutoDriveDataset.py # Superclass dataset,general function │ │ ├─bdd.py # Subclass dataset,specific function │ │ ├─hust.py # Subclass dataset(Campus scene, unrelated to paper) │ │ ├─convect.py │ │ ├─DemoDataset.py # demo dataset(image, video and stream) │ ├─models │ │ ├─YOLOP.py # Setup and Configuration of model │ │ ├─light.py # Model lightweight(unrelated to paper, zwt) │ │ ├─commom.py # calculation module │ ├─utils │ │ ├─augmentations.py # data augumentation │ │ ├─autoanchor.py # auto anchor(k-means) │ │ ├─split_dataset.py # (Campus scene, unrelated to paper) │ │ ├─utils.py # logging、device_select、time_measure、optimizer_select、model_save&initialize 、Distributed training │ ├─run │ │ ├─dataset/training time # Visualization, logging and model_save ├─tools │ │ ├─demo.py # demo(folder、camera) │ │ ├─test.py │ │ ├─train.py ├─toolkits │ │ ├─depoly # Deployment of model ├─weights # Pretraining model ``` --- ### Requirement This codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+: ``` conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch ``` See `requirements.txt` for additional dependencies and version requirements. ```setup pip install -r requirements.txt ``` ### Data preparation #### Download - Download the images from [images](https://bdd-data.berkeley.edu/). - Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing). - Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing). - Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing). We recommend the dataset directory structure to be the following: ``` # The id represent the correspondence relation ├─dataset root │ ├─images │ │ ├─train │ │ ├─val │ ├─det_annotations │ │ ├─train │ │ ├─val │ ├─da_seg_annotations │ │ ├─train │ │ ├─val │ ├─ll_seg_annotations │ │ ├─train │ │ ├─val ``` Update the your dataset path in the `./lib/config/default.py`. ### Training You can set the training configuration in the `./lib/config/default.py`. (Including: the loading of preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch_size). If you want try alternating optimization or train model for single task, please modify the corresponding configuration in `./lib/config/default.py` to `True`. (As following, all configurations is `False`, which means training multiple tasks end to end). 
```python
# Alternating optimization
_C.TRAIN.SEG_ONLY = False          # Only train the two segmentation branches
_C.TRAIN.DET_ONLY = False          # Only train the detection branch
_C.TRAIN.ENC_SEG_ONLY = False      # Only train the encoder and the two segmentation branches
_C.TRAIN.ENC_DET_ONLY = False      # Only train the encoder and the detection branch

# Single task
_C.TRAIN.DRIVABLE_ONLY = False     # Only train the da_segmentation task
_C.TRAIN.LANE_ONLY = False         # Only train the ll_segmentation task
_C.TRAIN.DET_ONLY = False          # Only train the detection task
```

Start training:

```shell
python tools/train.py
```

### Evaluation

You can set the evaluation configuration in `./lib/config/default.py` (including batch_size and the threshold value for NMS).

Start evaluating:

```shell
python tools/test.py --weights weights/End-to-end.pth
```

### Demo Test

We provide two testing methods.

#### Folder

You can store the images or videos in `--source`, and the inference results will be saved to `--save-dir`:

```shell
python tools/demo.py --source inference/images
```

#### Camera

If there is a camera connected to your computer, you can set `--source` to the camera number (the default is 0):

```shell
python tools/demo.py --source 0
```

### Deployment

Our model can run inference in real time on a `Jetson TX2`, with a `Zed Camera` to capture images. We use `TensorRT` for acceleration. We provide the code for deployment and inference of the model in `./toolkits/deploy`.

## Citation

If you find our paper and code useful for your research, please consider giving a star and citation:

```BibTeX
@misc{2108.11250,
Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang},
Title = {YOLOP: You Only Look Once for Panoptic Driving Perception},
Year = {2021},
Eprint = {arXiv:2108.11250},
}
```
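The upstream repository also exposes the model through PyTorch Hub. Assuming that hub entry point is available (this is not part of the card above), loading the pretrained network and running it on a dummy tensor could look like this sketch; a real image would need to be resized to 640x640 and normalized first:

```python
import torch

# Load YOLOP from the upstream repository via PyTorch Hub
# (assumes the 'hustvl/yolop' hubconf is available and downloads pretrained weights)
model = torch.hub.load("hustvl/yolop", "yolop", pretrained=True)
model.eval()

# Dummy 640x640 RGB input; replace with a properly preprocessed image tensor
img = torch.randn(1, 3, 640, 640)
with torch.no_grad():
    det_out, da_seg_out, ll_seg_out = model(img)

# Detection output, drivable-area segmentation, lane-line segmentation
print(type(det_out), da_seg_out.shape, ll_seg_out.shape)
```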
frahman/distilbert-base-uncased-distilled-clinc
frahman
2022-02-28T15:54:22Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9406451612903226 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.1002 - Accuracy: 0.9406 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9039 | 1.0 | 318 | 0.5777 | 0.7335 | | 0.4486 | 2.0 | 636 | 0.2860 | 0.8768 | | 0.2528 | 3.0 | 954 | 0.1792 | 0.9210 | | 0.176 | 4.0 | 1272 | 0.1398 | 0.9274 | | 0.1417 | 5.0 | 1590 | 0.1209 | 0.9329 | | 0.1245 | 6.0 | 1908 | 0.1110 | 0.94 | | 0.1135 | 7.0 | 2226 | 0.1061 | 0.9390 | | 0.1074 | 8.0 | 2544 | 0.1026 | 0.94 | | 0.1032 | 9.0 | 2862 | 0.1006 | 0.9410 | | 0.1017 | 10.0 | 3180 | 0.1002 | 0.9406 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
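A sketch of intent prediction without the `pipeline` helper, reading the predicted intent name from the model's `id2label` mapping; the query is an invented example:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "frahman/distilbert-base-uncased-distilled-clinc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Invented example query; CLINC150 ("plus") covers 150 intents plus out-of-scope
inputs = tokenizer("What's the balance on my savings account?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = logits.softmax(dim=-1)[0]
pred = int(probs.argmax())
print(model.config.id2label[pred], float(probs[pred]))
```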
EngNada/wav2vec2-large-xlsr-53-demo-colab
EngNada
2022-02-28T15:47:56Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xlsr-53-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 7.9807 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 22.8021 | 1.78 | 80 | 7.9807 | 1.0 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
mradau/stress_score
mradau
2022-02-28T15:34:22Z
5
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_keras_callback model-index: - name: tmp10l_qol1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tmp10l_qol1 This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.6
frahman/distilbert-base-uncased-finetuned-clinc
frahman
2022-02-28T15:10:11Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9187096774193548 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7703 - Accuracy: 0.9187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2896 | 1.0 | 318 | 3.2887 | 0.7419 | | 2.6309 | 2.0 | 636 | 1.8797 | 0.8310 | | 1.5443 | 3.0 | 954 | 1.1537 | 0.8974 | | 1.0097 | 4.0 | 1272 | 0.8560 | 0.9135 | | 0.7918 | 5.0 | 1590 | 0.7703 | 0.9187 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
inovex/multi2convai-quality-fr-logreg-ft
inovex
2022-02-28T13:43:14Z
0
0
null
[ "text-classification", "fr", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Hosted inference API not supported" license: mit language: fr --- # Multi2ConvAI-Quality: French logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: French (fr) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-quality-fr-logreg-ft >>> Create pipeline for config: multi2convai-quality-fr-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'fr'. >>> >>> Enter your text (type 'stop' to end execution): Lancer le programme >>> 'Lancer le programme' was classified as 'neo.start' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "fr" domain = "quality" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/fr/wiki.200k.fr.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/fr/wiki.200k.fr.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Lancer le programme") label >>> Label(string='neo.start', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/fr curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.fr.vec --output models/fasttext/fr/wiki.fr.vec python scripts/serialize_fasttext.py -r fasttext/wiki.fr.vec -v fasttext/fr/wiki.200k.fr.vocab -e fasttext/fr/wiki.200k.fr.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-quality-en-logreg-ft
inovex
2022-02-28T13:42:54Z
0
0
null
[ "text-classification", "en", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Hosted inference API not supported" license: mit language: en --- # Multi2ConvAI-Quality: English logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: English (en) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-quality-en-logreg-ft >>> Create pipeline for config: multi2convai-quality-en-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'en'. >>> >>> Enter your text (type 'stop' to end execution): Start the program >>> 'Start the program' was classified as 'neo.start' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "en" domain = "quality" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/en/wiki.200k.en.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/en/wiki.200k.en.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Start the program") label >>> Label(string='neo.start', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/en curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.vec --output models/fasttext/en/wiki.en.vec python scripts/serialize_fasttext.py -r fasttext/wiki.en.vec -v fasttext/en/wiki.200k.en.vocab -e fasttext/en/wiki.200k.en.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-quality-de-logreg-ft
inovex
2022-02-28T13:42:37Z
0
0
null
[ "text-classification", "de", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
tags:
- text-classification
widget:
- text: "Hosted inference API not supported"
license: mit
language: de
---

# Multi2ConvAI-Quality: German logistic regression model using fasttext embeddings

This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project:

- domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases)))
- language: German (de)
- model type: logistic regression
- embeddings: fastText embeddings

## How to run

Requires:

- [multi2convai](https://github.com/inovex/multi2convai)
- serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md))

### Run with one line of code

After installing `multi2convai` and locally available fastText embeddings you can run:

````bash
# assumes working dir is the root of the cloned multi2convai repo
python scripts/run_inference.py -m multi2convai-quality-de-logreg-ft

>>> Create pipeline for config: multi2convai-quality-de-logreg-ft.
>>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'de'.
>>>
>>> Enter your text (type 'stop' to end execution): Starte das Programm
>>> 'Starte das Programm' was classified as 'neo.start' (confidence: 0.8943)
````

### How to run the model using multi2convai

After installing `multi2convai` and locally available fastText embeddings you can run:

````python
# assumes working dir is the root of the cloned multi2convai repo
from pathlib import Path

from multi2convai.pipelines.inference.base import ClassificationConfig
from multi2convai.pipelines.inference.logistic_regression_fasttext import (
    LogisticRegressionFasttextConfig,
    LogisticRegressionFasttextPipeline,
)

language = "de"
domain = "quality"

# 1. Define paths of model, label dict and embeddings
model_file = "model.pth"
label_dict_file = "label_dict.json"
embedding_path = Path(
    f"../models/embeddings/fasttext/de/wiki.200k.de.embed"
)
vocabulary_path = Path(
    f"../models/embeddings/fasttext/de/wiki.200k.de.vocab"
)

# 2. Create and setup pipeline
model_config = LogisticRegressionFasttextConfig(
    model_file, embedding_path, vocabulary_path
)
config = ClassificationConfig(language, domain, label_dict_file, model_config)

pipeline = LogisticRegressionFasttextPipeline(config)
pipeline.setup()

# 3. Run intent classification on a text of your choice
label = pipeline.run("Starte das Programm")

label
>>> Label(string='neo.start', ratio='0.8943')
````

### Download and serialize fastText

````bash
# assumes working dir is the root of the cloned multi2convai repo
mkdir models/fasttext/de
curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec
python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000
````

## Further information on Multi2ConvAI:

- https://multi2conv.ai
- https://github.com/inovex/multi2convai
- mailto: [email protected]
inovex/multi2convai-quality-it-logreg-ft
inovex
2022-02-28T13:42:18Z
0
0
null
[ "text-classification", "it", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Hosted inference API not supported" license: mit language: it --- # Multi2ConvAI-Quality: Italian logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Quality (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: Italian (ml) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-quality-it-logreg-ft >>> Create pipeline for config: multi2convai-quality-it-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'quality' and language 'it'. >>> >>> Enter your text (type 'stop' to end execution): Avviare il programma >>> 'Avviare il programma' was classified as 'neo.start' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "it" domain = "quality" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/it/wiki.200k.it.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/it/wiki.200k.it.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Avviare il programma") label >>> Label(string='neo.start', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/it curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.it.vec --output models/fasttext/it/wiki.it.vec python scripts/serialize_fasttext.py -r fasttext/wiki.it.vec -v fasttext/it/wiki.200k.it.vocab -e fasttext/it/wiki.200k.it.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
inovex/multi2convai-logistics-de-logreg-ft
inovex
2022-02-28T12:31:23Z
0
0
null
[ "text-classification", "de", "license:mit", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - text-classification widget: - text: "Hosted inference API not supported" license: mit language: de --- # Multi2ConvAI-Logistics: German logistic regression model using fasttext embeddings This model was developed in the [Multi2ConvAI](https://multi2conv.ai) project: - domain: Logistics (more details about our use cases: ([en](https://multi2convai/en/blog/use-cases), [de](https://multi2convai/en/blog/use-cases))) - language: German (de) - model type: logistic regression - embeddings: fastText embeddings ## How to run Requires: - [multi2convai](https://github.com/inovex/multi2convai) - serialized fastText embeddings (see last section of this readme or [these instructions](https://github.com/inovex/multi2convai/models/embeddings.README.md)) ### Run with one line of code After installing `multi2convai` and locally available fastText embeddings you can run: ````bash # assumes working dir is the root of the cloned multi2convai repo python scripts/run_inference.py -m multi2convai-logistics-de-logreg-ft >>> Create pipeline for config: multi2convai-logistics-de-logreg-ft. >>> Created a LogisticRegressionFasttextPipeline for domain: 'logistics' and language 'de'. >>> >>> Enter your text (type 'stop' to end execution): Muss ich eine Maske tragen? >>> 'Wo kann ich das Paket ablegen?' was classified as 'details.safeplace' (confidence: 0.8943) ```` ### How to run model using multi2convai After installing `multi2convai` and locally available fastText embeddings you can run: ````python # assumes working dir is the root of the cloned multi2convai repo from pathlib import Path from multi2convai.pipelines.inference.base import ClassificationConfig from multi2convai.pipelines.inference.logistic_regression_fasttext import ( LogisticRegressionFasttextConfig, LogisticRegressionFasttextPipeline, ) language = "de" domain = "logistics" # 1. Define paths of model, label dict and embeddings model_file = "model.pth" label_dict_file = "label_dict.json" embedding_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.embed" ) vocabulary_path = Path( f"../models/embeddings/fasttext/de/wiki.200k.de.vocab" ) # 2. Create and setup pipeline model_config = LogisticRegressionFasttextConfig( model_file, embedding_path, vocabulary_path ) config = ClassificationConfig(language, domain, label_dict_file, model_config) pipeline = LogisticRegressionFasttextPipeline(config) pipeline.setup() # 3. Run intent classification on a text of your choice label = pipeline.run("Wo kann ich das Paket ablegen?") label >>> Label(string='details.safeplace', ratio='0.8943') ```` ### Download and serialize fastText ````bash # assumes working dir is the root of the cloned multi2convai repo mkdir models/fasttext/de curl https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.de.vec --output models/fasttext/de/wiki.de.vec python scripts/serialize_fasttext.py -r fasttext/wiki.de.vec -v fasttext/de/wiki.200k.de.vocab -e fasttext/de/wiki.200k.de.embed -n 200000 ```` ## Further information on Multi2ConvAI: - https://multi2conv.ai - https://github.com/inovex/multi2convai - mailto: [email protected]
cnicu/led-booksum
cnicu
2022-02-28T12:12:55Z
17
1
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "summarization", "dataset:kmfoda/booksum", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- license: mit tags: - summarization datasets: - kmfoda/booksum ---
cnicu/pegasus-large-booksum
cnicu
2022-02-28T12:12:37Z
16
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "summarization", "dataset:kmfoda/booksum", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- license: mit tags: - summarization datasets: - kmfoda/booksum ---
spy24/autonlp-AUS-to-US-601516964
spy24
2022-02-28T11:21:11Z
5
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autonlp", "unk", "dataset:spy24/autonlp-data-AUS-to-US", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - spy24/autonlp-data-AUS-to-US co2_eq_emissions: 3.3930796843275846 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 601516964 - CO2 Emissions (in grams): 3.3930796843275846 ## Validation Metrics - Loss: 1.9823806285858154 - Rouge1: 42.8783 - Rouge2: 7.4603 - RougeL: 42.8492 - RougeLsum: 43.0556 - Gen Len: 2.8952 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-AUS-to-US-601516964 ```
spy24/autonlp-UK-to-US-600416931
spy24
2022-02-28T09:59:04Z
3
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autonlp", "unk", "dataset:spy24/autonlp-data-UK-to-US", "co2_eq_emissions", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - spy24/autonlp-data-UK-to-US co2_eq_emissions: 1.113131499202784 --- # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 600416931 - CO2 Emissions (in grams): 1.113131499202784 ## Validation Metrics - Loss: 1.8278849124908447 - Rouge1: 45.7945 - Rouge2: 8.5245 - RougeL: 45.8031 - RougeLsum: 45.9067 - Gen Len: 3.0622 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/spy24/autonlp-UK-to-US-600416931 ```
Theivaprakasham/layoutlmv2-finetuned-sroie_mod
Theivaprakasham
2022-02-28T09:50:47Z
7
1
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv2", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-finetuned-sroie_mod results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-finetuned-sroie_mod This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 3000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.8.0+cu101 - Datasets 1.18.3 - Tokenizers 0.11.0
peterhsu/marian-finetuned-kde4-en-to-zh_TW-accelerate
peterhsu
2022-02-28T09:36:28Z
10
0
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "dataset:kde4", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - translation datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-zh_TW-accelerate results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-zh_TW metrics: - name: Bleu type: bleu value: 40.07 --- # marian-finetuned-kde4-en-to-zh_TW-accelerate ## Model description This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set: - Bleu: 40.70 More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
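No usage snippet is included in the card. A minimal English-to-Traditional-Chinese translation sketch with the 🤗 `pipeline` API could look like this; the sentence is an invented example from the software/KDE domain the model was tuned on:

```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="peterhsu/marian-finetuned-kde4-en-to-zh_TW-accelerate",
)

# Invented example sentence, purely for illustration
print(translator("Unable to open the file. Please check the path and try again."))
# e.g. [{'translation_text': '...'}]
```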
Kuray107/timit-5percent-supervised
Kuray107
2022-02-28T06:07:49Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: timit-5percent-supervised results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # timit-5percent-supervised This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6615 - Wer: 0.2788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 200 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 5.3773 | 33.33 | 500 | 2.9693 | 1.0 | | 1.4746 | 66.67 | 1000 | 0.5050 | 0.3359 | | 0.1067 | 100.0 | 1500 | 0.5981 | 0.3054 | | 0.0388 | 133.33 | 2000 | 0.6192 | 0.2712 | | 0.0244 | 166.67 | 2500 | 0.6392 | 0.2776 | | 0.018 | 200.0 | 3000 | 0.6615 | 0.2788 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
Kuray107/timit-supervised
Kuray107
2022-02-28T02:18:20Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer model-index: - name: timit-supervised results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # timit-supervised This model is a fine-tuned version of [Experiments/single_dataset/timit-supervised/checkpoint-3500](https://huggingface.co/Experiments/single_dataset/timit-supervised/checkpoint-3500) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1272 - Wer: 0.0532 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0554 | 1.77 | 500 | 0.1310 | 0.0697 | | 0.0509 | 3.53 | 1000 | 0.1497 | 0.0710 | | 0.038 | 5.3 | 1500 | 0.1190 | 0.0659 | | 0.0328 | 7.07 | 2000 | 0.0926 | 0.0596 | | 0.0247 | 8.83 | 2500 | 0.0873 | 0.0570 | | 0.0229 | 10.6 | 3000 | 0.0890 | 0.0532 | | 0.0183 | 12.37 | 3500 | 0.0969 | 0.0532 | | 0.0326 | 14.13 | 4000 | 0.0809 | 0.0469 | | 0.03 | 15.9 | 4500 | 0.0758 | 0.0444 | | 0.0264 | 17.67 | 5000 | 0.0973 | 0.0520 | | 0.0244 | 19.43 | 5500 | 0.1272 | 0.0532 | ### Framework versions - Transformers 4.14.1 - Pytorch 1.10.2 - Datasets 1.18.2 - Tokenizers 0.10.3
mipatov/rugpt3_nb_descr
mipatov
2022-02-27T23:44:38Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
based on `sberbank-ai/rugpt3medium_based_on_gpt2` finetuned for generate text description for notebook-devices
mipatov/rut5_nb_descr
mipatov
2022-02-27T23:43:38Z
15
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
based on `sberbank-ai/ruT5-large` finetuned for generate text description for notebook-devices
MatsUy/wav2vec2-common_voice-nl-demo
MatsUy
2022-02-27T22:07:14Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "common_voice", "generated_from_trainer", "nl", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: - nl license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-nl-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-nl-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - NL dataset. It achieves the following results on the evaluation set: - Loss: 0.3523 - Wer: 0.2046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0536 | 1.12 | 500 | 0.5349 | 0.4338 | | 0.2543 | 2.24 | 1000 | 0.3859 | 0.3029 | | 0.1472 | 3.36 | 1500 | 0.3471 | 0.2818 | | 0.1088 | 4.47 | 2000 | 0.3489 | 0.2731 | | 0.0855 | 5.59 | 2500 | 0.3582 | 0.2558 | | 0.0721 | 6.71 | 3000 | 0.3457 | 0.2471 | | 0.0653 | 7.83 | 3500 | 0.3299 | 0.2357 | | 0.0527 | 8.95 | 4000 | 0.3440 | 0.2334 | | 0.0444 | 10.07 | 4500 | 0.3417 | 0.2289 | | 0.0404 | 11.19 | 5000 | 0.3691 | 0.2204 | | 0.0345 | 12.3 | 5500 | 0.3453 | 0.2102 | | 0.0288 | 13.42 | 6000 | 0.3634 | 0.2089 | | 0.027 | 14.54 | 6500 | 0.3532 | 0.2044 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
osanseviero/xlm-roberta-base-finetuned-panx-de
osanseviero
2022-02-27T21:34:59Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8647022085959235 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1344 - F1: 0.8647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2568 | 1.0 | 525 | 0.1596 | 0.8210 | | 0.1279 | 2.0 | 1050 | 0.1368 | 0.8522 | | 0.0814 | 3.0 | 1575 | 0.1344 | 0.8647 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
ali2066
2022-02-27T21:30:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4638 - Accuracy: 0.8247 - F1: 0.8867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4069 | 0.7976 | 0.875 | | No log | 2.0 | 390 | 0.4061 | 0.8134 | 0.8838 | | 0.4074 | 3.0 | 585 | 0.4075 | 0.8134 | 0.8798 | | 0.4074 | 4.0 | 780 | 0.4746 | 0.8256 | 0.8885 | | 0.4074 | 5.0 | 975 | 0.4881 | 0.8220 | 0.8845 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
lighteternal/fact-or-opinion-xlmr-el
lighteternal
2022-02-27T19:41:57Z
949
21
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "fact-or-opinion", "en", "el", "multilingual", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en - el - multilingual tags: - text-classification - fact-or-opinion - transformers widget: - text: "Ξεχωρίζει η καθηλωτική ερμηνεία του πρωταγωνιστή." - text: "Η Ελλάδα είναι χώρα της Ευρώπης." - text: "Tolkien was an English writer" - text: "Tolkien is my favorite writer." pipeline_tag: text-classification license: apache-2.0 --- # Fact vs. opinion binary classifier, trained on a mixed EN-EL annotated corpus. ### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC) This is an XLM-Roberta-base model with a binary classification head. Given a sentence, it can classify it either as a fact or an opinion based on its content. You can use this model in any of the XLM-R supported languages for the same task, taking advantage of its 0-shot learning capabilities. However, the model was trained only using English and Greek sentences. Legend of HuggingFace API labels: * Label 0: Opinion/Subjective sentence * Label 1: Fact/Objective sentence ## Dataset training info The original dataset (available here: https://github.com/1024er/cbert_aug/tree/crayon/datasets/subj) contained aprox. 9000 annotated sentences (classified as subjective or objective). It was translated to Greek using Google Translate. The Greek version was then concatenated with the original English one to create the mixed EN-EL dataset. The model was trained for 5 epochs, using batch size = 8. Detailed metrics and hyperparameters available on the "Metrics" tab. ## Evaluation Results on test set | accuracy | precision | recall | f1 | | ----------- | ----------- | ----------- | ----------- | |0.952 | 0.945 | 0.960 | 0.952 | ## Acknowledgement The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
ali2066/finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
ali2066
2022-02-27T18:50:02Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0890 - Accuracy: 0.9750 - F1: 0.9873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 104 | 0.0485 | 0.9885 | 0.9942 | | No log | 2.0 | 208 | 0.0558 | 0.9857 | 0.9927 | | No log | 3.0 | 312 | 0.0501 | 0.9828 | 0.9913 | | No log | 4.0 | 416 | 0.0593 | 0.9828 | 0.9913 | | 0.04 | 5.0 | 520 | 0.0653 | 0.9828 | 0.9913 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
ali2066
2022-02-27T18:46:16Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0926 - Accuracy: 0.9772 - F1: 0.9883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 104 | 0.0539 | 0.9885 | 0.9942 | | No log | 2.0 | 208 | 0.0282 | 0.9885 | 0.9942 | | No log | 3.0 | 312 | 0.0317 | 0.9914 | 0.9956 | | No log | 4.0 | 416 | 0.0462 | 0.9885 | 0.9942 | | 0.0409 | 5.0 | 520 | 0.0517 | 0.9885 | 0.9942 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
ali2066
2022-02-27T18:33:05Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3455 - Accuracy: 0.8609 - F1: 0.9156 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 | | No log | 2.0 | 162 | 0.4497 | 0.8382 | 0.9 | | No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 | | No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 | | No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
ali2066
2022-02-27T18:16:49Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4064 - Accuracy: 0.8289 - F1: 0.8901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 | | No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 | | 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 | | 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 | | 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
bullmount/hseBert-it-cased
bullmount
2022-02-27T18:08:11Z
14
2
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: it license: mit widget: - text: "È stata pubblicata la [MASK] di conversione del D.L. 24 dicembre 2021 n. 221 ." - text: "La legge fornisce l’esatta [MASK] di Green pass base." - text: "Il datore di lavoro organizza e predispone i posti di lavoro di cui all'articolo 173, in [MASK] ai requisiti minimi di cui all'allegato XXXIV." - text: "Le principali novità riguardano la quarantena precauzionale e il [MASK] di autosorveglianza." --- # hseBERT **hseBert-it-cased** is a BERT model obtained by MLM adaptive-tuning [**bert-base-italian-xxl-cased**](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on texts of Italian regulation (Testo unico sulla sicurezza sul lavoro - D.lgs. 9 aprile 2008, n. 81, Codice dell'Ambiente - D.lgs. 3 aprile 2006, n. 152), approximately 7k sentences. # Usage ```python from transformers import AutoModel, AutoTokenizer model_name = "bullmount/hseBert-it-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ```
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
ali2066
2022-02-27T17:59:00Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6049 - Accuracy: 0.6926 - F1: 0.4160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 | | No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 | | No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 | | No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 | | No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
ali2066
2022-02-27T17:54:05Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6049 - Accuracy: 0.6926 - F1: 0.4160 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 | | No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 | | No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 | | No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 | | No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
ali2066
2022-02-27T17:51:50Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8231 - F1: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 | | No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 | | 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 | | 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 | | 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
ali2066
2022-02-27T17:46:15Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8231 - F1: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 | | No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 | | 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 | | 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 | | 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
nimrah/wav2vec2-large-xls-r-300m-my_hindi_home-latest-colab
nimrah
2022-02-27T17:42:46Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-my_hindi_home-latest-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-my_hindi_home-latest-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
ali2066
2022-02-27T17:40:35Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8231 - F1: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 | | No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 | | 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 | | 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 | | 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
ali2066
2022-02-27T17:34:56Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3962 - Accuracy: 0.8231 - F1: 0.8873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 | | No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 | | 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 | | 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 | | 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
ali2066
2022-02-27T17:06:54Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7600 - Accuracy: 0.8144 - F1: 0.8788 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 | | No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 | | 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 | | 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 | | 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
ali2066
2022-02-27T16:55:39Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4095 - Accuracy: 0.8263 - F1: 0.8865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 | | No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 | | 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 | | 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 | | 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
ali2066
2022-02-27T16:38:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4095 - Accuracy: 0.8263 - F1: 0.8865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 | | No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 | | 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 | | 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 | | 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu113 - Datasets 1.18.0 - Tokenizers 0.10.3
emilyalsentzer/Bio_Discharge_Summary_BERT
emilyalsentzer
2022-02-27T13:59:50Z
5,949
34
transformers
[ "transformers", "pytorch", "jax", "bert", "fill-mask", "en", "arxiv:1904.03323", "arxiv:1901.08746", "license:mit", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: "en" tags: - fill-mask license: mit --- # ClinicalBERT - Bio + Discharge Summary BERT Model The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries. This model card describes the Bio+Discharge Summary BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on only discharge summaries from MIMIC. ## Pretraining Data The `Bio_Discharge_Summary_BERT` model was trained on all discharge summaries from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). All notes from the `NOTEEVENTS` table were included (~880M words). ## Model Pretraining ### Note Preprocessing Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en core sci md` tokenizer). ### Pretraining Procedures The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`). ### Pretraining Hyperparameters We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5 · 10−5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20). ## How to use the model Load the model via the transformers library: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT") model = AutoModel.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT") ``` ## More Information Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks. ## Questions? Post a Github issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email [email protected] with any questions.
facebook/wav2vec2-base-el-voxpopuli-v2
facebook
2022-02-27T13:15:45Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "el", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: el tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **el** on **17.7k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **el**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-lt-voxpopuli-v2
facebook
2022-02-27T13:15:36Z
22
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "lt", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: lt tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **lt** on **14.4k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lt**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-lv-voxpopuli-v2
facebook
2022-02-27T13:15:26Z
6
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "lv", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: lv tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **lv** on **13.1k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **lv**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-sl-voxpopuli-v2
facebook
2022-02-27T13:14:49Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "sl", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: sl tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sl** on **11.3k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sl**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-hr-voxpopuli-v2
facebook
2022-02-27T13:14:14Z
6
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "hr", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: hr tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **hr** on **8.1k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **hr**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-cs-voxpopuli-v2
facebook
2022-02-27T13:14:02Z
4
1
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "cs", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: cs tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **cs** on **18.7k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **cs**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-bg-voxpopuli-v2
facebook
2022-02-27T13:13:50Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "bg", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: bg tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **bg** on **17.6k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **bg**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-da-voxpopuli-v2
facebook
2022-02-27T13:13:38Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "da", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: da tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **da** on **13.6k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **da**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).
facebook/wav2vec2-base-sv-voxpopuli-v2
facebook
2022-02-27T13:13:27Z
9
0
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "voxpopuli-v2", "sv", "dataset:voxpopuli", "arxiv:2101.00390", "license:cc-by-nc-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: sv tags: - audio - automatic-speech-recognition - voxpopuli-v2 datasets: - voxpopuli license: cc-by-nc-4.0 inference: false --- # Wav2Vec2-base-VoxPopuli-V2 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) base model pretrained only in **sv** on **16.3k** unlabeled datat of the [VoxPopuli corpus](https://arxiv.org/abs/2101.00390). The model is pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data in **sv**. Check out [this blog](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) for a more in-detail explanation of how to fine-tune the model. **Paper**: *[VoxPopuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation](https://arxiv.org/abs/2101.00390)* **Authors**: *Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, Emmanuel Dupoux* from *Facebook AI*. See the official website for more information, [here](https://github.com/facebookresearch/voxpopuli/).