| column | dtype | min | max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-01 00:49:44 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (461 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-01 00:49:44 |
| card | string (length) | 11 | 1.01M |
lighteternal/SSE-TUC-mt-en-el-cased
lighteternal
2021-03-31T17:27:05Z
16
0
transformers
[ "transformers", "pytorch", "fsmt", "text2text-generation", "translation", "en", "el", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
language:
- en
- el
tags:
- translation
widget:
- text: "'Katerina', is the best name for a girl."
license: apache-2.0
metrics:
- bleu
---

## English to Greek NMT
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)

* source languages: en
* target languages: el
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf

### Model description

Trained using the Fairseq framework, transformer_iwslt_de_en architecture.
BPE segmentation (20k codes).
Mixed-case model.

### How to use

```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration

mname = "lighteternal/SSE-TUC-mt-en-el-cased"

tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

text = " 'Katerina', is the best name for a girl."

encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs, start=1):
    print(f"{i}: {output.tolist()}")
    decoded = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i}: {decoded}")
```

## Training data

Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)

## Eval results

Results on Tatoeba testset (EN-EL):

| BLEU | chrF |
| ------ | ------ |
| 76.9 | 0.733 |

Results on XNLI parallel (EN-EL):

| BLEU | chrF |
| ------ | ------ |
| 65.4 | 0.624 |

### BibTeX entry and citation info

Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW.

### Acknowledgement

The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call)
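For quick experiments, the same checkpoint can also be driven through the high-level `pipeline` API. A minimal sketch, assuming the generic `transformers` translation pipeline accepts FSMT checkpoints (this snippet is not part of the original card; the beam setting mirrors the example above):

```python
from transformers import pipeline

# Assumption: FSMT seq2seq checkpoints dispatch under the generic "translation" task.
translator = pipeline("translation", model="lighteternal/SSE-TUC-mt-en-el-cased")

result = translator(" 'Katerina', is the best name for a girl.", num_beams=5)
print(result[0]["translation_text"])
```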
lighteternal/SSE-TUC-mt-el-en-lowercase
lighteternal
2021-03-31T17:26:44Z
10
0
transformers
[ "transformers", "pytorch", "fsmt", "text2text-generation", "translation", "en", "el", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
language:
- en
- el
tags:
- translation
widget:
- text: "Η τύχη βοηθάει τους τολμηρούς."
license: apache-2.0
metrics:
- bleu
---

## Greek to English NMT (lower-case output)
## By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)

* source languages: el
* target languages: en
* licence: apache-2.0
* dataset: Opus, CCmatrix
* model: transformer(fairseq)
* pre-processing: tokenization + BPE segmentation
* metrics: bleu, chrf
* output: lowercase only; for a mixed-case model use this: https://huggingface.co/lighteternal/SSE-TUC-mt-el-en-cased

### Model description

Trained using the Fairseq framework, transformer_iwslt_de_en architecture.
BPE segmentation (10k codes).
Lower-case model.

### How to use

```python
from transformers import FSMTTokenizer, FSMTForConditionalGeneration

mname = "lighteternal/SSE-TUC-mt-el-en-lowercase"

tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

text = "Η τύχη βοηθάει τους τολμηρούς."

encoded = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(encoded, num_beams=5, num_return_sequences=5, early_stopping=True)
for i, output in enumerate(outputs, start=1):
    print(f"{i}: {output.tolist()}")
    decoded = tokenizer.decode(output, skip_special_tokens=True)
    print(f"{i}: {decoded}")
```

## Training data

Consolidated corpus from Opus and CC-Matrix (~6.6GB in total)

## Eval results

Results on Tatoeba testset (EL-EN):

| BLEU | chrF |
| ------ | ------ |
| 79.3 | 0.795 |

Results on XNLI parallel (EL-EN):

| BLEU | chrF |
| ------ | ------ |
| 66.2 | 0.623 |

### BibTeX entry and citation info

Dimitris Papadopoulos, et al. "PENELOPIE: Enabling Open Information Extraction for the Greek Language through Machine Translation." (2021). Accepted at EACL 2021 SRW.

### Acknowledgement

The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number: 50, 2nd call)
Wikidepia/indobert-lite-squadx
Wikidepia
2021-03-31T13:28:04Z
26
0
transformers
[ "transformers", "pytorch", "albert", "question-answering", "id", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: id
widget:
- text: "Kapan Einstein melepas kewarganegaraan Jerman?"
  context: "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900."
---

# IndoBERT-Lite-SQuAD base fine-tuned on Full Translated SQuAD v2

[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/squad) for the **Q&A** downstream task.

## Model in action

Fast usage with **pipelines**:

```python
from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained(
    'Wikidepia/indobert-lite-squadx'
)
qa_pipeline = pipeline(
    "question-answering",
    model="Wikidepia/indobert-lite-squadx",
    tokenizer=tokenizer
)

qa_pipeline({
    'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
    'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```

# Output:

```json
{
  "score": 0.9169162511825562,
  "start": 147,
  "end": 151,
  "answer": "1896"
}
```

README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
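The pipeline hides the span decoding. A minimal sketch of the same query without it, assuming the checkpoint loads through `AutoModelForQuestionAnswering` (the argmax start/end selection below is the standard extractive-QA recipe, not code from this card):

```python
import torch
from transformers import BertTokenizerFast, AutoModelForQuestionAnswering

model_name = "Wikidepia/indobert-lite-squadx"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "Kapan Einstein melepas kewarganegaraan Jerman?"
context = ("Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss "
           "antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, "
           "dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische "
           "Technische Hochschule, ETH) di Zürich pada tahun 1900.")

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the span between the most likely start and end token positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```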
Wikidepia/indobert-lite-squad
Wikidepia
2021-03-31T13:26:55Z
132
6
transformers
[ "transformers", "pytorch", "albert", "question-answering", "id", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language: id
widget:
- text: "Kapan Einstein melepas kewarganegaraan Jerman?"
  context: "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900."
---

# IndoBERT-Lite base fine-tuned on Translated SQuAD v2

[IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2) trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/SQuAD) for the **Q&A** downstream task.

## Model in action

Fast usage with **pipelines**:

```python
from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained(
    'Wikidepia/indobert-lite-squad'
)
qa_pipeline = pipeline(
    "question-answering",
    model="Wikidepia/indobert-lite-squad",
    tokenizer=tokenizer
)

qa_pipeline({
    'context': "Setelah menghabiskan waktu satu tahun di Praha, Einstein tinggal di Swiss antara tahun 1895 dan 1914, melepas kewarganegaraan Jermannya pada tahun 1896, dan lulus sarjana dari sekolah politeknik federal Swiss (kelak Eidgenössische Technische Hochschule, ETH) di Zürich pada tahun 1900.",
    'question': "Kapan Einstein melepas kewarganegaraan Jerman?"
})
```

# Output:

```json
{
  "score": 0.9799205660820007,
  "start": 147,
  "end": 151,
  "answer": "1896"
}
```

README copied from [mrm8488's repository](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2)
katoensp/GG-12
katoensp
2021-03-30T15:55:30Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://www.geogebra.org/m/cwcveget https://www.geogebra.org/m/b8dzxk6z https://www.geogebra.org/m/nqanttum https://www.geogebra.org/m/pd3g8a4u https://www.geogebra.org/m/jw8324jz https://www.geogebra.org/m/wjbpvz5q https://www.geogebra.org/m/qm3g3ma6 https://www.geogebra.org/m/sdajgph8 https://www.geogebra.org/m/e3ghhcbf https://www.geogebra.org/m/msne4bfm https://www.geogebra.org/m/nmcv2te5 https://www.geogebra.org/m/hguqx6cn https://www.geogebra.org/m/jnyvpgqu https://www.geogebra.org/m/syctd97g https://www.geogebra.org/m/nq9erdby https://www.geogebra.org/m/au4har8c
xsway/wav2vec2-large-xlsr-georgian
xsway
2021-03-29T21:07:53Z
2,540
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ka", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: ka
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec finetuned for Georgian
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ka
      type: common_voice
      args: ka
    metrics:
    - name: Test WER
      type: wer
      value: 45.28
---

# Wav2Vec2-Large-XLSR-53-Georgian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")

resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Georgian test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import librosa

test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 45.28 %

## Training

The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](...)
othrif/wav2vec2-large-xlsr-arabic
othrif
2021-03-29T18:43:31Z
74
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ar", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: ar
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Arabic by Othmane Rifki
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ar
      type: common_voice
      args: ar
    metrics:
    - name: Test WER
      type: wer
      value: 46.77
---

# Wav2Vec2-Large-XLSR-53-Arabic

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Arabic test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model.to("cuda")

chars_to_ignore_regex = '[\؛\—\_\«\»\ـ\,\?\.\!\-\;\:\"\“\%\‘\”\�\#\،\☭\؟]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 46.77 %

## Training

The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://huggingface.co/othrif/wav2vec2-large-xlsr-arabic/tree/main)
qqpann/wav2vec2-large-xlsr-japanese-0325-1200
qqpann
2021-03-29T10:26:40Z
8
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ja", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Japanese XLSR Wav2Vec2 Large 53
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ja
      type: common_voice
      args: ja
    metrics:
    - name: Test WER
      type: wer
      value: { wer_result_on_test } # TODO: fill in the WER achieved on the common_voice test set, in the format XX.XX (no % sign)
---

# Wav2Vec2-Large-XLSR-53-Japanese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... datasets.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("qqpann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqpann/wav2vec2-large-xlsr-japanese-0325-1200")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Japanese test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("qqpann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqpann/wav2vec2-large-xlsr-japanese-0325-1200")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'  # TODO: adapt this list to include all special characters removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: XX.XX % <!-- TODO: fill in the WER printed by the script above; remember to mirror it in the metadata at the top. -->

## Training

The Common Voice `train`, `validation`, and ... datasets were used for training, as well as ... and ... <!-- TODO: state all the datasets that were used for training. -->

The script used for training can be found [here](...) <!-- TODO: fill in a link to the training script. -->
othrif/wav2vec_test
othrif
2021-03-29T02:48:07Z
22
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "ar", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: ar
datasets:
- https://arabicspeech.org/
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Egyptian by Zaid Alyafeai and Othmane Rifki
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: arabicspeech.org MGB-3
      type: arabicspeech.org MGB-3
      args: ar
    metrics:
    - name: Test WER
      type: wer
      value: 55.2
---

# Test Wav2Vec2 with Egyptian Arabic

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("arabic_speech_corpus", split="test")

processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec_test")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec_test")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
vasilis/wav2vec2-large-xlsr-53-finnish
vasilis
2021-03-29T02:30:18Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "fi", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: fi
datasets:
- common_voice
- CSS10 finnish: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - finnish
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice fi
      type: common_voice
      args: fi
    metrics:
    - name: Test WER
      type: wer
      value: 38.335242
    - name: Test CER
      type: cer
      value: 6.552408
---

# Wav2Vec2-Large-XLSR-53-finnish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10 finnish: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/finnish-single-speaker-speech-dataset) datasets.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "fi", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Finnish test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-finnish")
model.to("cuda")

chars_to_ignore_regex = "[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']"
replacements = {"…": "", "–": ""}
resampler = {
    48_000: torchaudio.transforms.Resample(48_000, 16_000),
    44100: torchaudio.transforms.Resample(44100, 16_000),
    32000: torchaudio.transforms.Resample(32000, 16_000),
}

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    for key, value in replacements.items():
        batch["sentence"] = batch["sentence"].replace(key, value)
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
# CER is computed as WER over space-separated characters.
print("CER: {:.2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```

**Test Result**: 38.335242 %

## Training

The Common Voice `train` dataset was used for training, together with all of `CSS10 Finnish` (using the normalized transcripts). After 20,000 steps the model was fine-tuned on the Common Voice train and validation sets for a further 2,000 steps.
wietsedv/wav2vec2-large-xlsr-53-frisian
wietsedv
2021-03-28T20:09:35Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: fy-NL
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Frisian XLSR Wav2Vec2 Large 53 by Wietse de Vries
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice fy-NL
      type: common_voice
      args: fy-NL
    metrics:
    - name: Test WER
      type: wer
      value: 16.25
---

# Wav2Vec2-Large-XLSR-53-Frisian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Frisian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "fy-NL", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Frisian test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "fy-NL", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model = Wav2Vec2ForCTC.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-frisian")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\'\“\%\‘\”]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 16.25 %

## Training

The Common Voice `train` and `validation` datasets were used for training.
pcuenq/wav2vec2-large-xlsr-53-eu
pcuenq
2021-03-28T19:35:49Z
1,790
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "eu", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: eu
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Basque by pcuenq
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice eu
      type: common_voice
      args: eu
    metrics:
    - name: Test WER
      type: wer
      value: 15.34
---

# Wav2Vec2-Large-XLSR-53-EU

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Basque using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "eu", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Basque test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "eu", split="test")
wer = load_metric("wer")

model_name = "pcuenq/wav2vec2-large-xlsr-53-eu"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)
model.to("cuda")

## Text pre-processing

chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
chars_to_ignore_pattern = re.compile(chars_to_ignore_regex)

def remove_special_characters(batch):
    batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " "
    return batch

## Audio pre-processing

import librosa
def speech_file_to_array_fn(batch):
    speech_array, sample_rate = torchaudio.load(batch["path"])
    batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000)
    return batch

# Text transformation and audio resampling
def cv_prepare(batch):
    batch = remove_special_characters(batch)
    batch = speech_file_to_array_fn(batch)
    return batch

# Number of CPUs or None
num_proc = 16
test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

# WER Metric computation
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 15.34 %

## Training

The Common Voice `train` and `validation` datasets were used for training. Training was performed for 22 + 20 epochs with the following parameters (a configuration sketch follows this card):

- Batch size 16, 2 gradient accumulation steps.
- Learning rate: 2.5e-4
- Activation dropout: 0.05
- Attention dropout: 0.1
- Hidden dropout: 0.05
- Feature proj. dropout: 0.05
- Mask time probability: 0.08
- Layer dropout: 0.05
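For readers who want to reproduce a similar run, the hyperparameters listed above map directly onto the standard `transformers` fine-tuning setup. A minimal sketch, assuming the usual `Wav2Vec2ForCTC` + `Trainer` recipe from the XLSR fine-tuning week; the tokenizer, vocabulary, data collator, and `Trainer` plumbing are omitted, and the step counts are illustrative:

```python
from transformers import TrainingArguments, Wav2Vec2ForCTC

# Dropout and masking values as listed in the card; the CTC head's vocab_size
# would come from the tokenizer you build, which is omitted here.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    activation_dropout=0.05,
    attention_dropout=0.1,
    hidden_dropout=0.05,
    feat_proj_dropout=0.05,
    mask_time_prob=0.08,
    layerdrop=0.05,
)

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-53-eu",   # hypothetical local directory
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    learning_rate=2.5e-4,
    num_train_epochs=42,  # the card trains in two runs of 22 and 20 epochs
    fp16=True,
)
```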
pcuenq/wav2vec2-large-xlsr-53-es
pcuenq
2021-03-28T19:06:18Z
6
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "es", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: es
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Spanish by pcuenq
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice es
      type: common_voice
      args: es
    metrics:
    - name: Test WER
      type: wer
      value: 10.50
---

# Wav2Vec2-Large-XLSR-53-Spanish

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Spanish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "es", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Spanish test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "es", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-es")
model.to("cuda")

## Text pre-processing

chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]'
chars_to_ignore_pattern = re.compile(chars_to_ignore_regex)

def remove_special_characters(batch):
    batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " "
    return batch

def replace_diacritics(batch):
    sentence = batch["sentence"]
    sentence = re.sub('ì', 'í', sentence)
    sentence = re.sub('ù', 'ú', sentence)
    sentence = re.sub('ò', 'ó', sentence)
    sentence = re.sub('à', 'á', sentence)
    batch["sentence"] = sentence
    return batch

def replace_additional(batch):
    sentence = batch["sentence"]
    sentence = re.sub('ã', 'a', sentence)   # Portuguese, as in São Paulo
    sentence = re.sub('ō', 'o', sentence)   # Japanese
    sentence = re.sub('ê', 'e', sentence)   # Português
    batch["sentence"] = sentence
    return batch

## Audio pre-processing
# I tried to perform the resampling using a `torchaudio` `Resampler` transform,
# but found that the process deadlocked when using multiple processes.
# Perhaps my torchaudio is using the wrong sox library under the hood, I'm not sure.
# Fortunately, `librosa` seems to work fine, so that's what I'll use for now.
import librosa
def speech_file_to_array_fn(batch):
    speech_array, sample_rate = torchaudio.load(batch["path"])
    batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000)
    return batch

# One-pass mapping function
# Text transformation and audio resampling
def cv_prepare(batch):
    batch = remove_special_characters(batch)
    batch = replace_diacritics(batch)
    batch = replace_additional(batch)
    batch = speech_file_to_array_fn(batch)
    return batch

# Number of CPUs or None
num_proc = 16
test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

# WER Metric computation
# `wer.compute` crashes on my computer with more than ~10000 samples.
# Until I confirm on a different one, I created a "chunked" version of the computation.
# It gives the same results as `wer.compute` for smaller datasets.
import jiwer

def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)

print("WER: {:.2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
#print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 10.50 %

## Text processing

The Common Voice `es` dataset has a lot of characters that don't belong to the Spanish language, even after discarding separators and punctuation. I made some translations and discarded most of the extraneous characters.

I decided to keep all the Spanish language diacritics. This is a difficult decision. Sometimes diacritics are added just because of orthography rules and don't alter the meaning of the word; in other cases, however, they carry meaning, as they disambiguate among different senses. A better WER score would surely have been achieved using just the non-accented characters, and the resulting text would still be understood by Spanish speakers. Nevertheless, I think keeping them is "more correct".

All the rules I applied are shown in the evaluation script.

## Training

The Common Voice `train` and `validation` datasets were used for training. For dataset handling reasons, I initially split `train` + `validation` into 10% splits so I could see progress earlier and react if needed.

* I trained for 30 epochs on the first split only, using similar values as the ones proposed by Patrick in his demo notebook. I used a batch size of 24 with 2 gradient accumulation steps. This gave a WER of about 16.3% on the full test set.
* I then trained the resulting model on the 9 remaining splits, for 3 epochs each, but with a faster warmup of 75 steps.
* Next, I trained 3 epochs on each of the 10 splits using a smaller learning rate of `1e-4`. A warmup of 75 steps was used in this case too. The final model had a WER of about 11.7%.
* By this time we had already figured out the reason for the initial delay in training time, and I decided to use the full dataset for training. However, in my tests I had seen that varying the learning rate seemed to work well, so I wanted to replicate that. I selected a cosine schedule with hard restarts, a reference learning rate of `3e-5` and 10 epochs. I configured the cosine schedule to have 10 cycles too, and used no warmup. This produced a WER of ~10.5% (see the scheduler sketch after this card).

## Other things I tried

* Starting from the same fine-tuned model, I compared a constant lr of 1e-4 against a linear schedule with warmup. The linear schedule worked better (11.85 vs 12.72 WER%).
* I tried to use a Spanish model to improve a Basque one. I transformed the text to make the orthography more similar to the target language, but the Basque model did not improve.
* Label smoothing did not work.

## Issues and other technical challenges

I had previously used the `transformers` library as an end user, just to try Bert on some tasks, but this is the first time I have needed to look into the code.

* The `Datasets` abstraction is great because, being based on memory-mapped files, it allows arbitrarily-sized datasets to be processed. However, it is important to understand its limitations and trade-offs. I found caching convenient, but disk usage explodes fast. I keep the datasets for my current projects on a fast 1 TB SSD, and a couple of times I ran out of space. I had to understand how cache files are stored and learn when it's best to disable caching and manually save when you need to. I found that data exploration is better suited for smaller datasets or sampled ones, but actual processing is most efficient when you have identified the transformations you need and apply them in a single `map` operation.
* There was a noticeable delay before training started. Fortunately, we found the reason why, discussed it in Slack and the forums and created a workaround.
* The WER metric crashed on large datasets. I evaluated on a small sample (also, it's faster) and wrote an accumulative version of WER that runs in fixed memory. I'd like to verify whether this change makes sense for use inside the training loop.
* `torchaudio` deadlocks when using multiple processes. `librosa` works fine. To be investigated.
* When using `num_proc` inside a notebook, I could not see progress bars. This is surely some permissions issue on my computer. I still need to figure it out.
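A minimal sketch of the cosine-with-hard-restarts schedule described in the training notes above, assuming the standard `transformers` optimization helper; the optimizer parameters and total step count are illustrative placeholders, not values from the card:

```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

# Illustrative parameter source; any torch optimizer works here.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=3e-5)  # reference learning rate from the card

num_training_steps = 10_000  # assumed total; the card trains for 10 epochs
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,                    # the card uses no warmup
    num_training_steps=num_training_steps,
    num_cycles=10,                         # 10 restart cycles, matching the card
)
```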
vasudevgupta/mbart-summarizer-interiit
vasudevgupta
2021-03-28T17:49:15Z
10
0
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
This model was trained as part of the **InterIIT'21 competition**, on the dataset provided by Bridgei2i. It performs multilingual (Hindi, English, Hinglish) many-to-one summarization and generates summaries in English regardless of the input language.

| Rouge-L | Sacrebleu | Headline Similarity (using sentence-transformers) |
|-----------------------|-----------|---------------------------------------------------|
| p=0.46 r=0.49 f1=0.52 | 23.46 | 0.75 |

mBART is initialized from **facebook/mbart-large-cc25** and trained with the strategy described in our [GitHub repository](https://github.com/vasudevgupta7/Bridgei2i-Winning-Solutions).
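The card ships no usage snippet. A minimal generation sketch, assuming the checkpoint loads as a standard mBART seq2seq model with the usual mbart-large-cc25 tokenizer, and that forcing English (`en_XX`) as the output language is appropriate; the generation settings are illustrative:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

model_name = "vasudevgupta/mbart-summarizer-interiit"
tokenizer = MBartTokenizer.from_pretrained(model_name)
model = MBartForConditionalGeneration.from_pretrained(model_name)

article = "..."  # Hindi, English, or Hinglish news text goes here

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    **inputs,
    num_beams=4,
    max_length=64,
    # Assumption: force English output; drop this if the checkpoint
    # already defaults to English generation.
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```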
dispenst/hgfytgfg
dispenst
2021-03-28T15:32:14Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
nithinholla/wav2vec2-large-xlsr-53-dutch
nithinholla
2021-03-28T10:48:00Z
15
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "nl", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: nl datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Dutch XLSR Wav2Vec2 Large 53 results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice nl type: common_voice args: nl metrics: - name: Test WER type: wer value: 21.59 --- # Wav2Vec2-Large-XLSR-53-Dutch Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dutch using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "nl", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch") model = Wav2Vec2ForCTC.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Dutch test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "nl", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch") model = Wav2Vec2ForCTC.from_pretrained("nithinholla/wav2vec2-large-xlsr-53-dutch") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\'\�\(\)\&\–\—\=\…]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("´", "'").replace("’", "'") speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 21.59 % ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://github.com/Nithin-Holla/wav2vec2-sprint/blob/main/train_nl.sh).
shahukareem/wav2vec2-large-xlsr-53-dhivehi
shahukareem
2021-03-28T08:47:31Z
78
1
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "dv", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: dv datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Shahu Kareem XLSR Wav2Vec2 Large 53 Dhivehi results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice dv type: common_voice args: dv metrics: - name: Test WER type: wer value: 32.85 --- # Wav2Vec2-Large-XLSR-53-Dhivehi Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dhivehi using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "dv", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi") model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Dhivehi test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "dv", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi") model = Wav2Vec2ForCTC.from_pretrained("shahukareem/wav2vec2-large-xlsr-53-dhivehi") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\،\.\؟\!\'\"\–\’]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 32.85% ## Training The Common Voice `train` and `validation` datasets were used for training. ## Example predictions ```-- reference: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން predicted: ކަރަންޓް ވައިރުކޮށް ބޮކި ހަރުކުރުން -- reference: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިށްކޮށްލެވެ predicted: ދެން އެކުދިންނާ ދިމާއަށް އަތް ދިއްކޮށްލެވެ ް -- reference: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައާރަފްވި predicted: ރަކި ހިނިތުންވުމަކާއެކު އޭނާ އަމިއްލައަށް ތައަރަފްވި -- reference: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރޫނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް predicted: ކޮޓަރީގެ ކުޑަދޮރުން ބޭރު ބަލަހައްޓައިގެން އިން ރނާގެ މޫނުމަތިން ފާޅުވަމުން ދިޔައީ ކަންބޮޑުވުމުގެ އަސަރުތައް -- ```
simonsr/wav2vec2-large-xlsr-dutch
simonsr
2021-03-26T13:53:35Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "nl", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: nl datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: simonsr wav2vec2-large-xlsr-dutch results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice nl type: common_voice args: nl metrics: - name: Test WER type: wer value: 38.74 --- # Wav2Vec2-Large-XLSR-53-Dutch Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Dutch using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "nl", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("simonsr/wav2vec2-large-xlsr-dutch") model = Wav2Vec2ForCTC.from_pretrained("simonsr/wav2vec2-large-xlsr-dutch") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Dutch test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import unidecode import re test_dataset = load_dataset("common_voice", "nl", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("simonsr/wav2vec2-large-xlsr-dutch") model = Wav2Vec2ForCTC.from_pretrained("simonsr/wav2vec2-large-xlsr-dutch") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\(\)\=\´\–\&\…\—\’]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = unidecode.unidecode(batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 38.74 % ## Training The Common Voice `train`, `validation`, and ... datasets were used for training. The script used for training can be found [here](...).
trueto/medalbert-base-wwm-chinese
trueto
2021-03-26T05:33:51Z
6
0
transformers
[ "transformers", "pytorch", "albert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# [medbert](https://github.com/trueto/medbert) This project open-sources the models from the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing". ## Evaluation Benchmarks We constructed a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER), a Chinese medical question-question matching dataset (CMedQQ) and a Chinese clinical text classification dataset (CCTC). | **Dataset** | **Train** | **Dev** | **Test** | **Task type** | **Corpus source** | | ---- | ---- | ---- |---- |---- |:----:| | CEMRNER | 965 | 138 | 276 | Named entity recognition | Yiducloud (医渡云) | | CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 | | CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare (平安医疗) | | CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 | ## Released Models The MedBERT and MedAlbert models were obtained by pre-training on a 650-million-character corpus of Chinese clinical natural language text, based on the BERT and ALBERT models respectively. ## Performance Performance of each model under the same experimental environment, with identical training parameters and scripts: | **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** | | :---- | :----: | :----: | :----: | :----: | | [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% | | [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% | | [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% | | MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** | |MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% | |MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% | |- | - | - | - | - | | [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% | | MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% | |MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** | ## Citation ``` 杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03. ```
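## Usage The card ships no loading snippet; below is a minimal sketch, assuming the checkpoint resolves with the standard Auto classes (the clinical sentence is an arbitrary placeholder; `BertTokenizer` is a common fallback for Chinese ALBERT checkpoints if `AutoTokenizer` fails to resolve).

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("trueto/medalbert-base-wwm-chinese")
model = AutoModel.from_pretrained("trueto/medalbert-base-wwm-chinese")

# Encode an arbitrary clinical sentence and inspect the contextual embeddings.
inputs = tokenizer("患者主诉头痛三天。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```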
DarshanDeshpande/marathi-distilbert
DarshanDeshpande
2021-03-23T08:20:29Z
8
3
transformers
[ "transformers", "pytorch", "tf", "distilbert", "fill-mask", "mr", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: - mr tags: - fill-mask license: apache-2.0 datasets: - Oscar Corpus, News, Stories widget: - text: "हा खरोखर चांगला [MASK] आहे." --- # Marathi DistilBERT ## Model description This model is an adaptation of DistilBERT (Victor Sanh et al., 2019) for the Marathi language. This version of Marathi-DistilBERT is trained from scratch on approximately 11.2 million sentences. ``` DISCLAIMER This model has not been thoroughly tested and may contain biased opinions or inappropriate language. User discretion is advised ``` ## Training data The training data has been extracted from a variety of sources, mainly including: 1. Oscar Corpus 2. Marathi Newspapers 3. Marathi storybooks and articles The data is cleaned by removing all languages other than Marathi, while preserving common punctuation. ## Training procedure The model is trained from scratch using an Adam optimizer with a learning rate of 1e-4 and default β1 and β2 values of 0.9 and 0.999 respectively, with a total batch size of 256 on a v3-8 TPU and a mask probability of 15%. ## Example ```python from transformers import pipeline fill_mask = pipeline( "fill-mask", model="DarshanDeshpande/marathi-distilbert", tokenizer="DarshanDeshpande/marathi-distilbert", ) fill_mask("हा खरोखर चांगला [MASK] आहे.") ``` ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <h3>Authors </h3> <h5>1. Darshan Deshpande: <a href="https://github.com/DarshanDeshpande">GitHub</a>, <a href="https://www.linkedin.com/in/darshan-deshpande/">LinkedIn</a></h5> <h5>2. Harshavardhan Abichandani: <a href="https://github.com/Baras64">GitHub</a>, <a href="http://www.linkedin.com/in/harsh-abhi">LinkedIn</a></h5>
HooshvareLab/albert-fa-zwnj-base-v2-ner
HooshvareLab
2021-03-21T14:25:09Z
64
0
transformers
[ "transformers", "pytorch", "tf", "albert", "token-classification", "fa", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- language: fa --- # AlbertNER This model was fine-tuned for the Named Entity Recognition (NER) task on a mixed NER dataset collected from [ARMAN](https://github.com/HaniehP/PersianNER), [PEYMA](http://nsurl.org/2019-2/tasks/task-7-named-entity-recognition-ner-for-farsi/), and [WikiANN](https://elisa-ie.github.io/wikiann/) that covers ten types of entities: - Date (DAT) - Event (EVE) - Facility (FAC) - Location (LOC) - Money (MON) - Organization (ORG) - Percent (PCT) - Person (PER) - Product (PRO) - Time (TIM) ## Dataset Information | | Records | B-DAT | B-EVE | B-FAC | B-LOC | B-MON | B-ORG | B-PCT | B-PER | B-PRO | B-TIM | I-DAT | I-EVE | I-FAC | I-LOC | I-MON | I-ORG | I-PCT | I-PER | I-PRO | I-TIM | |:------|----------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:|--------:| | Train | 29133 | 1423 | 1487 | 1400 | 13919 | 417 | 15926 | 355 | 12347 | 1855 | 150 | 1947 | 5018 | 2421 | 4118 | 1059 | 19579 | 573 | 7699 | 1914 | 332 | | Valid | 5142 | 267 | 253 | 250 | 2362 | 100 | 2651 | 64 | 2173 | 317 | 19 | 373 | 799 | 387 | 717 | 270 | 3260 | 101 | 1382 | 303 | 35 | | Test | 6049 | 407 | 256 | 248 | 2886 | 98 | 3216 | 94 | 2646 | 318 | 43 | 568 | 888 | 408 | 858 | 263 | 3967 | 141 | 1707 | 296 | 78 | ## Evaluation The following tables summarize the scores obtained by the model overall and per each class. **Overall** | Model | accuracy | precision | recall | f1 | |:----------:|:--------:|:---------:|:--------:|:--------:| | Albert | 0.993405 | 0.938907 | 0.943966 | 0.941429 | **Per entities** | | number | precision | recall | f1 | |:---: |:------: |:---------: |:--------: |:--------: | | DAT | 407 | 0.820639 | 0.820639 | 0.820639 | | EVE | 256 | 0.936803 | 0.984375 | 0.960000 | | FAC | 248 | 0.925373 | 1.000000 | 0.961240 | | LOC | 2884 | 0.960818 | 0.960818 | 0.960818 | | MON | 98 | 0.913978 | 0.867347 | 0.890052 | | ORG | 3216 | 0.920892 | 0.937500 | 0.929122 | | PCT | 94 | 0.946809 | 0.946809 | 0.946809 | | PER | 2644 | 0.960000 | 0.944024 | 0.951945 | | PRO | 318 | 0.942943 | 0.987421 | 0.964670 | | TIM | 43 | 0.780488 | 0.744186 | 0.761905 | ## How To Use You can use this model with the Transformers pipeline for NER. ### Installing requirements ```bash pip install sentencepiece pip install transformers ``` ### How to predict using pipeline ```python from transformers import AutoTokenizer from transformers import AutoModelForTokenClassification # for pytorch from transformers import TFAutoModelForTokenClassification # for tensorflow from transformers import pipeline model_name_or_path = "HooshvareLab/albert-fa-zwnj-base-v2-ner" # Albert tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForTokenClassification.from_pretrained(model_name_or_path) # Pytorch # model = TFAutoModelForTokenClassification.from_pretrained(model_name_or_path) # Tensorflow nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "در سال ۲۰۱۳ درگذشت و آندرتیکر و کین برای او مراسم یادبود گرفتند." ner_results = nlp(example) print(ner_results) ``` ## Questions? Post a Github issue on the [ParsNER Issues](https://github.com/hooshvare/parsner/issues) repo.
sebastian-hofstaetter/colbert-distilbert-margin_mse-T2-msmarco
sebastian-hofstaetter
2021-03-18T10:35:12Z
61
14
transformers
[ "transformers", "pytorch", "ColBERT", "dpr", "dense-passage-retrieval", "knowledge-distillation", "en", "dataset:ms_marco", "arxiv:2004.12832", "arxiv:2010.02666", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: "en" tags: - dpr - dense-passage-retrieval - knowledge-distillation datasets: - ms_marco --- # Margin-MSE Trained ColBERT We provide a retrieval-trained DistilBERT-based ColBERT model (https://arxiv.org/pdf/2004.12832.pdf). Our model is trained with Margin-MSE using a 3 teacher BERT_Cat (concatenated BERT scoring) ensemble on MSMARCO-Passage. This instance can be used to **re-rank a candidate set** or **directly for a vector index based dense retrieval**. The architecture is a 6-layer DistilBERT, with an additional single linear layer at the end. If you want to know more about our simple, yet effective knowledge distillation method for efficient information retrieval models for a variety of student architectures that is used for this model instance, check out our paper: https://arxiv.org/abs/2010.02666 🎉 For more information, training data, source code, and a minimal usage example please visit: https://github.com/sebastian-hofstaetter/neural-ranking-kd ## Configuration - fp16 trained, so fp16 inference shouldn't be a problem - We use no compression: 768 dim output vectors (better suited for re-ranking, or storage for smaller collections, MSMARCO gets to ~1TB vector storage with fp16 ... ups) - Query [MASK] augmentation = 8x regardless of batch-size (needs to be added before the model, see the usage example in GitHub repo for more) ## Model Code ````python from transformers import AutoTokenizer,AutoModel, PreTrainedModel,PretrainedConfig from typing import Dict import torch class ColBERTConfig(PretrainedConfig): model_type = "ColBERT" bert_model: str compression_dim: int = 768 dropout: float = 0.0 return_vecs: bool = False trainable: bool = True class ColBERT(PreTrainedModel): """ ColBERT model from: https://arxiv.org/pdf/2004.12832.pdf We use a dot-product instead of cosine per term (slightly better) """ config_class = ColBERTConfig base_model_prefix = "bert_model" def __init__(self, cfg) -> None: super().__init__(cfg) self.bert_model = AutoModel.from_pretrained(cfg.bert_model) for p in self.bert_model.parameters(): p.requires_grad = cfg.trainable self.compressor = torch.nn.Linear(self.bert_model.config.hidden_size, cfg.compression_dim) def forward(self, query: Dict[str, torch.LongTensor], document: Dict[str, torch.LongTensor]): query_vecs = self.forward_representation(query) document_vecs = self.forward_representation(document) score = self.forward_aggregation(query_vecs,document_vecs,query["attention_mask"],document["attention_mask"]) return score def forward_representation(self, tokens, sequence_type=None) -> torch.Tensor: vecs = self.bert_model(**tokens)[0] # assuming a distilbert model here vecs = self.compressor(vecs) # if encoding only, zero-out the mask values so we can compress storage if sequence_type == "doc_encode" or sequence_type == "query_encode": vecs = vecs * tokens["tokens"]["mask"].unsqueeze(-1) return vecs def forward_aggregation(self,query_vecs, document_vecs,query_mask,document_mask): # create initial term-x-term scores (dot-product) score = torch.bmm(query_vecs, document_vecs.transpose(2,1)) # mask out padding on the doc dimension (mask by -10000, because max should not select those, setting it to 0 might select them) exp_mask = document_mask.bool().unsqueeze(1).expand(-1,score.shape[1],-1) score[~exp_mask] = - 10000 # max pooling over document dimension score = score.max(-1).values # mask out padding query values score[~(query_mask.bool())] = 0 # sum over query values score = score.sum(-1) return score tokenizer = 
AutoTokenizer.from_pretrained("distilbert-base-uncased") # honestly not sure if that is the best way to go, but it works :) model = ColBERT.from_pretrained("sebastian-hofstaetter/colbert-distilbert-margin_mse-T2-msmarco") ```` ## Effectiveness on MSMARCO Passage & TREC Deep Learning '19 We trained our model on the MSMARCO standard ("small"-400K query) training triples with knowledge distillation with a batch size of 32 on a single consumer-grade GPU (11GB memory). For re-ranking we used the top-1000 BM25 results. ### MSMARCO-DEV Here, we use the larger 49K query DEV set (same range as the smaller 7K DEV set, minimal changes possible) | | MRR@10 | NDCG@10 | |----------------------------------|--------|---------| | BM25 | .194 | .241 | | **Margin-MSE ColBERT** (Re-ranking) | .375 | .436 | ### TREC-DL'19 For MRR we use the recommended binarization point of the graded relevance of 2. This might skew the results when compared to other binarization point numbers. | | MRR@10 | NDCG@10 | |----------------------------------|--------|---------| | BM25 | .689 | .501 | | **Margin-MSE ColBERT** (Re-ranking) | .878 | .744 | For more metrics, baselines, info and analysis, please see the paper: https://arxiv.org/abs/2010.02666 ## Limitations & Bias - The model inherits social biases from both DistilBERT and MSMARCO. - The model is only trained on relatively short passages of MSMARCO (avg. 60 words length), so it might struggle with longer text. ## Citation If you use our model checkpoint please cite our work as: ``` @misc{hofstaetter2020_crossarchitecture_kd, title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation}, author={Sebastian Hofst{\"a}tter and Sophia Althammer and Michael Schr{\"o}der and Mete Sertkan and Allan Hanbury}, year={2020}, eprint={2010.02666}, archivePrefix={arXiv}, primaryClass={cs.IR} } ```
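## Minimal Scoring Sketch Continuing from the model code above, a minimal re-ranking sketch (illustrative only: the linked GitHub repo shows the full pipeline, including the 8x query `[MASK]` augmentation that this sketch omits).

```python
# Score one query against one candidate passage (re-ranking style).
query = tokenizer("how many people live in new york city", return_tensors="pt")
passage = tokenizer("New York City is the most populous city in the United States.",
                    return_tensors="pt")

with torch.no_grad():
    score = model(query, passage)  # higher score = more relevant
print(score)
```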
acul3/xlsr_indonesia
acul3
2021-03-18T09:53:35Z
7
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "xlsr-fine-tuning-week", "id", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: id datasets: - common_voice tags: - speech - audio - automatic-speech-recognition - xlsr-fine-tuning-week license: apache-2.0 --- ## Evaluation on Common Voice ID Test ```python import torchaudio from datasets import load_dataset, load_metric from transformers import ( Wav2Vec2ForCTC, Wav2Vec2Processor, ) import torch import re import sys model_name = "munggok/xlsr_indonesia" device = "cuda" chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605 model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device) processor = Wav2Vec2Processor.from_pretrained(model_name) ds = load_dataset("common_voice", "id", split="test", data_dir="./cv-corpus-6.1-2020-12-11") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ds = ds.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt") input_values = features.input_values.to(device) attention_mask = features.attention_mask.to(device) with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) batch["target"] = batch["sentence"] return batch result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys())) wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` **Result**: 25.7 %
HooshvareLab/albert-fa-zwnj-base-v2
HooshvareLab
2021-03-16T16:36:38Z
340
3
transformers
[ "transformers", "pytorch", "tf", "albert", "fill-mask", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: fa license: apache-2.0 --- # ALBERT-Persian A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language > میتونی بهش بگی برت_کوچولو > Call it little_berty ### BibTeX entry and citation info Please cite in your publication as the following: ```bibtex @misc{ALBERTPersian, author = {Hooshvare Team}, title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}}, } ``` ## Questions? Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
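## Usage The card does not include a usage snippet; here is a minimal fill-mask sketch, assuming the standard `[MASK]` token (check `tokenizer.mask_token` if it differs) and an arbitrary sample sentence.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="HooshvareLab/albert-fa-zwnj-base-v2")
# "Tehran is the capital of [MASK]."
print(fill_mask("تهران پایتخت [MASK] است."))
```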
HooshvareLab/distilbert-fa-zwnj-base
HooshvareLab
2021-03-16T16:30:29Z
322
1
transformers
[ "transformers", "pytorch", "tf", "distilbert", "fill-mask", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- language: fa license: apache-2.0 --- # DistilBERT This model handles the zero-width non-joiner character in Persian writing. It was also trained on a new multi-type corpus with a new vocabulary set. ## Questions? Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
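## Usage No usage snippet ships with the card; below is a minimal masked-language-model sketch, using an arbitrary Persian sample sentence.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/distilbert-fa-zwnj-base")
model = AutoModelForMaskedLM.from_pretrained("HooshvareLab/distilbert-fa-zwnj-base")

# "Tehran is the capital of [MASK]."
text = f"تهران پایتخت {tokenizer.mask_token} است."
inputs = tokenizer(text, return_tensors="pt")

# Locate the mask position and read off the top-5 predictions.
mask_positions = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
logits = model(**inputs).logits
top_ids = torch.topk(logits[0, mask_positions[0]], k=5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```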
airesearch/xlm-roberta-base-finetuned
airesearch
2021-03-16T09:23:27Z
12
0
transformers
[ "transformers", "xlm-roberta", "fill-mask", "arxiv:1911.02116", "arxiv:2101.09635", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
# Fine-tuned `xlm-roberta-base` model on Thai sequence and token classification datasets <br> Fine-tuned XLM Roberta BASE model on Thai sequence and token classification datasets The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers). <br> ## Model description <br> We use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](https://arxiv.org/abs/1911.02116). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/xlm-roberta-base) <br> ## Intended uses & limitations <br> You can use the finetuned models for multiclass/multilabel text classification and token classification tasks. <br> **Multiclass text classification** - `wisesight_sentiment` 4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets. - `wongnai_reviews` Users' review rating classification task (scale ranging from 1 to 5) - `generated_reviews_enth` : (`review_star` as label) Generated users' review rating classification task (scale ranging from 1 to 5). **Multilabel text classification** - `prachathai67k` Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k). **Token classification** - `thainer` Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer). - `lst20` : Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20). <br> ## How to use <br> The example notebook demonstrating how to use the finetuned model for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko) <br> **BibTeX entry and citation info** ``` @misc{lowphansirikul2021wangchanberta, title={WangchanBERTa: Pretraining transformer-based Thai Language Models}, author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong}, year={2021}, eprint={2101.09635}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
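A hypothetical inference sketch follows (assumptions: this repo id hosts usable fine-tuned weights and the task is single-label classification; the exact per-task checkpoint and revision are documented in the linked repository and notebook).

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="airesearch/xlm-roberta-base-finetuned",
)
# "The food at this restaurant is very tasty." (Thai)
print(classifier("อาหารร้านนี้อร่อยมาก"))
```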
cemigo/cemigo-test-model
cemigo
2021-03-15T18:09:36Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
tags: - array - of - tags license: "any valid license identifier"
facebook/rag-sequence-nq
facebook
2021-03-12T11:04:28Z
24,970
41
transformers
[ "transformers", "pytorch", "tf", "rag", "en", "dataset:wiki_dpr", "arxiv:2005.11401", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - wiki_dpr thumbnail: https://huggingface.co/front/thumbnails/facebook.png --- ## RAG This is the RAG-Sequence Model of the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/pdf/2005.11401.pdf) by Patrick Lewis, Ethan Perez, Aleksandra Piktus et al. The model is an *uncased* model, which means that capital letters are simply converted to lower-case letters. The model consists of a *question_encoder*, *retriever* and a *generator*. The retriever extracts relevant passages from the *wiki_dpr* `train` dataset, which is linked above. The question_encoder and retriever are based on `facebook/dpr-question_encoder-single-nq-base` and `facebook/bart-large`, which were jointly finetuned on the *wiki_dpr* QA dataset in an end-to-end fashion. ## Usage: **Note**: In the usage example below only the *dummy* retriever of *wiki_dpr* is used because the complete *legacy* index requires over 75 GB of RAM. The model can generate answers to any factoid question as follows: ```python from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True) model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("how many countries are in europe", return_tensors="pt") generated = model.generate(input_ids=input_dict["input_ids"]) print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0]) # should give 54 => google says either 44 or 51 ```
gagan3012/keytotext
gagan3012
2021-03-11T20:23:32Z
4
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# keytotext The idea is to build a model that takes keywords as input and generates sentences as output. ### Model: Two models have been built: - Using T5-base, size = 850 MB, available here: https://huggingface.co/gagan3012/keytotext - Using T5-small, size = 230 MB, available here: https://huggingface.co/gagan3012/keytotext-small #### Usage: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("gagan3012/keytotext-small") model = AutoModelWithLMHead.from_pretrained("gagan3012/keytotext-small") ``` ### Demo: [![Streamlit App](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://share.streamlit.io/gagan3012/keytotext/app.py) https://share.streamlit.io/gagan3012/keytotext/app.py ![image](https://user-images.githubusercontent.com/49101362/110660053-3b20fe80-81d4-11eb-9275-ba402134e8d9.png) ### Example: ['India', 'Wedding'] -> We are celebrating today in New Delhi with three wedding anniversary parties.
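Continuing from the loading code above, a generation sketch; note the exact keyword formatting the model expects is not documented in this card, so joining the keywords with spaces is an assumption.

```python
# Assumed input format: keywords joined by spaces.
input_ids = tokenizer.encode("India Wedding", return_tensors="pt")
outputs = model.generate(input_ids, max_length=64, num_beams=2, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```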
nsi319/legal-pegasus
nsi319
2021-03-11T08:50:52Z
1,042
12
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "summarization", "en", "license:mit", "autotrain_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: en tags: summarization metrics: - rouge - precision inference: false license: mit --- ## PEGASUS for legal document summarization **legal-pegasus** is a finetuned version of ([**google/pegasus-cnn_dailymail**](https://huggingface.co/google/pegasus-cnn_dailymail)) for the **legal domain**, trained to perform the **abstractive summarization** task. The maximum length of the input sequence is 1024 tokens. ## Training data This model was trained on the [**sec-litigation-releases**](https://www.sec.gov/litigation/litreleases.htm) dataset, consisting of more than 2,700 litigation releases and complaints. ## How to use ```Python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-pegasus") model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-pegasus") text = """On March 5, 2021, the Securities and Exchange Commission charged AT&T, Inc. with repeatedly violating Regulation FD, and three of its Investor Relations executives with aiding and abetting AT&T's violations, by selectively disclosing material nonpublic information to research analysts. According to the SEC's complaint, AT&T learned in March 2016 that a steeper-than-expected decline in its first quarter smartphone sales would cause AT&T's revenue to fall short of analysts' estimates for the quarter. The complaint alleges that to avoid falling short of the consensus revenue estimate for the third consecutive quarter, AT&T Investor Relations executives Christopher Womack, Michael Black, and Kent Evans made private, one-on-one phone calls to analysts at approximately 20 separate firms. On these calls, the AT&T executives allegedly disclosed AT&T's internal smartphone sales data and the impact of that data on internal revenue metrics, despite the fact that internal documents specifically informed Investor Relations personnel that AT&T's revenue and sales of smartphones were types of information generally considered "material" to AT&T investors, and therefore prohibited from selective disclosure under Regulation FD. The complaint further alleges that as a result of what they were told on these calls, the analysts substantially reduced their revenue forecasts, leading to the overall consensus revenue estimate falling to just below the level that AT&T ultimately reported to the public on April 26, 2016. The SEC's complaint, filed in federal district court in Manhattan, charges AT&T with violations of the disclosure provisions of Section 13(a) of the Securities Exchange Act of 1934 and Regulation FD thereunder, and charges Womack, Evans and Black with aiding and abetting these violations. The complaint seeks permanent injunctive relief and civil monetary penalties against each defendant. The SEC's investigation was conducted by George N. Stepaniuk, Thomas Peirce, and David Zetlin-Jones of the SEC's New York Regional Office. The SEC's litigation will be conducted by Alexander M. Vasilescu, Victor Suthammanont, and Mr. Zetlin-Jones. The case is being supervised by Sanjay Wadhwa.""" input_tokenized = tokenizer.encode(text, return_tensors='pt',max_length=1024,truncation=True) summary_ids = model.generate(input_tokenized, num_beams=9, no_repeat_ngram_size=3, length_penalty=2.0, min_length=150, max_length=250, early_stopping=True) summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0] ### Summary Output # The Securities and Exchange Commission today charged AT&T, Inc. 
and three of its Investor Relations executives with aiding and abetting the company's violations of the antifraud provisions of Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. According to the SEC's complaint, the company learned in March 2016 that a steeper-than-expected decline in its first quarter smartphone sales would cause its revenue to fall short of analysts' estimates for the quarter. The complaint alleges that to avoid falling short of the consensus revenue estimate for the third consecutive quarter, the executives made private, one-on-one phone calls to analysts at approximately 20 separate firms. On these calls, the SEC alleges that Christopher Womack, Michael Black, and Kent Evans allegedly disclosed internal smartphone sales data and the impact of that data on internal revenue metrics. The SEC further alleges that as a result of what they were told, the analysts substantially reduced their revenue forecasts, leading to the overall consensus Revenue Estimate falling to just below the level that AT&t ultimately reported to the public on April 26, 2016. The SEC is seeking permanent injunctive relief and civil monetary penalties against each defendant. ``` ## Evaluation results | Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision | |:-----------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:| | legal-pegasus | **57.39** | **62.97** | **26.85** | **28.42** | **30.91** | **33.22** | | pegasus-cnn_dailymail | 43.16 | 45.68 | 13.75 | 14.56 | 18.82 | 20.07 |
navteca/electra-base-squad2
navteca
2021-03-10T15:30:09Z
5
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "en", "dataset:squad_v2", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- datasets: - squad_v2 language: en license: mit pipeline_tag: question-answering tags: - electra - question-answering --- # Electra base model for QA (SQuAD 2.0) This model uses [electra-base](https://huggingface.co/google/electra-base-discriminator). ## Training Data The model has been trained on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. It can be used for the question answering task. ## Usage and Performance The trained model can be used like this: ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline # Load model & tokenizer electra_model = AutoModelForQuestionAnswering.from_pretrained('navteca/electra-base-squad2') electra_tokenizer = AutoTokenizer.from_pretrained('navteca/electra-base-squad2') # Get predictions nlp = pipeline('question-answering', model=electra_model, tokenizer=electra_tokenizer) result = nlp({ 'question': 'How many people live in Berlin?', 'context': 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.' }) print(result) #{ # "answer": "3,520,031" # "end": 36, # "score": 0.99983448, # "start": 27, #} ```
yjernite/bart_eli5
yjernite
2021-03-09T22:31:11Z
359
11
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "en", "dataset:eli5", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en license: apache-2.0 datasets: - eli5 --- ## BART ELI5 Read the article at https://yjernite.github.io/lfqa.html and try the demo at https://huggingface.co/qa/
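The card itself carries no snippet; here is a minimal generation sketch, assuming the `question: ... context: ...` input concatenation described in the linked article (the support document is a placeholder you must supply).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yjernite/bart_eli5")
model = AutoModelForSeq2SeqLM.from_pretrained("yjernite/bart_eli5")

question = "Why does water boil faster at high altitude?"
context = "..."  # placeholder: support passages retrieved for the question
input_ids = tokenizer(f"question: {question} context: {context}",
                      return_tensors="pt").input_ids
answer_ids = model.generate(input_ids, min_length=32, max_length=128, num_beams=4)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```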
wptoux/albert-chinese-large-qa
wptoux
2021-03-09T07:48:40Z
65
12
transformers
[ "transformers", "pytorch", "albert", "question-answering", "Question Answering", "zh", "dataset:webqa", "dataset:dureader", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: - zh tags: - Question Answering license: apache-2.0 datasets: - webqa - dureader --- # albert-chinese-large-qa ALBERT large QA model pretrained on the Baidu WebQA and Baidu DuReader datasets. ## Data source + Baidu WebQA 1.0 + Baidu DuReader ## Training Method We combined the two datasets and created a new dataset in SQuAD format, including 705,139 samples for training and 69,638 samples for validation. We fine-tuned the model based on the ALBERT Chinese large model. ## Hyperparams + learning_rate 1e-5 + max_seq_length 512 + max_query_length 50 + max_answer_length 300 + doc_stride 256 + num_train_epochs 2 + warmup_steps 1000 + per_gpu_train_batch_size 8 + gradient_accumulation_steps 3 + n_gpu 2 (Nvidia Tesla P100) ## Usage ``` from transformers import AutoModelForQuestionAnswering, BertTokenizer model = AutoModelForQuestionAnswering.from_pretrained('wptoux/albert-chinese-large-qa') tokenizer = BertTokenizer.from_pretrained('wptoux/albert-chinese-large-qa') ``` ***Important: use BertTokenizer*** ## More Info Please visit https://github.com/wptoux/albert-chinese-large-webqa for details.
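Continuing from the loading code above, a minimal extractive-QA sketch (the question/context pair is an arbitrary example).

```python
from transformers import pipeline

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
# Question: "What is the capital of China?"  Context: "Beijing is the capital of the PRC."
result = qa(question="中国的首都是哪里?", context="北京是中华人民共和国的首都。")
print(result["answer"], result["score"])
```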
Jade/bert_base_law
Jade
2021-03-08T06:59:50Z
0
0
null
[ "NLP", "LAW", "dataset:WIP", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: "zh_CN" thumbnail: "url to a thumbnail used in social sharing" tags: - NLP - LAW license: "MIT" datasets: - WIP metrics: - WIP ---
uasoyasser/eefdfgdg
uasoyasser
2021-03-05T15:37:12Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://teacher.desmos.com/activitybuilder/teacherguide/604249659240440d25a27d0c https://teacher.desmos.com/activitybuilder/teacherguide/604249a365ecd40d30b4ad18 https://teacher.desmos.com/activitybuilder/teacherguide/604249e2cfb0a20d51e13768 https://teacher.desmos.com/activitybuilder/teacherguide/60424a1c9240440d25a27e22 https://teacher.desmos.com/activitybuilder/teacherguide/60424a58cefbd00d5da96390 https://teacher.desmos.com/activitybuilder/teacherguide/60424a90229a7d0cfb807295 https://teacher.desmos.com/activitybuilder/teacherguide/60424ad532e0730c4bdcbbab https://teacher.desmos.com/activitybuilder/teacherguide/60424b0f1d780b0b7395f36d https://teacher.desmos.com/activitybuilder/teacherguide/60424c01534b110d262d4d46 https://teacher.desmos.com/activitybuilder/teacherguide/60424c47969a440d13c62ffb https://teacher.desmos.com/activitybuilder/teacherguide/60424cd7f17f6b0d4550c269 https://teacher.desmos.com/activitybuilder/teacherguide/60424d0dcfb0a20d51e13c97 https://teacher.desmos.com/activitybuilder/teacherguide/60424d5796540a0cf95ff215 https://teacher.desmos.com/activitybuilder/teacherguide/60424d9163a2220bc4c8f2be https://teacher.desmos.com/activitybuilder/teacherguide/60424e030d98a80d53856ab2 https://teacher.desmos.com/activitybuilder/teacherguide/60424e37ed488c0cfbbaab2f
yhavinga/mt5-base-cnn-nl
yhavinga
2021-03-05T07:48:08Z
8
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "dataset:cnn_dm_nl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- tags: - summarization language: - dutch datasets: - cnn_dm_nl widget: - text: "(CNN) Skywatchers in West-Noord-Amerika zijn in voor een traktatie: een bijna vijf minuten totale maansverduistering vanmorgen. Hier is hoe het zich ontvouwt:. Het begon om 3:16 a.m. Pacific Daylight Tijd, toen de maan begon te bewegen in de schaduw van de Aarde. Voor het volgende uur en 45 minuten, die schaduw zal bewegen over de maan en verzwolgen het om 4:58 a.m. Pacific Time. De totale verduistering zal slechts vier minuten en 43 seconden duren, en NASA zegt dat maakt het de kortste van de eeuw. Kijken live op NASA TV. Terwijl mensen ten westen van de Mississippi River zal het beste uitzicht hebben, ten minste een gedeeltelijke verduistering zal zichtbaar zijn over de hele natie. Maar zonsopgang zal de show te onderbreken op de Oostkust. Delen van Zuid-Amerika, India, China en China Een maansverduistering gebeurt wanneer de zon, de aarde en de maan een rechte lijn vormen in de ruimte, met de aarde in het midden. De zon schijnt op de Aarde en creëert een schaduw. Als de maan dieper in die schaduw beweegt, lijkt het donker te worden en lijkt zelfs een roodachtige kleur te zijn. Waarom rood? Omdat de atmosfeer van de Aarde het grootste deel van het blauwe licht filtert. Sommige mensen hebben het effect van de \"bloedmaan\" bijgenaamd. NASA zegt dat maansverduisteringen meestal ten minste twee keer per jaar plaatsvinden, maar deze verduistering is de derde in een reeks van vier op een rij, bekend als een \"tetrad.\" De eerste was op 15 april 2014. De tweede was in september 2014, de volgende is zaterdag en er zal er een meer zijn, op 28 september. Als je meer wilt weten over de verduistering, NASA astronoom Mitzi Adam. Deel uw foto's met CNN iReport." - text: "(CNN) Filipino's worden gewaarschuwd om op wacht te staan voor flash overstromingen en aardverschuivingen als tropische storm Maysak benaderde de Aziatische eiland natie zaterdag. Slechts een paar dagen geleden, Maysak kreeg super tyfoon status dankzij zijn aanhoudende 150 km/h winden. Het heeft sindsdien verloren veel stoom als het naar het westen in de Stille Oceaan heeft gedraaid. Het is nu geclassificeerd als een tropische storm, volgens de Filipijnse nationale weerdienst, die noemt het een andere naam, Chedeng. Het heeft stabiele winden van meer dan 70 km/h (115 km/h) en gusts tot 90 km/h vanaf 17.00 uur (5 uur ET) Zaterdag. Toch, dat betekent niet dat Maysak zal geen pak een wallop. Autoriteiten nam preventieve stappen om mensen veilig te houden zoals barring outdoor activiteiten zoals zwemmen, surfen, di. Gabriel Llave, een ramp ambtenaar, vertelde PNA dat toeristen die aankomen zaterdag in en rond de kustplaats van Aurora \"zal niet worden geaccepteerd door de eigenaren van hotels, resorts, herbergen en dergelijke... en zal worden geadviseerd om terug te keren naar hun respectievelijke plaatsen.\" Aldczar Aurelio, een meteoroloog met de Filippijnse Atmosferische, Geofysische en Astronomische Diensten Administratie (PAGASA), zei dat de storm was gecentreerd 200 mijl ten zuidwesten van de provincie Aurora vanaf 5 uur (5 uur ET) en richting het westen op een 12.5 mph clip. Het is verwacht dat landval zondagochtend maken op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen tegen maandag. Ahead van de storm. Isabela Gov. Faustino Dry III waarschuwde zaterdag dat bewoners moet handelen als deze zal maken landfall zondagochtend op de zuidoostelijke kust van de provincie Isabela en zijn uit de Filippijnen voor maandag." 
--- # mt5-base-cnn-nl mt5-base finetuned on CNN DM translated to nl (Dutch). * Learning rate 1e-3 * Trained for 1 epoch * Max source length 1024 * Max target length 142 * rouge1 31.1766 * rouge2 8.4538 * rougeL 17.8674
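No usage snippet is included; below is a minimal sketch mirroring the max source/target lengths above (whether a task prefix such as `summarize:` is required is not stated, so the article text is passed directly).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yhavinga/mt5-base-cnn-nl")
model = AutoModelForSeq2SeqLM.from_pretrained("yhavinga/mt5-base-cnn-nl")

article = "..."  # placeholder: a Dutch news article
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(inputs.input_ids, max_length=142, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```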
tiedeman/opus-mt-he-en
tiedeman
2021-03-04T17:46:12Z
12
0
transformers
[ "transformers", "pytorch", "rust", "marian", "text2text-generation", "translation", "he", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - he - en tags: - translation license: apache-2.0 --- ### he-en * source group: Hebrew * target group: English * OPUS readme: [heb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-eng/README.md) * model: transformer * source language(s): heb * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-10-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.zip) * test set translations: [opus-2020-10-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.test.txt) * test set scores: [opus-2020-10-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.heb.eng | 52.0 | 0.670 | ### System Info: - hf_name: he-en - source_languages: heb - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/heb-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['he', 'en'] - src_constituents: ('Hebrew', {'heb'}) - tgt_constituents: ('English', {'eng'}) - src_multilingual: False - tgt_multilingual: False - long_pair: heb-eng - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/heb-eng/opus-2020-10-04.test.txt - src_alpha3: heb - tgt_alpha3: eng - chrF2_score: 0.67 - bleu: 52.0 - brevity_penalty: 0.9690000000000001 - ref_len: 73560.0 - src_name: Hebrew - tgt_name: English - train_date: 2020-10-04 00:00:00 - src_alpha2: he - tgt_alpha2: en - prefer_old: False - short_pair: he-en - helsinki_git_sha: 61fd6908b37d9a7b21cc3e27c1ae1fccedc97561 - transformers_git_sha: d99ed7ad618037ae878f0758157ed0764bd7f935 - port_machine: LM0-400-22516.local - port_time: 2020-10-15-16:31
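The card lists benchmarks but no snippet; here is a minimal sketch following the standard Marian usage pattern (the Hebrew sentence is an arbitrary sample).

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("tiedeman/opus-mt-he-en")
model = MarianMTModel.from_pretrained("tiedeman/opus-mt-he-en")

batch = tokenizer(["שלום עולם"], return_tensors="pt", padding=True)  # "Hello, world"
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```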
hfl/chinese-xlnet-mid
hfl
2021-03-03T01:46:39Z
30
9
transformers
[ "transformers", "pytorch", "tf", "xlnet", "text-generation", "zh", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - zh license: "apache-2.0" --- ## Chinese Pre-Trained XLNet This project provides an XLNet model pre-trained for Chinese, which aims to enrich Chinese natural language processing resources and provide a wider selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model. This project is based on the official XLNet from CMU/Google: https://github.com/zihangdai/xlnet You may also be interested in: - Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm - Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA - Chinese XLNet: https://github.com/ymcui/Chinese-XLNet - Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer More resources by HFL: https://github.com/ymcui/HFL-Anthology ## Citation If you find our resource or paper useful, please consider including the following citation in your paper. - https://arxiv.org/abs/2004.13922 ``` @inproceedings{cui-etal-2020-revisiting, title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing", author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58", pages = "657--668", } ```
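No usage example is given; since XLNet is an autoregressive language model, a text-generation sketch applies (the Chinese prompt, "Natural language processing is", is an arbitrary sample).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="hfl/chinese-xlnet-mid")
print(generator("自然语言处理是", max_length=30))
```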
hfl/chinese-electra-large-discriminator
hfl
2021-03-03T01:42:48Z
10
1
transformers
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
---

**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```
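### Usage sketch

Following the note above about `ElectraForPreTraining`, a minimal, unofficial sketch of scoring tokens with the discriminator (the example sentence is illustrative; logits above zero suggest a token looks "replaced"):

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

mname = "hfl/chinese-electra-large-discriminator"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = ElectraForPreTraining.from_pretrained(mname)

inputs = tokenizer("我喜欢北京。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# per-token probability of being a replaced token
print(torch.sigmoid(logits))
```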
hfl/chinese-electra-base-discriminator
hfl
2021-03-03T01:40:07Z
245
9
transformers
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
---

**Please use `ElectraForPreTraining` for `discriminator` and `ElectraForMaskedLM` for `generator` if you are re-training these models.**

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```
hfl/chinese-electra-180g-large-discriminator
hfl
2021-03-03T01:29:12Z
214
5
transformers
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
---

# This model is trained on 180G of data; we recommend using it instead of the original version.

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```
hfl/chinese-electra-180g-base-generator
hfl
2021-03-03T01:26:40Z
19
0
transformers
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---

# This model is trained on 180G of data; we recommend using it instead of the original version.

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```
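### Usage sketch

Since this is the generator checkpoint with `pipeline_tag: fill-mask`, a minimal, unofficial sketch would be (the masked sentence is illustrative):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="hfl/chinese-electra-180g-base-generator",
)
for prediction in fill_mask("我喜欢[MASK]京。"):
    print(prediction["token_str"], prediction["score"])
```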
hfl/chinese-electra-180g-small-ex-generator
hfl
2021-03-03T01:25:06Z
1
2
transformers
[ "transformers", "pytorch", "tf", "electra", "fill-mask", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
pipeline_tag: "fill-mask"
---

# This model is trained on 180G of data; we recommend using it instead of the original version.

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resource or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```
flair/upos-multi-fast
flair
2021-03-02T22:22:55Z
402
5
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "fr", "it", "nl", "pl", "es", "sv", "da", "no", "fi", "cs", "dataset:ontonotes", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- en
- de
- fr
- it
- nl
- pl
- es
- sv
- da
- no
- fi
- cs
datasets:
- ontonotes
widget:
- text: "Ich liebe Berlin, as they say."
---

## Multilingual Universal Part-of-Speech Tagging in Flair (fast model)

This is the fast multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/).

F1-Score: **92.88** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech)

Predicts universal POS tags:

| **tag** | **meaning** |
|---------|-------------|
| ADJ | adjective |
| ADP | adposition |
| ADV | adverb |
| AUX | auxiliary |
| CCONJ | coordinating conjunction |
| DET | determiner |
| INTJ | interjection |
| NOUN | noun |
| NUM | numeral |
| PART | particle |
| PRON | pronoun |
| PROPN | proper noun |
| PUNCT | punctuation |
| SCONJ | subordinating conjunction |
| SYM | symbol |
| VERB | verb |
| X | other |

Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.

---

### Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/upos-multi-fast")

# make example sentence
sentence = Sentence("Ich liebe Berlin, as they say.")

# predict POS tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted POS spans
print('The following POS tags are found:')

# iterate over tagged spans and print
for entity in sentence.get_spans('pos'):
    print(entity)
```

This yields the following output:

```
Span [1]: "Ich"   [− Labels: PRON (0.9999)]
Span [2]: "liebe"   [− Labels: VERB (0.9999)]
Span [3]: "Berlin"   [− Labels: PROPN (0.9997)]
Span [4]: ","   [− Labels: PUNCT (1.0)]
Span [5]: "as"   [− Labels: SCONJ (0.9991)]
Span [6]: "they"   [− Labels: PRON (0.9998)]
Span [7]: "say"   [− Labels: VERB (0.9998)]
Span [8]: "."   [− Labels: PUNCT (1.0)]
```

So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
from flair.data import MultiCorpus
from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \
    UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH
from flair.embeddings import StackedEmbeddings, FlairEmbeddings

# 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large)
corpus = MultiCorpus([
    UD_ENGLISH(in_memory=False),
    UD_GERMAN(in_memory=False),
    UD_DUTCH(in_memory=False),
    UD_FRENCH(in_memory=False),
    UD_ITALIAN(in_memory=False),
    UD_SPANISH(in_memory=False),
    UD_POLISH(in_memory=False),
    UD_CZECH(in_memory=False),
    UD_DANISH(in_memory=False),
    UD_SWEDISH(in_memory=False),
    UD_NORWEGIAN(in_memory=False),
    UD_FINNISH(in_memory=False),
])

# 2. what tag do we want to predict?
tag_type = 'upos'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize each embedding we use
embedding_types = [

    # contextual string embeddings, forward
    FlairEmbeddings('multi-forward-fast'),

    # contextual string embeddings, backward
    FlairEmbeddings('multi-backward-fast'),
]

# the embedding stack consists of forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type,
                        use_crf=False)

# 6. initialize trainer
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)

# 7. run training
trainer.train('resources/taggers/upos-multi-fast',
              train_with_dev=True,
              max_epochs=150)
```

---

### Cite

Please cite the following paper when using this model.

```
@inproceedings{akbik2018coling,
  title={Contextual String Embeddings for Sequence Labeling},
  author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
  booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
  pages     = {1638--1649},
  year      = {2018}
}
```

---

### Issues?

The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
flair/ner-dutch
flair
2021-03-02T22:03:57Z
316
3
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "nl", "dataset:conll2003", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: nl
datasets:
- conll2003
widget:
- text: "George Washington ging naar Washington."
---

# Dutch NER in Flair (default model)

This is the standard 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/).

F1-Score: **92.58** (CoNLL-03)

Predicts 4 tags:

| **tag** | **meaning** |
|---------|-------------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |

Based on Transformer embeddings and LSTM-CRF.

---

# Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-dutch")

# make example sentence
sentence = Sentence("George Washington ging naar Washington")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```

This yields the following output:

```
Span [1,2]: "George Washington"   [− Labels: PER (0.997)]
Span [5]: "Washington"   [− Labels: LOC (0.9996)]
```

So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
from flair.data import Corpus
from flair.datasets import CONLL_03_DUTCH
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# 1. get the corpus
corpus: Corpus = CONLL_03_DUTCH()

# 2. what tag do we want to predict?
tag_type = 'ner'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize embeddings
embeddings = TransformerWordEmbeddings('wietsedv/bert-base-dutch-cased')

# 5. initialize sequence tagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
                                        embeddings=embeddings,
                                        tag_dictionary=tag_dictionary,
                                        tag_type=tag_type)

# 6. initialize trainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)

# 7. run training
trainer.train('resources/taggers/ner-dutch',
              train_with_dev=True,
              max_epochs=150)
```

---

### Cite

Please cite the following paper when using this model.

```
@inproceedings{akbik-etal-2019-flair,
    title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}",
    author = "Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)",
    year = "2019",
    url = "https://www.aclweb.org/anthology/N19-4010",
    pages = "54--59",
}
```

---

### Issues?

The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
stefan-it/flair-distilbert-ner-germeval14
stefan-it
2021-03-02T18:32:30Z
10
1
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "dataset:germeval_14", "license:mit", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
datasets:
- germeval_14
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
widget:
- text: "Hugging Face ist eine französische Firma mit Sitz in New York."
license: mit
---

# Flair NER model trained on GermEval14 dataset

This model was trained on the official [GermEval14](https://sites.google.com/site/germeval2014ner/data) dataset using the [Flair](https://github.com/flairNLP/flair) framework.

It uses a fine-tuned German DistilBERT model from [here](https://huggingface.co/distilbert-base-german-cased).

# Results

| Dataset \ Run | Run 1 | Run 2 | Run 3†    | Run 4 | Run 5 | Avg.  |
| ------------- | ----- | ----- | --------- | ----- | ----- | ----- |
| Development   | 87.05 | 86.52 | **87.34** | 86.85 | 86.46 | 86.84 |
| Test          | 85.43 | 85.88 | 85.72     | 85.47 | 85.62 | 85.62 |

† denotes that this model is selected for upload.

# Flair Fine-Tuning

We used the following script to fine-tune the model on the GermEval14 dataset:

```python
from argparse import ArgumentParser
import torch, flair

# dataset, model and embedding imports
from flair.datasets import GERMEVAL_14
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

if __name__ == "__main__":

    # All arguments that can be passed
    parser = ArgumentParser()
    parser.add_argument("-s", "--seeds", nargs='+', type=int, default=[42])  # pass list of seeds for experiments
    parser.add_argument("-c", "--cuda", type=int, default=0, help="CUDA device")  # which cuda device to use
    parser.add_argument("-m", "--model", type=str, help="Model name (such as a Hugging Face model hub name)")

    # Parse experimental arguments
    args = parser.parse_args()

    # use cuda device as passed
    flair.device = f'cuda:{str(args.cuda)}'

    # for each passed seed, do one experimental run
    for seed in args.seeds:
        flair.set_seed(seed)

        # model
        hf_model = args.model

        # initialize embeddings
        embeddings = TransformerWordEmbeddings(
            model=hf_model,
            layers="-1",
            subtoken_pooling="first",
            fine_tune=True,
            use_context=False,
            respect_document_boundaries=False,
        )

        # select dataset depending on which language variable is passed
        corpus = GERMEVAL_14()

        # make the dictionary of tags to predict
        tag_dictionary = corpus.make_tag_dictionary('ner')

        # init bare-bones sequence tagger (no reprojection, LSTM or CRF)
        tagger: SequenceTagger = SequenceTagger(
            hidden_size=256,
            embeddings=embeddings,
            tag_dictionary=tag_dictionary,
            tag_type='ner',
            use_crf=False,
            use_rnn=False,
            reproject_embeddings=False,
        )

        # init the model trainer
        trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)

        # make string for output folder
        output_folder = f"flert-ner-{hf_model}-{seed}"

        # train with XLM parameters (AdamW, small LR, one-cycle schedule)
        from torch.optim.lr_scheduler import OneCycleLR

        trainer.train(
            output_folder,
            learning_rate=5.0e-5,
            mini_batch_size=16,
            mini_batch_chunk_size=1,
            max_epochs=10,
            scheduler=OneCycleLR,
            embeddings_storage_mode='none',
            weight_decay=0.,
            train_with_dev=False,
        )
```
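# Inference example

A minimal, unofficial sketch of tagging with the uploaded model (assuming Flair can resolve the Hub id directly):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("stefan-it/flair-distilbert-ner-germeval14")

sentence = Sentence("Hugging Face ist eine französische Firma mit Sitz in New York.")
tagger.predict(sentence)

for entity in sentence.get_spans("ner"):
    print(entity)
```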
nsi319/legal-led-base-16384
nsi319
2021-03-01T12:33:48Z
298
13
transformers
[ "transformers", "pytorch", "led", "text2text-generation", "summarization", "en", "license:mit", "autotrain_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
---
language: en
tags: summarization
metrics:
- rouge
- precision
inference: false
license: mit
---

## LED for legal summarization of documents

This is a Longformer Encoder Decoder ([led-base-16384](https://huggingface.co/allenai/led-base-16384)) model for the **legal domain**, trained for the **long document abstractive summarization** task. Input documents can be up to 16,384 tokens long.

## Training data

The **legal-led-base-16384** model was trained on the [sec-litigation-releases](https://www.sec.gov/litigation/litreleases.htm) dataset, consisting of more than 2,700 litigation releases and complaints.

## How to use

```Python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("nsi319/legal-led-base-16384")
model = AutoModelForSeq2SeqLM.from_pretrained("nsi319/legal-led-base-16384")

padding = "max_length"

text = """On March 2, 2018, the Securities and Exchange Commission announced securities fraud charges against a U.K.-based broker-dealer and its investment manager in connection with manipulative trading in the securities of HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View's securities as well as the securities of another microcap issuer, West Coast Ventures Group Corp. The SEC further announced the institution of an order suspending trading in the securities of HD View.These charges arise in part from an undercover operation by the Federal Bureau of Investigation, which also resulted in related criminal prosecutions against these defendants by the Office of the United States Attorney for the Eastern District of New York.In a complaint filed in the U.S. District Court for the Eastern District of New York, the SEC alleges that Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, manipulated the market for HD View's common stock. The scheme involved an undercover FBI agent who described his business as manipulating U.S. stocks through pump-and-dump schemes. Kyriacou and the agent discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit.The SEC's complaint against Beaufort and Kyriacou alleges that they:opened brokerage accounts for the undercover agent in the names of nominees in order to conceal his identity and his connection to the anticipated trading activity in the accounts suggested that the undercover agent could create the false appearance that HD View's stock was liquid in advance of a pump-and-dump by "gam[ing] the market" through matched trades executed multiple purchase orders of HD View shares with the understanding that Beaufort's client had arranged for an associate to simultaneously offer an equivalent number of shares at the same priceA second complaint filed by the SEC in the U.S. District Court for the Eastern District of New York alleges that in a series of recorded telephone conversations with the undercover agent, HD View CEO Dennis Mancino and William T. Hirschy agreed to manipulate HD View's common stock by using the agent's network of brokers to generate fraudulent retail demand for the stock in exchange for a kickback from the trading proceeds. According to the complaint, the three men agreed that Mancino and Hirschy would manipulate HD View stock to a higher price before using the agent's brokers to liquidate their positions at an artificially inflated price. The SEC's complaint also alleges that Mancino and Hirschy executed a "test trade" on Jan. 31, 2018, coordinated by the agent, consisting of a sell order placed by the defendants filled by an opposing purchase order placed by a broker into an account at Beaufort. Unbeknownst to Mancino and Hirschy, the Beaufort account used for this trade was a nominal account that was opened and funded by the agent. The SEC's complaint also alleges that, prior to their contact with the undercover agent, Mancino and Hirschy manipulated the market for HD View and for West Coast by using brokerage accounts that they owned, controlled, or were associated with –including TJM Investments Inc., DJK Investments 10 Inc., WT Consulting Group LLC – to effect manipulative "matched trades."The SEC's complaint against Beaufort and Kyriacou charges the defendants with violating Section 10(b) of the Securities Exchange Act of 1934 and Rule 10b-5 thereunder. The SEC also charged Hirschy, Mancino, and their corporate entities with violating Section 17(a)(1) of the Securities Act of 1933, Sections 9(a)(1), 9(a)(2), and 10(b) of the Exchange Act and Rules 10b-5(a) and (c) thereunder. The SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, and penny stock bars from Beaufort and Kyriacou. With respect to Hirschy, Mancino, and their corporate entities, the SEC is seeking injunctions, disgorgement, prejudgment interest, penalties, penny stock bars, and an officer-and-director bar against Mancino.The investigation was conducted in the SEC's New York Regional Office by Tejal Shah and Joseph Darragh, Lorraine Collazo, and Michael D. Paley of the Microcap Fraud Task Force and supervised by Lara S. Mehraban, and in Washington, D.C. by Patrick L. Feeney, Robert Nesbitt, and Kevin Guerrero, and supervised by Antonia Chion. Preethi Krishnamurthy and Ms. Shah will lead the SEC's litigation against Beaufort and Kyriacou. Ann H. Petalas and Mr. Feeney, under the supervision of Cheryl Crumpton, will handle the SEC's litigation against Mancino, Hirschy, and their entities. The SEC appreciates the assistance of the Office of the United States Attorney for the Eastern District of New York, the Federal Bureau of Investigation, the Internal Revenue Service, the Alberta Securities Commission, the Ontario Securities Commission, the Financial Conduct Authority of the United Kingdom, and the Financial Industry Regulatory Authority.The Commission's investigation in this matter is continuing."""

input_tokenized = tokenizer.encode(text, return_tensors='pt', padding=padding, pad_to_max_length=True, max_length=6144, truncation=True)

summary_ids = model.generate(input_tokenized,
                             num_beams=4,
                             no_repeat_ngram_size=3,
                             length_penalty=2,
                             min_length=350,
                             max_length=500)

summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]

### Summary Output

# On March 2, 2018, the Securities and Exchange Commission charged Beaufort Securities Ltd. and Peter Kyriacou, an investment manager at Beaufort, with manipulating the market for HD View 360 Inc., a U.S.-based microcap issuer. The SEC also announced charges against HD View's CEO, another individual, and three entities they control for manipulating HD View through pump-and-dump schemes. According to the SEC's complaint, the defendants discussed depositing large blocks of microcap stock in Beaufort accounts, driving up the price of the stock through promotions, manipulating the stock's price and volume through matched trades, and then selling the shares for a large profit. In a parallel action, the United States Attorney's Office for the Eastern District of New York announced criminal charges against the defendants. On March 4, the SEC announced the entry of an order suspending trading in the securities of HD View and for West Coast, pending the outcome of a parallel criminal action by the Federal Bureau of Investigation. Following the announcement of the suspension, HD View stock prices and volume increased significantly, and the defendants agreed to pay over $1.5 million in disgorgement, prejudgment interest, penalties, and an officer and director bar. Beaufort agreed to settle the charges without admitting or denying the allegations of the complaint, and to pay a $1 million civil penalty. The SEC's investigation, which is continuing, has been conducted by Patrick McCluskey and Cheryl Crumpton of the SEC Enforcement Division's Market Abuse Unit in the New York Regional Office. The SEC appreciates the assistance of the Financial Industry Regulatory Authority of the United Kingdom, the Canadian Securities Commission, the Alberta Securities Commission and the Ontario Securities Commission.
```

## Evaluation results

When the model is used for summarizing legal documents, it achieves the following results:

| Model | rouge1 | rouge1-precision | rouge2 | rouge2-precision | rougeL | rougeL-precision |
|:-----:|:------:|:----------------:|:------:|:----------------:|:------:|:----------------:|
| legal-led-base-16384 | **55.69** | **61.73** | **29.03** | **36.68** | **32.65** | **40.43** |
| led-base-16384 | 29.19 | 30.43 | 15.23 | 16.27 | 16.32 | 16.58 |
CouchCat/ma_ner_v7_distil
CouchCat
2021-02-28T20:54:46Z
12
0
transformers
[ "transformers", "pytorch", "distilbert", "token-classification", "ner", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:04Z
---
language: en
license: mit
tags:
- ner
widget:
- text: "These shoes I recently bought from Tommy Hilfiger fit quite well. The shirt, however, has got a hole"
---

### Description

A Named Entity Recognition model trained on customer feedback data using DistilBert. Possible labels are in BIO notation. Performance of the PERS tag could be better because of the low number of training samples:

- PROD: for certain products
- BRND: for brands
- PERS: people names

The following tags are simply in place to help better categorize the previous tags:

- MATR: relating to materials, e.g. cloth, leather, seam, etc.
- TIME: time-related entities
- MISC: any other entity that might skew the results

### Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_ner_v7_distil")
model = AutoModelForTokenClassification.from_pretrained("CouchCat/ma_ner_v7_distil")
```
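### Inference sketch

A minimal, unofficial example with the token-classification pipeline (`aggregation_strategy` requires a recent 🤗 Transformers version):

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="CouchCat/ma_ner_v7_distil",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
)
print(ner("These shoes I recently bought from Tommy Hilfiger fit quite well."))
```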
aishoo1612/VADER-With-heatmaps
aishoo1612
2021-02-28T11:46:32Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
Script for scoring words with VADER sentiment and rendering the scores as a LaTeX heatmap.

```bash
pip install vaderSentiment
```

```python
import nltk
import numpy as np
import scipy.stats
from nltk import word_tokenize
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

nltk.download('punkt')  # tokenizer models (the original script downloaded 'all')

analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores("I hate watching movies"))

sentence = """I love dancing & painting"""
tokenized_sentence = word_tokenize(sentence)

# bucket each token by its VADER compound score and keep the per-word scores
pos_word_list = []
neu_word_list = []
neg_word_list = []
score_list = []

for word in tokenized_sentence:
    compound = analyzer.polarity_scores(word)['compound']
    score_list.append(compound)
    if compound >= 0.1:
        pos_word_list.append(word)
    elif compound <= -0.1:
        neg_word_list.append(word)
    else:
        neu_word_list.append(word)

print('Positive:', pos_word_list)
print('Neutral:', neu_word_list)
print('Negative:', neg_word_list)
print('Score:', score_list)

score = analyzer.polarity_scores(sentence)
print('\nScores:', score)

neg_prediction = score['neg']
neu_prediction = score['neu']
pos_prediction = score['pos']
prediction_list = [neg_prediction, pos_prediction]
prediction_list_array = np.array(prediction_list)


def predict(texts, score_fn, classes):
    """Simulate per-class probabilities from a scalar sentiment score.

    `score_fn` maps a text to a score in [-1, 1]; `classes` is the array of
    class bins. (The original fragment referenced `self`, so it was lifted
    from a class; it is made standalone here.)"""
    probs = []
    for text in texts:
        offset = (score_fn(text) + 1) / 2.
        binned = np.digitize(5 * offset, classes) + 1
        simulated_probs = scipy.stats.norm.pdf(classes, binned, scale=0.5)
        probs.append(simulated_probs)
    return np.array(probs)


latex_special_token = ["!@#$%^&*()"]


def generate(text_list, attention_list, latex_file, color_neg='red', color_pos='green'):
    """Write a standalone LaTeX file in which every word is shaded by its score."""
    attention_list = rescale(attention_list)
    text_list = clean_word(text_list)
    with open(latex_file, 'w') as f:
        f.write(r'''\documentclass[varwidth]{standalone}
\special{papersize=210mm,297mm}
\usepackage{color}
\usepackage{tcolorbox}
\usepackage{CJK}
\usepackage{adjustbox}
\tcbset{width=0.9\textwidth,boxrule=0pt,colback=red,arc=0pt,auto outer arc,left=0pt,right=0pt,boxsep=5pt}
\begin{document}
\begin{CJK*}{UTF8}{gbsn}''' + '\n')
        string = r'''{\setlength{\fboxsep}{0pt}\colorbox{white!0}{\parbox{0.9\textwidth}{''' + "\n"
        for idx in range(len(attention_list)):
            if attention_list[idx] > 0:
                string += "\\colorbox{%s!%s}{" % (color_pos, attention_list[idx]) + "\\strut " + text_list[idx] + "} "
            else:
                string += "\\colorbox{%s!%s}{" % (color_neg, -attention_list[idx]) + "\\strut " + text_list[idx] + "} "
        string += "\n}}}"
        f.write(string + '\n')
        f.write(r'''\end{CJK*}
\end{document}''')


def rescale(input_list):
    """Rescale scores to [-100, 100] so they can be used as LaTeX color intensities."""
    the_array = np.asarray(input_list)
    the_max = np.max(abs(the_array))
    rescaled = np.round(the_array / the_max * 100, 3)
    # alternative: min-max rescaling to [0, 100]
    # rescaled = (the_array - the_array.min()) / (the_array.max() - the_array.min()) * 100
    return rescaled.tolist()


def clean_word(word_list):
    """Escape characters that are special in LaTeX."""
    new_word_list = []
    for word in word_list:
        for latex_sensitive in ["\\", "%", "&", "^", "#", "_", "{", "}"]:
            if latex_sensitive in word:
                word = word.replace(latex_sensitive, '\\' + latex_sensitive)
        new_word_list.append(word)
    return new_word_list


if __name__ == '__main__':
    words = word_tokenize(sentence)
    generate(words, score_list, "sple.tex", color_neg='red', color_pos='green')
```
dbmdz/flair-historic-ner-onb
dbmdz
2021-02-26T15:41:21Z
27
3
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "de", "license:mit", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: de
widget:
- text: "April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber."
license: mit
---

# Towards Robust Named Entity Recognition for Historic German

Based on [our paper](https://www.aclweb.org/anthology/W19-4312/) we release a new model trained on the ONB dataset.

**Note:** We use BPEmbeddings instead of the combination of Wikipedia, Common Crawl and character embeddings (as used in the paper) to save space and training/inference time.

# Results

| Dataset \ Run | Run 1 | Run 2 | Run 3     | Avg.  |
| ------------- | ----- | ----- | --------- | ----- |
| Development   | 86.69 | 86.13 | **87.18** | 86.67 |
| Test          | 85.27 | 86.05 | 85.75†    | 85.69 |

The paper reported an average F1-score of 85.31.

† denotes that this model is selected for upload.
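# Usage

A minimal, unofficial sketch of tagging historic German text with Flair (assuming the Hub id can be loaded directly); the example sentence is the widget text above:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("dbmdz/flair-historic-ner-onb")

sentence = Sentence("April Martin Ansclm, K. Gefangen-Auffehers Georg Sausgruber.")
tagger.predict(sentence)

for entity in sentence.get_spans("ner"):
    print(entity)
```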
flair/ner-danish
flair
2021-02-26T15:33:02Z
70
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "da", "dataset:DaNE", "region:us" ]
token-classification
2022-03-02T23:29:05Z
---
tags:
- flair
- token-classification
- sequence-tagger-model
language: da
datasets:
- DaNE
widget:
- text: "Jens Peter Hansen kommer fra Danmark"
---

# Danish NER in Flair (default model)

This is the standard 4-class NER model for Danish that ships with [Flair](https://github.com/flairNLP/flair/).

F1-Score: **81.78** (DaNER)

Predicts 4 tags:

| **tag** | **meaning** |
|---------|-------------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |

Based on Flair embeddings and LSTM-CRF.

---

# Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-danish")

# make example sentence
sentence = Sentence("Jens Peter Hansen kommer fra Danmark")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```

This yields the following output:

```
Span [1,2,3]: "Jens Peter Hansen"   [− Labels: PER (0.9961)]
Span [6]: "Danmark"   [− Labels: LOC (0.9816)]
```

So, the entities "*Jens Peter Hansen*" (labeled as a **person**) and "*Danmark*" (labeled as a **location**) are found in the sentence "*Jens Peter Hansen kommer fra Danmark*".

---

### Training: Script to train this model

The model was trained by the [DaNLP project](https://github.com/alexandrainst/danlp) using the [DaNE corpus](https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md#danish-dependency-treebank-dane-dane). Check their repo for more information.

The following Flair script may be used to train such a model:

```python
from flair.data import Corpus
from flair.datasets import DANE
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings

# 1. get the corpus
corpus: Corpus = DANE()

# 2. what tag do we want to predict?
tag_type = 'ner'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize each embedding we use
embedding_types = [

    # Danish word embeddings
    WordEmbeddings('da'),

    # contextual string embeddings, forward
    FlairEmbeddings('da-forward'),

    # contextual string embeddings, backward
    FlairEmbeddings('da-backward'),
]

# the embedding stack consists of word and Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

# 6. initialize trainer
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)

# 7. run training
trainer.train('resources/taggers/ner-danish',
              train_with_dev=True,
              max_epochs=150)
```

---

### Cite

Please cite the following papers when using this model.

```
@inproceedings{akbik-etal-2019-flair,
    title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}",
    author = "Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)",
    year = "2019",
    url = "https://www.aclweb.org/anthology/N19-4010",
    pages = "54--59",
}
```

And check the [DaNLP project](https://github.com/alexandrainst/danlp) for more information.

---

### Issues?

The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
valhalla/s2t_librispeech_large
valhalla
2021-02-26T14:25:12Z
3
0
transformers
[ "transformers", "pytorch", "speech_to_text_transformer", "text2text-generation", "audio", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---

TODO: [To be filled]

## Evaluation on LibriSpeech Test

The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.

```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # change to "other" for other test dataset

model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_large").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_large", do_upper_case=True)

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

librispeech_eval = librispeech_eval.map(map_to_array)

def map_to_pred(batch):
    features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")

    gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
    batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---------|---------|
| 3.3     | 7.5     |
valhalla/s2t_librispeech_medium
valhalla
2021-02-26T14:24:39Z
4
0
transformers
[ "transformers", "pytorch", "speech_to_text_transformer", "text2text-generation", "audio", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---

TODO: [To be filled]

## Evaluation on LibriSpeech Test

The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.

```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # change to "other" for other test dataset

model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_medium").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_medium", do_upper_case=True)

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

librispeech_eval = librispeech_eval.map(map_to_array)

def map_to_pred(batch):
    features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")

    gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
    batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---------|---------|
| 3.5     | 7.8     |
valhalla/s2t_librispeech_small
valhalla
2021-02-26T14:24:09Z
3
0
transformers
[ "transformers", "pytorch", "speech_to_text_transformer", "text2text-generation", "audio", "automatic-speech-recognition", "en", "dataset:librispeech_asr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---

TODO: [To be filled]

## Evaluation on LibriSpeech Test

The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.

```python
from datasets import load_dataset
from transformers import Speech2TextTransformerForConditionalGeneration, Speech2TextTransformerTokenizer
import soundfile as sf
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")  # change to "other" for other test dataset

model = Speech2TextTransformerForConditionalGeneration.from_pretrained("valhalla/s2t_librispeech_small").to("cuda")
tokenizer = Speech2TextTransformerTokenizer.from_pretrained("valhalla/s2t_librispeech_small", do_upper_case=True)

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

librispeech_eval = librispeech_eval.map(map_to_array)

def map_to_pred(batch):
    features = tokenizer(batch["speech"], sample_rate=16000, padding=True, return_tensors="pt")
    input_features = features.input_features.to("cuda")
    attention_mask = features.attention_mask.to("cuda")

    gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
    batch["transcription"] = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---------|---------|
| 4.3     | 9.0     |
sismetanin/xlm_roberta_large-ru-sentiment-rusentiment
sismetanin
2021-02-25T23:57:27Z
13
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "sentiment analysis", "Russian", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language:
- ru
tags:
- sentiment analysis
- Russian
---

## XLM-RoBERTa-Large-ru-sentiment-RuSentiment

XLM-RoBERTa-Large-ru-sentiment-RuSentiment is an [XLM-RoBERTa-Large](https://huggingface.co/xlm-roberta-large) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.

<table>
<thead>
<tr><th rowspan="4">Model</th><th rowspan="4">Score<br></th><th rowspan="4">Rank</th><th colspan="12">Dataset</th></tr>
<tr><td colspan="6">SentiRuEval-2016<br></td><td colspan="2" rowspan="2">RuSentiment</td><td rowspan="2">KRND</td><td rowspan="2">LINIS Crowd</td><td rowspan="2">RuTweetCorp</td><td rowspan="2">RuReviews</td></tr>
<tr><td colspan="3">TC</td><td colspan="3">Banks</td></tr>
<tr><td>micro F1</td><td>macro F1</td><td>F1</td><td>micro F1</td><td>macro F1</td><td>F1</td><td>weighted</td><td>F1</td><td>F1</td><td>F1</td><td>F1</td><td>F1</td></tr>
</thead>
<tbody>
<tr><td>SOTA</td><td>n/s</td><td></td><td>76.71</td><td>66.40</td><td>70.68</td><td>67.51</td><td>69.53</td><td>74.06</td><td>78.50</td><td>n/s</td><td>73.63</td><td>60.51</td><td>83.68</td><td>77.44</td></tr>
<tr><td>XLM-RoBERTa-Large</td><td>76.37</td><td>1</td><td>82.26</td><td>76.36</td><td>79.42</td><td>76.35</td><td>76.08</td><td>80.89</td><td>78.31</td><td>75.27</td><td>75.17</td><td>60.03</td><td>88.91</td><td>78.81</td></tr>
<tr><td>SBERT-Large</td><td>75.43</td><td>2</td><td>78.40</td><td>71.36</td><td>75.14</td><td>72.39</td><td>71.87</td><td>77.72</td><td>78.58</td><td>75.85</td><td>74.20</td><td>60.64</td><td>88.66</td><td>77.41</td></tr>
<tr><td>MBARTRuSumGazeta</td><td>74.70</td><td>3</td><td>76.06</td><td>68.95</td><td>73.04</td><td>72.34</td><td>71.93</td><td>77.83</td><td>76.71</td><td>73.56</td><td>74.18</td><td>60.54</td><td>87.22</td><td>77.51</td></tr>
<tr><td>Conversational RuBERT</td><td>74.44</td><td>4</td><td>76.69</td><td>69.09</td><td>73.11</td><td>69.44</td><td>68.68</td><td>75.56</td><td>77.31</td><td>74.40</td><td>73.10</td><td>59.95</td><td>87.86</td><td>77.78</td></tr>
<tr><td>LaBSE</td><td>74.11</td><td>5</td><td>77.00</td><td>69.19</td><td>73.55</td><td>70.34</td><td>69.83</td><td>76.38</td><td>74.94</td><td>70.84</td><td>73.20</td><td>59.52</td><td>87.89</td><td>78.47</td></tr>
<tr><td>XLM-RoBERTa-Base</td><td>73.60</td><td>6</td><td>76.35</td><td>69.37</td><td>73.42</td><td>68.45</td><td>67.45</td><td>74.05</td><td>74.26</td><td>70.44</td><td>71.40</td><td>60.19</td><td>87.90</td><td>78.28</td></tr>
<tr><td>RuBERT</td><td>73.45</td><td>7</td><td>74.03</td><td>66.14</td><td>70.75</td><td>66.46</td><td>66.40</td><td>73.37</td><td>75.49</td><td>71.86</td><td>72.15</td><td>60.55</td><td>86.99</td><td>77.41</td></tr>
<tr><td>MBART-50-Large-Many-to-Many</td><td>73.15</td><td>8</td><td>75.38</td><td>67.81</td><td>72.26</td><td>67.13</td><td>66.97</td><td>73.85</td><td>74.78</td><td>70.98</td><td>71.98</td><td>59.20</td><td>87.05</td><td>77.24</td></tr>
<tr><td>SlavicBERT</td><td>71.96</td><td>9</td><td>71.45</td><td>63.03</td><td>68.44</td><td>64.32</td><td>63.99</td><td>71.31</td><td>72.13</td><td>67.57</td><td>72.54</td><td>58.70</td><td>86.43</td><td>77.16</td></tr>
<tr><td>EnRuDR-BERT</td><td>71.51</td><td>10</td><td>72.56</td><td>64.74</td><td>69.07</td><td>61.44</td><td>60.21</td><td>68.34</td><td>74.19</td><td>69.94</td><td>69.33</td><td>56.55</td><td>87.12</td><td>77.95</td></tr>
<tr><td>RuDR-BERT</td><td>71.14</td><td>11</td><td>72.79</td><td>64.23</td><td>68.36</td><td>61.86</td><td>60.92</td><td>68.48</td><td>74.65</td><td>70.63</td><td>68.74</td><td>54.45</td><td>87.04</td><td>77.91</td></tr>
<tr><td>MBART-50-Large</td><td>69.46</td><td>12</td><td>70.91</td><td>62.67</td><td>67.24</td><td>61.12</td><td>60.25</td><td>68.41</td><td>72.88</td><td>68.63</td><td>70.52</td><td>46.39</td><td>86.48</td><td>77.52</td></tr>
</tbody>
</table>

The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark.

## Citation

If you find this repository helpful, feel free to cite our publication:

```
@article{Smetanin2021Deep,
  author = {Sergey Smetanin and Mikhail Komarov},
  title = {Deep transfer learning baselines for sentiment analysis in Russian},
  journal = {Information Processing & Management},
  volume = {58},
  number = {3},
  pages = {102484},
  year = {2021},
  issn = {0306-4573},
  doi = {10.1016/j.ipm.2020.102484}
}
```

Dataset:

```
@inproceedings{rogers2018rusentiment,
  title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
  author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
  booktitle={Proceedings of the 27th international conference on computational linguistics},
  pages={755--763},
  year={2018}
}
```
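## Usage sketch

A minimal, unofficial classification example (the Russian sentence is illustrative; label names come from the model's `config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

mname = "sismetanin/xlm_roberta_large-ru-sentiment-rusentiment"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSequenceClassification.from_pretrained(mname)

inputs = tokenizer("Мне очень понравился этот фильм!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```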
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rusentiment
sismetanin
2021-02-25T23:56:23Z
12
0
transformers
[ "transformers", "pytorch", "mbart", "text-classification", "sentiment analysis", "Russian", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language:
- ru
tags:
- sentiment analysis
- Russian
---

## MBARTRuSumGazeta-ru-sentiment-RuSentiment

MBARTRuSumGazeta-ru-sentiment-RuSentiment is an [MBARTRuSumGazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) model fine-tuned on the [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.

<table>
<thead>
<tr><th rowspan="4">Model</th><th rowspan="4">Score<br></th><th rowspan="4">Rank</th><th colspan="12">Dataset</th></tr>
<tr><td colspan="6">SentiRuEval-2016<br></td><td colspan="2" rowspan="2">RuSentiment</td><td rowspan="2">KRND</td><td rowspan="2">LINIS Crowd</td><td rowspan="2">RuTweetCorp</td><td rowspan="2">RuReviews</td></tr>
<tr><td colspan="3">TC</td><td colspan="3">Banks</td></tr>
<tr><td>micro F1</td><td>macro F1</td><td>F1</td><td>micro F1</td><td>macro F1</td><td>F1</td><td>weighted</td><td>F1</td><td>F1</td><td>F1</td><td>F1</td><td>F1</td></tr>
</thead>
<tbody>
<tr><td>SOTA</td><td>n/s</td><td></td><td>76.71</td><td>66.40</td><td>70.68</td><td>67.51</td><td>69.53</td><td>74.06</td><td>78.50</td><td>n/s</td><td>73.63</td><td>60.51</td><td>83.68</td><td>77.44</td></tr>
<tr><td>XLM-RoBERTa-Large</td><td>76.37</td><td>1</td><td>82.26</td><td>76.36</td><td>79.42</td><td>76.35</td><td>76.08</td><td>80.89</td><td>78.31</td><td>75.27</td><td>75.17</td><td>60.03</td><td>88.91</td><td>78.81</td></tr>
<tr><td>SBERT-Large</td><td>75.43</td><td>2</td><td>78.40</td><td>71.36</td><td>75.14</td><td>72.39</td><td>71.87</td><td>77.72</td><td>78.58</td><td>75.85</td><td>74.20</td><td>60.64</td><td>88.66</td><td>77.41</td></tr>
<tr><td>MBARTRuSumGazeta</td><td>74.70</td><td>3</td><td>76.06</td><td>68.95</td><td>73.04</td><td>72.34</td><td>71.93</td><td>77.83</td><td>76.71</td><td>73.56</td><td>74.18</td><td>60.54</td><td>87.22</td><td>77.51</td></tr>
<tr><td>Conversational RuBERT</td><td>74.44</td><td>4</td><td>76.69</td><td>69.09</td><td>73.11</td><td>69.44</td><td>68.68</td><td>75.56</td><td>77.31</td><td>74.40</td><td>73.10</td><td>59.95</td><td>87.86</td><td>77.78</td></tr>
<tr><td>LaBSE</td><td>74.11</td><td>5</td><td>77.00</td><td>69.19</td><td>73.55</td><td>70.34</td><td>69.83</td><td>76.38</td><td>74.94</td><td>70.84</td><td>73.20</td><td>59.52</td><td>87.89</td><td>78.47</td></tr>
<tr><td>XLM-RoBERTa-Base</td><td>73.60</td><td>6</td><td>76.35</td><td>69.37</td><td>73.42</td><td>68.45</td><td>67.45</td><td>74.05</td><td>74.26</td><td>70.44</td><td>71.40</td><td>60.19</td><td>87.90</td><td>78.28</td></tr>
<tr><td>RuBERT</td><td>73.45</td><td>7</td><td>74.03</td><td>66.14</td><td>70.75</td><td>66.46</td><td>66.40</td><td>73.37</td><td>75.49</td><td>71.86</td><td>72.15</td><td>60.55</td><td>86.99</td><td>77.41</td></tr>
<tr><td>MBART-50-Large-Many-to-Many</td><td>73.15</td><td>8</td><td>75.38</td><td>67.81</td><td>72.26</td><td>67.13</td><td>66.97</td><td>73.85</td><td>74.78</td><td>70.98</td><td>71.98</td><td>59.20</td><td>87.05</td><td>77.24</td></tr>
<tr><td>SlavicBERT</td><td>71.96</td><td>9</td><td>71.45</td><td>63.03</td><td>68.44</td><td>64.32</td><td>63.99</td><td>71.31</td><td>72.13</td><td>67.57</td><td>72.54</td><td>58.70</td><td>86.43</td><td>77.16</td></tr>
<tr><td>EnRuDR-BERT</td><td>71.51</td><td>10</td><td>72.56</td><td>64.74</td><td>69.07</td><td>61.44</td><td>60.21</td><td>68.34</td><td>74.19</td><td>69.94</td><td>69.33</td><td>56.55</td><td>87.12</td><td>77.95</td></tr>
<tr><td>RuDR-BERT</td><td>71.14</td><td>11</td><td>72.79</td><td>64.23</td><td>68.36</td><td>61.86</td><td>60.92</td><td>68.48</td><td>74.65</td><td>70.63</td><td>68.74</td><td>54.45</td><td>87.04</td><td>77.91</td></tr>
<tr><td>MBART-50-Large</td><td>69.46</td><td>12</td><td>70.91</td><td>62.67</td><td>67.24</td><td>61.12</td><td>60.25</td><td>68.41</td><td>72.88</td><td>68.63</td><td>70.52</td><td>46.39</td><td>86.48</td><td>77.52</td></tr>
</tbody>
</table>

The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark.

## Citation

If you find this repository helpful, feel free to cite our publication:

```
@article{Smetanin2021Deep,
  author = {Sergey Smetanin and Mikhail Komarov},
  title = {Deep transfer learning baselines for sentiment analysis in Russian},
  journal = {Information Processing & Management},
  volume = {58},
  number = {3},
  pages = {102484},
  year = {2021},
  issn = {0306-4573},
  doi = {10.1016/j.ipm.2020.102484}
}
```

Dataset:

```
@inproceedings{rogers2018rusentiment,
  title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
  author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
  booktitle={Proceedings of the 27th international conference on computational linguistics},
  pages={755--763},
  year={2018}
}
```
superspray/electra_large_discriminator_squad2_custom_dataset
superspray
2021-02-20T07:00:12Z
9
0
transformers
[ "transformers", "pytorch", "electra", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
# Question & Answering Model for 'Save Your Minutes' from Dobby-AI

Electra_Large Discriminator fine-tuned on SQuAD2.0 and a custom QA dataset

This model is [ahotrod/electra_large_discriminator_squad2_512](https://huggingface.co/ahotrod/electra_large_discriminator_squad2_512/blob/main/README.md) trained on an additional custom dataset as follows:

```
!python3 run_squad.py --model_type electra \
  --model_name_or_path /content/electra_large_512 \
  --do_lower_case \
  --output_dir /content/model/ \
  --do_train \
  --train_file $data_dir/additional_qa.json \
  --version_2_with_negative \
  --do_lower_case \
  --num_train_epochs 3 \
  --weight_decay 0.01 \
  --learning_rate 3e-5 \
  --max_grad_norm 0.5 \
  --adam_epsilon 1e-6 \
  --max_seq_length 512 \
  --doc_stride 128 \
  --threads 12 \
  --logging_steps 50 \
  --save_steps 1000 \
  --overwrite_output_dir \
  --per_gpu_train_batch_size 4
```

We used Google Colab for training the model.
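For inference, a minimal sketch (an addition to the card) using the standard question-answering pipeline with this repository's model id; the example question and context are illustrative:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="superspray/electra_large_discriminator_squad2_custom_dataset",
)

result = qa(
    question="What is the meeting about?",
    context="The team met on Monday to plan the Q3 product roadmap and assign owners.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```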
joeddav/distilbert-base-uncased-agnews-student
joeddav
2021-02-18T20:41:19Z
14
5
transformers
[ "transformers", "pytorch", "tf", "distilbert", "text-classification", "tensorflow", "en", "dataset:ag_news", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
tags:
- text-classification
- pytorch
- tensorflow
datasets:
- ag_news
license: mit
widget:
- text: "Armed conflict has been a near-constant political and economic burden."
- text: "Tom Brady won his seventh Super Bowl last night."
- text: "Dow falls more than 100 points after disappointing jobs data"
- text: "A new moon has been discovered in Jupiter's orbit."
---

# distilbert-base-uncased-agnews-student

## Model Description

This model is distilled from the zero-shot classification pipeline on the unlabeled AG's News dataset using [this script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/zero-shot-distillation). It is the result of the demo notebook [here](https://colab.research.google.com/drive/1mjBjd0cR8G57ZpsnFCS3ngGyo5nCa9ya?usp=sharing), where more details about the model can be found.

- Teacher model: [roberta-large-mnli](https://huggingface.co/roberta-large-mnli)
- Teacher hypothesis template: `"This text is about {}."`

## Intended Usage

The model can be used like any other model trained on AG's News, but will likely not perform as well as a model trained with full supervision. It is primarily intended as a demo of how an expensive NLI-based zero-shot model can be distilled to a more efficient student.
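A short usage sketch (an addition to the card): the student can be called through the text-classification pipeline. AG's News covers the four topics World, Sports, Business, and Sci/Tech; the exact label strings come from the model config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="joeddav/distilbert-base-uncased-agnews-student",
)
print(classifier("Dow falls more than 100 points after disappointing jobs data"))
# expected: a Business-like label with its confidence score
```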
julien-c/timm-dpn92
julien-c
2021-02-18T11:18:56Z
2
0
timm
[ "timm", "pytorch", "image-classification", "dpn", "dataset:imagenet", "arxiv:1707.01629", "arxiv:1906.02659", "arxiv:2010.15052", "license:apache-2.0", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
tags:
- image-classification
- timm
- dpn
license: apache-2.0
datasets:
- imagenet
---

# `dpn92` from `rwightman/pytorch-image-models`

From [`rwightman/pytorch-image-models`](https://github.com/rwightman/pytorch-image-models):

```
"""
PyTorch implementation of DualPathNetworks
Based on original MXNet implementation https://github.com/cypw/DPNs with
many ideas from another PyTorch implementation https://github.com/oyam/pytorch-DPNs.

This implementation is compatible with the pretrained weights from cypw's MXNet implementation.

Hacked together by / Copyright 2020 Ross Wightman
"""
```

## Model description

[Dual Path Networks](https://arxiv.org/abs/1707.01629)

## Intended uses & limitations

You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head to fine-tune it on a downstream task (another classification task with different labels, image segmentation or object detection, to name a few).

### How to use

You can use this model with the usual factory method in `timm`:

```python
import PIL
import timm
import torch

model = timm.create_model("julien-c/timm-dpn92")

img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")

config = model.default_cfg

if isinstance(config["input_size"], tuple):
    img_size = config["input_size"][-2:]
else:
    img_size = config["input_size"]

transform = timm.data.transforms_factory.transforms_imagenet_eval(
    img_size=img_size,
    interpolation=config["interpolation"],
    mean=config["mean"],
    std=config["std"],
)

input_tensor = transform(img)
input_tensor = input_tensor.unsqueeze(0)
# ^ batch size = 1

with torch.no_grad():
    output = model(input_tensor)

probs = output.squeeze(0).softmax(dim=0)
```

### Limitations and bias

The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will probably not generalize well on drawings or images containing multiple objects with different labels.

The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or models created by fine-tuning this model will work better on images picturing scenes from these countries (see [this paper](https://arxiv.org/abs/1906.02659) for examples).

More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in the training images.

## Training data

This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million hand-annotated images with 1,000 categories.

## Training procedure

To be completed

### Preprocessing

To be completed

## Evaluation results

To be completed

### BibTeX entry and citation info

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```

and

```bibtex
@misc{chen2017dual,
  title={Dual Path Networks},
  author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng},
  year={2017},
  eprint={1707.01629},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
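As a small follow-up to the "How to use" snippet above (an addition, not from the original card), the class probabilities can be turned into top-5 predictions; mapping the indices to human-readable names requires an external ImageNet label file:

```python
# `probs` is the 1,000-way softmax tensor computed in the snippet above.
top5 = torch.topk(probs, k=5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"ImageNet class index {idx}: {score:.4f}")
```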
julien-c/roberta-threejs
julien-c
2021-02-18T09:50:34Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
<style>
@import url('https://fonts.googleapis.com/css2?family=Roboto+Slab:wght@900&family=Rokkitt:wght@900&display=swap');
.text1 {
  position: absolute;
  top: 3vh;
  left: calc(50% - 50vh);
}
.text2 {
  position: absolute;
  bottom: 4vh;
  left: 50%;
}
.retro {
  font-family: "Roboto Slab";
  font-size: 13vh;
  display: block;
  color: #000;
  text-shadow: -0.5vh 0 #8800aa, 0 0.5vh #8800aa, 0.5vh 0 #aa0088, 0 -0.5vh #aa0088;
}
</style>
<div class="text1">
  <span class="retro">RETRO</span>
</div>
<div class="text2">
  <span class="retro">WAVE</span>
</div>
<script type="module">
import * as THREE from "https://cdn.jsdelivr.net/npm/[email protected]/build/three.module.js";
import { OrbitControls } from "https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/controls/OrbitControls.js";
import { TWEEN } from "https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/libs/tween.module.min.js";

let scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 100);
camera.position.set(-5, 10, 20);
let renderer = new THREE.WebGLRenderer({antialias: true});
renderer.setSize(innerWidth, innerHeight);
document.querySelector("div.prose").appendChild(renderer.domElement);

const textureCube = generateCubeMap();

let controls = new OrbitControls(camera, renderer.domElement);
controls.enableZoom = false;
controls.enablePan = false;
controls.enableKeys = false;

let square = new THREE.GridHelper(20, 1, 0xaaaaff, 0xaaaaff);
square.position.y = 0.01;
scene.add(square);

let grid = new THREE.GridHelper(20, 10, "magenta", "magenta");
console.log(grid.geometry.attributes.position.count);
let moveable = [];
for(let i = 0; i < grid.geometry.attributes.position.count / 4; i++){
  moveable.push(1, 1, 0, 0);
}
console.log(moveable.length)
grid.geometry.setAttribute("moveable", new THREE.Float32BufferAttribute(moveable, 1));
let uniforms = {
  time: {value: 0},
  speed: {value: 1},
  size: {value: 20}
}
grid.material.onBeforeCompile = shader => {
  shader.uniforms.time = uniforms.time;
  shader.uniforms.speed = uniforms.speed;
  shader.uniforms.size = uniforms.size;
  shader.vertexShader = `
    uniform float time;
    uniform float speed;
    uniform float size;
    attribute float moveable;
    ${shader.vertexShader}
  `.replace(
    `#include <begin_vertex>`,
    `#include <begin_vertex>
      if (floor(moveable + 0.1) > 0.5){
        float start = size * -0.5;
        float zPos = mod( (position.z - start) + (time * speed), size) + start;
        transformed.z = zPos;
      }
    `
  );
  console.log(shader.vertexShader)
}
scene.add(grid);

// palm
let base = new THREE.Object3D();
let baseSpline = new THREE.CatmullRomCurve3([
  new THREE.Vector2(),
  new THREE.Vector2(3, 0),
  new THREE.Vector2(2.5, -7),
  new THREE.Vector2(-4, -6),
  new THREE.Vector2(-4.8, 0)
], true, "catmullrom", 0.1);
let baseG = new THREE.ExtrudeBufferGeometry(new THREE.Shape(baseSpline.getPoints(50)), {depth: 0.2, bevelEnabled: true, bevelThickness: 0.8, bevelSize: 0.2});
let baseObject = new THREE.Mesh(baseG, new THREE.MeshBasicMaterial({color: "magenta", wireframe: false, envMap: textureCube}));
base.add(baseObject);
scene.add(base);

let phalanxes = [];
let f1 = createFinger(new THREE.Object3D(), 0.8, false);   // pinky
let f2 = createFinger(new THREE.Object3D(), 0.95, false);  // ring
let f3 = createFinger(new THREE.Object3D(), 1, false);     // middle
let f4 = createFinger(new THREE.Object3D(), 0.95, false);  // index
let f5Base = new THREE.Object3D();
let f5 = createFinger(new THREE.Object3D(), 0.75, true);   // thumb
f5Base.add(f5);
base.add(f1, f2, f3, f4, f5Base);
f1.position.set( -4, 0.2, 0);
f2.position.set( -2, 0.2, 0);
f3.position.set( 0, 0.2, 0);
f4.position.set( 2, 0.2, 0);
f5Base.position.set( 3, -3, 0);
f5Base.rotation.set( 0, 0, THREE.MathUtils.degToRad(-60));
f5Base.updateMatrixWorld();

let g = createPhalanxGeom(1, 3);
let m = new THREE.MeshBasicMaterial({color: "aqua", wireframe: false, envMap: textureCube});
let o = new THREE.InstancedMesh(g, m, phalanxes.length);
phalanxes.forEach( (ph, i) => {
  ph.updateMatrixWorld();
  o.setMatrixAt(i, ph.matrixWorld);
})
scene.add(o);

window.addEventListener( 'resize', onWindowResize, false );

let t = new TWEEN.Tween({value: Math.PI * 0.075})
  .to({value: Math.PI * 0.45}, 4000)
  .easing(TWEEN.Easing.Quadratic.InOut)
  .repeat(Infinity)
  .yoyo(true)
  .onUpdate(val => {
    phalanxes.forEach((ph, i) => {
      ph.rotation.x = val.value;
      ph.updateMatrixWorld();
      o.setMatrixAt(i, ph.matrixWorld)
    });
    o.instanceMatrix.needsUpdate = true;
  });
t.start();

let clock = new THREE.Clock();

renderer.setAnimationLoop(() => {
  let t = clock.getElapsedTime();
  TWEEN.update();
  uniforms.time.value = t;
  base.rotation.x = (Math.sin(t * 0.125) * 0.5 + 0.5) * -Math.PI * 0.5;
  base.rotation.y = -t * 0.125;
  renderer.render(scene, camera);
});

function onWindowResize() {
  camera.aspect = innerWidth / innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize( innerWidth, innerHeight );
}

function createFinger(phalanx, scale, isThumb){
  phalanxes.push(phalanx);
  let current = phalanx;
  for(let i = 0; i < (isThumb ? 1 : 2); i++){
    let p = new THREE.Object3D();
    p.position.y = 3;
    p.scale.setScalar(0.85);
    current.add(p);
    phalanxes.push(p);
    current = p;
  }
  phalanx.scale.setScalar(scale);
  return phalanx;
}

function createPhalanxGeom(R, L){
  let r = R * 0.85;
  let R1 = R - r;
  let a = Math.asin(R1 / L);
  let path = new THREE.Path();
  path.absarc(0, 0, R, Math.PI * 1.5, a);
  path.absarc(0, L, r, a, Math.PI * 0.5);
  let pts = path.getPoints(5);
  let g = new THREE.LatheBufferGeometry(pts);
  return g;
}

function generateCubeMap(){
  let images = [];
  let c = document.createElement("canvas");
  c.width = 4;
  c.height = c.width;
  let ctx = c.getContext("2d");
  for(let i = 0; i < 6; i++){
    ctx.fillStyle = "#fff";
    ctx.fillRect(0, 0, c.width, c.height);
    for(let j = 0; j < (c.width * c.height) / 2; j++){
      ctx.fillStyle = Math.random() < 0.5 ? "#f0f" : "#40f";
      ctx.fillRect(
        Math.floor(Math.random() * c.width),
        Math.floor(Math.random() * c.height),
        2, 1
      );
    }
    images.push(c.toDataURL());
  }
  let cm = new THREE.CubeTextureLoader().load(images);
  console.log(cm);
  return cm;
}
</script>
flexudy/t5-small-wav2vec2-grammar-fixer
flexudy
2021-02-16T01:56:40Z
131,235
12
transformers
[ "transformers", "pytorch", "tf", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
# t5-small-wav2vec2-grammar-fixer

After transcribing your audio with Wav2Vec2, you might be interested in a post-processor.

All training paragraphs had at most 128 tokens (separated by white spaces).

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "flexudy/t5-small-wav2vec2-grammar-fixer"

tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

sent = """GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS"""

input_text = "fix: { " + sent + " } </s>"

input_ids = tokenizer.encode(input_text, return_tensors="pt", max_length=256, truncation=True, add_special_tokens=True)

outputs = model.generate(
    input_ids=input_ids,
    max_length=256,
    num_beams=4,
    repetition_penalty=1.0,
    length_penalty=1.0,
    early_stopping=True
)

sentence = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)

print(f"{sentence}")
```

INPUT 1:
```
WHEN ARE YOU COMING TOMORROW I AM ASKING BECAUSE OF THE MONEY YOU OWE ME PLEASE GIVE IT TO ME I AM WAITING YOU HAVE BEEN AVOIDING ME SINCE TWO THOUSAND AND THREE
```
OUTPUT 1:
```
When are you coming tomorrow? I am asking because of the money you owe me, please give it to me. I am waiting. You have been avoiding me since 2003.
```

INPUT 2:
```
GOING ALONG SLUSHY COUNTRY ROADS AND SPEAKING TO DAMP AUDIENCES IN DRAUGHTY SCHOOL ROOMS DAY AFTER DAY FOR A FORTNIGHT HE'LL HAVE TO PUT IN AN APPEARANCE AT SOME PLACE OF WORSHIP ON SUNDAY MORNING AND HE CAN COME TO US IMMEDIATELY AFTERWARDS
```
OUTPUT 2:
```
Going along Slushy Country Roads and speaking to Damp audiences in Draughty School rooms day after day for a fortnight, he'll have to put in an appearance at some place of worship on Sunday morning and he can come to us immediately afterwards.
```

I strongly recommend improving the performance via further fine-tuning or by training on more examples.

- Possible quick rule-based improvement: align the transcribed version and the generated version. If the similarity of two words (case-insensitive) varies by more than some threshold based on some similarity metric (e.g. Levenshtein), then keep the transcribed word.
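A minimal sketch of that rule-based post-check (an addition, not shipped with the model), using the standard library's `difflib` ratio in place of Levenshtein; the naive positional alignment and the 0.5 threshold are illustrative assumptions:

```python
import difflib

def keep_transcribed_when_diverging(transcribed: str, generated: str, threshold: float = 0.5) -> str:
    """If a generated word drifts too far from the aligned transcribed word, keep the transcribed one."""
    src = transcribed.lower().split()
    out = generated.split()
    fixed = []
    for i, word in enumerate(out):
        if i < len(src):
            # Compare words case-insensitively, ignoring the punctuation the fixer adds.
            sim = difflib.SequenceMatcher(None, word.lower().strip(".,?!"), src[i]).ratio()
            fixed.append(word if sim >= threshold else src[i])
        else:
            fixed.append(word)
    return " ".join(fixed)

print(keep_transcribed_when_diverging(
    "WHEN ARE YOU COMING TOMORROW",
    "When are you coming tomorrow?",
))
```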
CouchCat/ma_sa_v7_distil
CouchCat
2021-02-15T23:19:57Z
13
2
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "sentiment-analysis", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
---
language: en
license: mit
tags:
- sentiment-analysis
widget:
- text: "I am disappointed in the terrible quality of my dress"
---

### Description

A Sentiment Analysis model trained on customer feedback data using DistilBert.
Possible sentiments are:
* negative
* neutral
* positive

### Usage

```
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_sa_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_sa_v7_distil")
```
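Continuing from the snippet above (an addition to the card), the logits can be mapped to the three sentiments; the label order is read from the model's own config rather than assumed:

```python
import torch

inputs = tokenizer("I am disappointed in the terrible quality of my dress", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # assumes a transformers version returning ModelOutput

probs = logits.softmax(dim=-1).squeeze(0)
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```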
tner/xlm-roberta-large-uncased-wnut2017
tner
2021-02-13T00:12:33Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-wnut2017")
```
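Beyond loading the weights, a short end-to-end sketch (an addition to the card; the same pattern applies to the other TNER checkpoints listed below) runs the tagger through the NER pipeline. `aggregation_strategy` requires a reasonably recent transformers version:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="asahi417/tner-xlm-roberta-large-uncased-wnut2017",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("jacob collier lives in london"))
```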
asahi417/tner-xlm-roberta-large-uncased-ontonotes5
asahi417
2021-02-13T00:12:08Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-ontonotes5")
```
tner/xlm-roberta-large-panx-dataset-ja
tner
2021-02-13T00:11:28Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ja")
```
tner/xlm-roberta-large-conll2003
tner
2021-02-13T00:11:10Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-conll2003")
```
tner/xlm-roberta-base-uncased-panx-dataset-en
tner
2021-02-13T00:10:50Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-panx-dataset-en")
```
asahi417/tner-xlm-roberta-large-ontonotes5
asahi417
2021-02-13T00:10:15Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-ontonotes5")
```
tner/xlm-roberta-base-uncased-bc5cdr
tner
2021-02-13T00:08:23Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-bc5cdr")
```
tner/xlm-roberta-base-uncased-conll2003
tner
2021-02-13T00:08:16Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-conll2003")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-conll2003")
```
asahi417/tner-xlm-roberta-base-uncased-ontonotes5
asahi417
2021-02-13T00:08:01Z
287
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-ontonotes5")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-ontonotes5")
```
tner/xlm-roberta-base-bc5cdr
tner
2021-02-13T00:06:56Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-bc5cdr")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-bc5cdr")
```
tner/xlm-roberta-large-wnut2017
tner
2021-02-13T00:06:30Z
55
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-wnut2017")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-wnut2017")
```
tner/xlm-roberta-large-uncased-panx-dataset-en
tner
2021-02-13T00:06:19Z
3
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-panx-dataset-en")
```
tner/xlm-roberta-large-uncased-mit-restaurant
tner
2021-02-13T00:06:06Z
4
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-mit-restaurant")
```
tner/xlm-roberta-large-uncased-bionlp2004
tner
2021-02-13T00:05:40Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bionlp2004")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-uncased-bionlp2004")
```
tner/xlm-roberta-large-panx-dataset-ko
tner
2021-02-13T00:05:08Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ko")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-panx-dataset-ko")
```
tner/xlm-roberta-large-fin
tner
2021-02-13T00:04:30Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-large-fin")
```
tner/xlm-roberta-base-uncased-all-english
tner
2021-02-12T23:35:06Z
7
1
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-all-english")
```
tner/xlm-roberta-base-panx-dataset-ko
tner
2021-02-12T23:34:47Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ko")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-ko")
```
tner/xlm-roberta-base-panx-dataset-es
tner
2021-02-12T23:34:35Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-panx-dataset-es")
```
tner/xlm-roberta-base-fin
tner
2021-02-12T23:33:59Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-fin")
```
asahi417/tner-xlm-roberta-base-all-english
asahi417
2021-02-12T23:31:37Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
# XLM-RoBERTa for NER

XLM-RoBERTa fine-tuned on NER. Check more details at the [TNER repository](https://github.com/asahi417/tner).

## Usage

```
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-all-english")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-all-english")
```
Musixmatch/umberto-commoncrawl-cased-v1
Musixmatch
2021-02-12T11:31:59Z
16,559
14
transformers
[ "transformers", "pytorch", "camembert", "fill-mask", "it", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: it
---

# UmBERTo Commoncrawl Cased

[UmBERTo](https://github.com/musixmatchresearch/umberto) is a Roberta-based Language Model trained on large Italian Corpora and uses two innovative approaches: SentencePiece and Whole Word Masking. Now available at [github.com/huggingface/transformers](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1)

<p align="center">
  <img src="https://user-images.githubusercontent.com/7140210/72913702-d55a8480-3d3d-11ea-99fc-f2ef29af4e72.jpg" width="700">
  </br>
  Marco Lodola, Monument to Umberto Eco, Alessandria 2019
</p>

## Dataset

UmBERTo-Commoncrawl-Cased utilizes the Italian subcorpus of [OSCAR](https://traces1.inria.fr/oscar/) as the training set of the language model. We used the deduplicated version of the Italian corpus, which consists of 70 GB of plain text data, 210M sentences with 11B words, where the sentences have been filtered and shuffled at line level in order to be used for NLP research.

## Pre-trained model

| Model | WWM | Cased | Tokenizer | Vocab Size | Train Steps | Download |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| `umberto-commoncrawl-cased-v1` | YES | YES | SPM | 32K | 125k | [Link](http://bit.ly/35zO7GH) |

This model was trained with [SentencePiece](https://github.com/google/sentencepiece) and Whole Word Masking.

## Downstream Tasks

These results refer to the umberto-commoncrawl-cased model. All details are at the [Umberto](https://github.com/musixmatchresearch/umberto) Official Page.

#### Named Entity Recognition (NER)

| Dataset | F1 | Precision | Recall | Accuracy |
| ------ | ------ | ------ | ------ | ------ |
| **ICAB-EvalITA07** | **87.565** | 86.596 | 88.556 | 98.690 |
| **WikiNER-ITA** | **92.531** | 92.509 | 92.553 | 99.136 |

#### Part of Speech (POS)

| Dataset | F1 | Precision | Recall | Accuracy |
| ------ | ------ | ------ | ------ | ------ |
| **UD_Italian-ISDT** | 98.870 | 98.861 | 98.879 | **98.977** |
| **UD_Italian-ParTUT** | 98.786 | 98.812 | 98.760 | **98.903** |

## Usage

##### Load UmBERTo with AutoModel, Autotokenizer:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")
umberto = AutoModel.from_pretrained("Musixmatch/umberto-commoncrawl-cased-v1")

encoded_input = tokenizer.encode("Umberto Eco è stato un grande scrittore")
input_ids = torch.tensor(encoded_input).unsqueeze(0)  # Batch size 1
outputs = umberto(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output
```

##### Predict masked token:

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="Musixmatch/umberto-commoncrawl-cased-v1",
    tokenizer="Musixmatch/umberto-commoncrawl-cased-v1"
)

result = fill_mask("Umberto Eco è <mask> un grande scrittore")
# {'sequence': '<s> Umberto Eco è considerato un grande scrittore</s>', 'score': 0.18599839508533478, 'token': 5032}
# {'sequence': '<s> Umberto Eco è stato un grande scrittore</s>', 'score': 0.17816807329654694, 'token': 471}
# {'sequence': '<s> Umberto Eco è sicuramente un grande scrittore</s>', 'score': 0.16565583646297455, 'token': 2654}
# {'sequence': '<s> Umberto Eco è indubbiamente un grande scrittore</s>', 'score': 0.0932890921831131, 'token': 17908}
# {'sequence': '<s> Umberto Eco è certamente un grande scrittore</s>', 'score': 0.054701317101716995, 'token': 5269}
```

## Citation

All of the original datasets are publicly available or were released with the owners' grant. The datasets are all released under a CC0 or CCBY license.

* UD Italian-ISDT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ISDT)
* UD Italian-ParTUT Dataset [Github](https://github.com/UniversalDependencies/UD_Italian-ParTUT)
* I-CAB (Italian Content Annotation Bank), EvalITA [Page](http://www.evalita.it/)
* WIKINER [Page](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500), [Paper](https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub)

```
@inproceedings{magnini2006annotazione,
  title = {Annotazione di contenuti concettuali in un corpus italiano: I-CAB},
  author = {Magnini, Bernardo and Cappelli, Amedeo and Pianta, Emanuele and Speranza, Manuela and Bartalesi Lenzi, V and Sprugnoli, Rachele and Romano, Lorenza and Girardi, Christian and Negri, Matteo},
  booktitle = {Proc. of SILFI 2006},
  year = {2006}
}
@inproceedings{magnini2006cab,
  title = {I-CAB: the Italian Content Annotation Bank.},
  author = {Magnini, Bernardo and Pianta, Emanuele and Girardi, Christian and Negri, Matteo and Romano, Lorenza and Speranza, Manuela and Lenzi, Valentina Bartalesi and Sprugnoli, Rachele},
  booktitle = {LREC},
  pages = {963--968},
  year = {2006},
  organization = {Citeseer}
}
```

## Authors

**Loreto Parisi**: `loreto at musixmatch dot com`, [loretoparisi](https://github.com/loretoparisi)
**Simone Francia**: `simone.francia at musixmatch dot com`, [simonefrancia](https://github.com/simonefrancia)
**Paolo Magnani**: `paul.magnani95 at gmail dot com`, [paulthemagno](https://github.com/paulthemagno)

## About Musixmatch AI

![Musixmatch AI mac app icon-128](https://user-images.githubusercontent.com/163333/72244273-396aa380-35ee-11ea-894b-4ea48230c02b.png)

We do Machine Learning and Artificial Intelligence @[musixmatch](https://twitter.com/Musixmatch)
Follow us on [Twitter](https://twitter.com/musixmatchai) [Github](https://github.com/musixmatchresearch)
byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp
byan
2021-02-11T21:30:57Z
5
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---

## Example ESPnet2 ASR model

### `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best`

♻️ Imported from https://zenodo.org/record/3966501

This model was trained by Shinji Watanabe using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
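Since the demo block above is still a placeholder, here is a hedged sketch of the usual ESPnet2 inference path via `espnet_model_zoo`; the model string and its availability in the model zoo are assumptions, so treat this as a template rather than a verified recipe:

```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
# Assumed zoo identifier — substitute the name this checkpoint is registered under.
speech2text = Speech2Text(**d.download_and_unpack(
    "Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best"
))

speech, rate = soundfile.read("sample.wav")  # 16 kHz mono audio
text, *_ = speech2text(speech)[0]  # best hypothesis of the n-best list
print(text)
```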
microsoft/deberta-xxlarge-v2-mnli
microsoft
2021-02-11T02:05:00Z
20
0
transformers
[ "transformers", "pytorch", "deberta-v2", "deberta", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

## This model is DEPRECATED, please use [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli)
microsoft/deberta-xlarge-v2
microsoft
2021-02-11T02:04:50Z
24
0
transformers
[ "transformers", "pytorch", "deberta-v2", "deberta", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

## This model is DEPRECATED, please use [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)
microsoft/deberta-xlarge-v2-mnli
microsoft
2021-02-11T02:04:40Z
15
0
transformers
[ "transformers", "pytorch", "deberta-v2", "deberta", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: en
tags: deberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

## This model is DEPRECATED, please use [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli)
valhalla/longformer-base-4096-finetuned-squadv1
valhalla
2021-02-10T16:35:40Z
513
22
transformers
[ "transformers", "pytorch", "tf", "rust", "longformer", "question-answering", "dataset:squad_v1", "arxiv:2004.05150", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
datasets:
- squad_v1
license: mit
---

# LONGFORMER-BASE-4096 fine-tuned on SQuAD v1

This is the longformer-base-4096 model fine-tuned on the SQuAD v1 dataset for the question answering task.

[Longformer](https://arxiv.org/abs/2004.05150) model created by Iz Beltagy, Matthew E. Peters, Arman Cohan from AllenAI. As the paper explains it

> `Longformer` is a BERT-like model for long documents.

The pre-trained model can handle sequences with up to 4096 tokens.

## Model Training

This model was trained on a Google Colab V100 GPU. You can find the fine-tuning colab here [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1zEl5D-DdkBKva-DdreVOmN0hrAfzKG1o?usp=sharing).

A few things to keep in mind while training Longformer for the QA task: by default Longformer uses sliding-window local attention on all tokens, but for QA all question tokens should have global attention. For more details on this please refer to the paper. The `LongformerForQuestionAnswering` model automatically does that for you. To allow it to do that:

1. The input sequence must have three sep tokens, i.e. the sequence should be encoded like this: `<s> question</s></s> context</s>`. If you encode the question and context as an input pair, then the tokenizer already takes care of that and you shouldn't worry about it.
2. `input_ids` should always be a batch of examples.

## Results

| Metric | # Value |
|-------------|---------|
| Exact Match | 85.1466 |
| F1 | 91.5415 |

## Model in Action 🚀

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done ?"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]

# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]

start_scores, end_scores = model(input_ids, attention_mask=attention_mask)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())

answer_tokens = all_tokens[torch.argmax(start_scores):torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
# output => democratized NLP
```

The `LongformerForQuestionAnswering` isn't yet supported in `pipeline`. I'll update this card once the support has been added.

> Created with ❤️ by Suraj Patil [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/patil-suraj/) [![Twitter icon](https://cdn0.iconfinder.com/data/icons/shift-logotypes/32/Twitter-32.png)](https://twitter.com/psuraj28)
byan/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp
byan
2021-02-09T04:09:12Z
5
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---

## Example ESPnet2 ASR model

### `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best`

♻️ Imported from https://zenodo.org/record/3966501

This model was trained by Shinji Watanabe using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).

### Demo: How to use in ESPnet2

```python
# coming soon
```

### Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
  title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
  author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
  booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={7654--7658},
  year={2020},
  organization={IEEE}
}
```

or arXiv:

```bibtex
@misc{watanabe2018espnet,
  title={ESPnet: End-to-End Speech Processing Toolkit},
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  year={2018},
  eprint={1804.00015},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
astarostap/distilbert-cased-antisemitic-tweets
astarostap
2021-02-08T15:03:10Z
16
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
license: mit
widget:
- text: "Jews run the world."
---

This model takes a tweet with the word "jew" in it, and determines if it's antisemitic.

*Training data:*

This model was trained on 4k tweets, where ~50% were labeled as antisemitic. I labeled them myself based on personal experience and knowledge about common antisemitic tropes.

*Note:*

The goal for this model is not to be used as a final say on what is or is not antisemitic, but rather as a first pass on what might be antisemitic and should be reviewed by human experts.

Please keep in mind that I'm not an expert on antisemitism or hate speech. Whether something is antisemitic or not depends on the context, as for any hate speech, and everyone has a different definition for what is hate speech.

If you would like to collaborate on antisemitism detection, please feel free to contact me at [email protected]

This model is not ready for production; it needs more evaluation and more training data.
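A usage sketch (an addition to the card), keeping in mind the note above that outputs are a first pass for human review; the exact label strings come from the model config:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="astarostap/distilbert-cased-antisemitic-tweets",
)
print(clf("Jews run the world."))  # e.g. [{'label': ..., 'score': ...}]
```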
cahya/distilbert-base-indonesian
cahya
2021-02-08T09:06:09Z
1,651
14
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "id", "dataset:wikipedia", "dataset:id_newspapers_2018", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
---
language: "id"
license: "mit"
datasets:
- wikipedia
- id_newspapers_2018
widget:
- text: "ayahku sedang bekerja di sawah untuk [MASK] padi."
---

# Indonesian DistilBERT base model (uncased)

## Model description

This model is a distilled version of the [Indonesian BERT base model](https://huggingface.co/cahya/bert-base-indonesian-1.5G). This model is uncased.

This is one of several other language models that have been pre-trained with Indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers)

## Intended uses & limitations

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cahya/distilbert-base-indonesian')
>>> unmasker("Ayahku sedang bekerja di sawah untuk [MASK] padi")

[
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk menanam padi [SEP]",
    "score": 0.6853187084197998,
    "token": 12712,
    "token_str": "menanam"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk bertani padi [SEP]",
    "score": 0.03739545866847038,
    "token": 15484,
    "token_str": "bertani"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk memetik padi [SEP]",
    "score": 0.02742469497025013,
    "token": 30338,
    "token_str": "memetik"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk penggilingan padi [SEP]",
    "score": 0.02214187942445278,
    "token": 28252,
    "token_str": "penggilingan"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk tanam padi [SEP]",
    "score": 0.0185895636677742,
    "token": 11308,
    "token_str": "tanam"
  }
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import DistilBertTokenizer, DistilBertModel

model_name = 'cahya/distilbert-base-indonesian'
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
model = DistilBertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in Tensorflow:

```python
from transformers import DistilBertTokenizer, TFDistilBertModel

model_name = 'cahya/distilbert-base-indonesian'
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
model = TFDistilBertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

This model was distilled with 522MB of Indonesian Wikipedia and 1GB of [Indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018). The texts are lowercased and tokenized using WordPiece and a vocabulary size of 32,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
acul3/mt5-translate-en-id
acul3
2021-01-25T12:40:58Z
10
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "translation", "id", "dataset:OPUS", "dataset:CC-aligned", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
---
tags:
- translation
language: "id"
license: "mit"
datasets:
- OPUS
- CC-aligned
widget:
- text: "I love you"
---

## MT5-Large-Translate-en-id

## Prefix use

Use the prefix "translate:" before the input to generate the translation, e.g. "translate: i love you"

## Training data

Opus (OpenSubtitles and WikiMatrix)
CCAligned (en-id sentence pairs)
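A minimal inference sketch (an addition to the card) applying the prefix convention described above; the Auto classes are assumed to resolve this MT5 checkpoint, and the decoded output is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "acul3/mt5-translate-en-id"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("translate: I love you", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```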
aychang/distilbert-squad
aychang
2021-01-25T08:37:16Z
0
0
null
[ "question-answering", "torchscript", "FastNN", "en", "dataset:squad", "license:mit", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language:
- en
thumbnail:
tags:
- question-answering
- torchscript
- FastNN
license: mit
datasets:
- squad
metrics:
---

# TorchScript model of distilbert-squad

## Model description

A serialized torchscript model of distilbert-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
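The card describes the artifact but not how to load it outside Triton; a hedged local-loading sketch follows (the file name `model.pt` and the QA-style input shapes are assumptions — inside Triton, the config.pbtxt declares these instead):

```python
import torch

model = torch.jit.load("model.pt")  # assumed artifact name
model.eval()

# Assumed QA-style inputs: token ids and attention mask, sequence length 128.
input_ids = torch.ones(1, 128, dtype=torch.long)
attention_mask = torch.ones(1, 128, dtype=torch.long)

with torch.no_grad():
    outputs = model(input_ids, attention_mask)
```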
aychang/fasterrcnn-resnet50-cpu
aychang
2021-01-25T08:29:49Z
0
1
null
[ "object-detection", "torchscript", "FastNN", "en", "dataset:coco", "license:mit", "region:us" ]
object-detection
2022-03-02T23:29:05Z
---
language:
- en
thumbnail:
tags:
- object-detection
- torchscript
- FastNN
license: mit
datasets:
- coco
metrics:
---

# TorchScript model of faster-rcnn

## Model description

A serialized torchscript model of [faster-rcnn](https://pytorch.org/vision/stable/models.html#faster-r-cnn) with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
aychang/bert-large-cased-whole-word-masking-finetuned-squad
aychang
2021-01-25T08:04:52Z
0
0
null
[ "question-answering", "torchscript", "FastNN", "en", "dataset:squad", "license:mit", "region:us" ]
question-answering
2022-03-02T23:29:05Z
---
language:
- en
thumbnail:
tags:
- question-answering
- torchscript
- FastNN
license: mit
datasets:
- squad
metrics:
---

# TorchScript model of bert-large-cased-whole-word-masking-finetuned-squad

## Model description

A serialized torchscript model of bert-large-cased-whole-word-masking-finetuned-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
hfl/chinese-legal-electra-small-discriminator
hfl
2021-01-22T05:19:55Z
1
1
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
---

# This model is specifically designed for the legal domain.

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resources or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
  title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
  author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
  pages = "657--668",
}
```
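The card does not show a loading example; a hedged sketch (an addition) loads the discriminator for feature extraction with the generic Electra classes — whether `ElectraForPreTraining` suits your use case better depends on the task, and the example sentence is illustrative legal-domain Chinese:

```python
from transformers import AutoTokenizer, ElectraModel

model_id = "hfl/chinese-legal-electra-small-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraModel.from_pretrained(model_id)

inputs = tokenizer("甲方应于合同签订后十日内支付首期款项。", return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)
```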
hfl/chinese-legal-electra-large-discriminator
hfl
2021-01-22T05:19:50Z
57
4
transformers
[ "transformers", "pytorch", "tf", "electra", "zh", "arxiv:2004.13922", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- zh
license: "apache-2.0"
---

# This model is specifically designed for the legal domain.

## Chinese ELECTRA

Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 of the parameters of BERT and its variants.

This project is based on the official code of ELECTRA: [https://github.com/google-research/electra](https://github.com/google-research/electra)

You may also be interested in:

- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation

If you find our resources or paper useful, please consider including the following citation in your paper.

- https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
  title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
  author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
  pages = "657--668",
}
```
kykim/funnel-kor-base
kykim
2021-01-22T01:56:37Z
11
1
transformers
[ "transformers", "pytorch", "tf", "funnel", "feature-extraction", "ko", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
---
language: ko
---

# Funnel-transformer base model for Korean

* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)

```python
from transformers import FunnelTokenizer, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("kykim/funnel-kor-base")
model = FunnelModel.from_pretrained("kykim/funnel-kor-base")
```
kykim/electra-kor-base
kykim
2021-01-22T00:28:50Z
2,985
2
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "ko", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language: ko
---

# Electra base model for Korean

* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)

```python
from transformers import ElectraTokenizerFast, ElectraModel

tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base")
model = ElectraModel.from_pretrained("kykim/electra-kor-base")
```