modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ncduy/bert-base-cased-finetuned-emotion | 365b205f05d81e52aa139dfc9e5da84d8146e05d | 2021-12-09T10:30:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | ncduy | null | ncduy/bert-base-cased-finetuned-emotion | 10 | 1 | transformers | 11,700 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: bert-base-cased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: F1
type: f1
value: 0.9365323747830425
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-emotion
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1342
- F1: 0.9365
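For quick experimentation, a minimal usage sketch (not part of the original card; the input sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="ncduy/bert-base-cased-finetuned-emotion")

# Returns a list like [{'label': ..., 'score': ...}] with the predicted emotion label
print(classifier("I can't wait to see the results of this experiment!"))
```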
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7357 | 1.0 | 250 | 0.2318 | 0.9224 |
| 0.1758 | 2.0 | 500 | 0.1679 | 0.9349 |
| 0.1228 | 3.0 | 750 | 0.1385 | 0.9382 |
| 0.0961 | 4.0 | 1000 | 0.1452 | 0.9340 |
| 0.0805 | 5.0 | 1250 | 0.1342 | 0.9365 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nepp1d0/Bert-pretrained-proteinBindingDB | ddc5215e99986e8b00bd32a5d30808bd7e938693 | 2022-02-04T16:12:07.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | nepp1d0 | null | nepp1d0/Bert-pretrained-proteinBindingDB | 10 | null | transformers | 11,701 | Entry not found |
nielsr/canine-s | 09f9a6e82343ab8a7bb471bd4c826429c7233bb7 | 2021-06-05T09:33:04.000Z | [
"pytorch",
"canine",
"feature-extraction",
"transformers"
]
| feature-extraction | false | nielsr | null | nielsr/canine-s | 10 | null | transformers | 11,702 | Entry not found |
nielsr/detr-resnet-50-new | 8054ee0bd98b101ef18c36d06594cc416d89e198 | 2021-02-09T10:27:09.000Z | [
"pytorch",
"detr",
"object-detection",
"transformers"
]
| object-detection | false | nielsr | null | nielsr/detr-resnet-50-new | 10 | null | transformers | 11,703 | Entry not found |
ontocord/mt5-fix-asr-vietnamese | 2a54621c5632eaa2a08d51c7714da756d55cbc6e | 2021-06-23T15:21:52.000Z | [
"pytorch",
"jax",
"mt5",
"vi",
"transformers",
"language-modeling",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | ontocord | null | ontocord/mt5-fix-asr-vietnamese | 10 | null | transformers | 11,704 | ---
language: vi
datasets:
- common_voice
- FOSD: https://data.mendeley.com/datasets/k9sxg2twv4/4
metrics:
- wer
tags:
- language-modeling
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: MT5 Fix Asr Vietnamese by Ontocord
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: 25.207182
---
# Ontocord/mt5-fix-asr-vietnamese
Fine-tuned mt5 to correct output of an ASR model trained on [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) which was trained on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), and [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4).
## Usage
The model can be used directly by submitting Vietnamese ASR text, but it is best used together with the ontocord/wav2vec2-large-xlsr-vietnamese model.
```
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, pipelines
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ontocord/wav2vec2-large-xlsr-53-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("ontocord/wav2vec2-large-xlsr-53-vietnamese").to(device)
mt5 = pipelines.pipeline("text2text-generation","ontocord/mt5-fix-asr-vietnamese", device=0 if device == "cuda" else -1)
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", [aHash['generated_text'] for aHash in mt5(processor.batch_decode(predicted_ids), max_length=100)])
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Vietnamese test data of Common Voice.
```
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, pipelines
import re
test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("ontocord/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("ontocord/wav2vec2-large-xlsr-vietnamese").to(device)
mt5 = pipelines.pipeline("text2text-generation","ontocord/mt5-fix-asr-vietnamese", device=0 if device == "cuda" else -1)
chars_to_ignore_regex = '[\\\+\@\ǀ\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# you may also want to use the decode_string from https://huggingface.co/Nhut/wav2vec2-large-xlsr-vietnamese
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
max_length = int(pred_ids.size()[1])
txt = [aHash['generated_text'].strip() for aHash in mt5(processor.batch_decode(pred_ids), max_length=max_length)]
batch["pred_strings"] = txt
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 25.207182
## Training
The Common Voice train, validation, and FPT datasets were used for training.
The script used for training can be found here # TODO |
osanseviero/t5-finetuned-test | 66b207574c9e37d515e82cc73430f2ce5d88f685 | 2021-06-23T13:12:41.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"eng",
"dataset:Wikihow",
"transformers",
"wikihow",
"t5-small",
"lm-head",
"seq2seq",
"pipeline:summarization",
"summarization",
"autotrain_compatible"
]
| summarization | false | osanseviero | null | osanseviero/t5-finetuned-test | 10 | null | transformers | 11,705 | ---
language: "eng"
tags:
- wikihow
- t5-small
- pytorch
- lm-head
- seq2seq
- t5
- pipeline:summarization
- summarization
datasets:
- Wikihow
widget:
- max_length: 1
- text: "Lack of fluids can lead to dry mouth, which is a leading cause of bad breath. Water
can also dilute any chemicals in your mouth or gut that are causing bad breath., Studies show that
eating 6 ounces of yogurt a day reduces the level of odor-causing compounds in the mouth. In
particular, look for yogurt containing the active bacteria Streptococcus thermophilus or
Lactobacillus bulgaricus., The abrasive nature of fibrous fruits and vegetables helps to clean
teeth, while the vitamins, antioxidants, and acids they contain improve dental health.Foods that can
be particularly helpful include:Apples — Apples contain vitamin C, which is necessary for health
gums, as well as malic acid, which helps to whiten teeth.Carrots — Carrots are rich in vitamin A,
which strengthens tooth enamel.Celery — Chewing celery produces a lot of saliva, which helps to
neutralize bacteria that cause bad breath.Pineapples — Pineapples contain bromelain, an enzyme that
cleans the mouth., These teas have been shown to kill the bacteria that cause bad breath and
plaque., An upset stomach can lead to burping, which contributes to bad breath. Don’t eat foods that
upset your stomach, or if you do, use antacids. If you are lactose intolerant, try lactase tablets.,
They can all cause bad breath. If you do eat them, bring sugar-free gum or a toothbrush and
toothpaste to freshen your mouth afterwards., Diets low in carbohydrates lead to ketosis — a state
in which the body burns primarily fat instead of carbohydrates for energy. This may be good for your
waistline, but it also produces chemicals called ketones, which contribute to bad breath.To stop the
problem, you must change your diet. Or, you can combat the smell in one of these ways:Drink lots of
water to dilute the ketones.Chew sugarless gum or suck on sugarless mints.Chew mint leaves."
- text: " Bring 1/2 cup water to the boil.Add the fresh or dried rosemary to the water.Remove
from the heat. Set aside for 1/2 an hour to infuse. Added flavour can be released by pressing down
on the rosemary leaves with a spoon. Add the pieces to the blender or food processor with the
elderflower cordial. Blend or process to a purée.,, Add the lemon or lime juice and stir to
combine., Add a cover and place in the freezer.After 2 hours, remove from the freezer and break up
with a fork. This helps the ice crystals to form properly.Continue doing this every hour until the
granita freezes properly. Scoop the granita into dessert bowls and serve. Garnish with a cucumber
curl or a small sprig of rosemary."
metrics:
- Rouge1: 31.2
- RougeL: 24.5
---
# Model name
Wikihow T5-small
## Model description
This is a T5-small model trained on the WikiHow All dataset. The model was trained for 3 epochs using a batch size of 16 and a learning rate of 3e-4. The maximum input length is set to 512 tokens and the maximum output length to 150. The model attained a Rouge1 score of 31.2 and a RougeL score of 24.5.
We have written a blog post that covers the training procedure. Please find it [here](https://medium.com/@priya.dwivedi/fine-tuning-a-t5-transformer-for-any-summarization-task-82334c64c81).
## Usage
```
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("deep-learning-analytics/wikihow-t5-small")
model = AutoModelWithLMHead.from_pretrained("deep-learning-analytics/wikihow-t5-small")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
text = """"
Lack of fluids can lead to dry mouth, which is a leading cause of bad breath. Water
can also dilute any chemicals in your mouth or gut that are causing bad breath., Studies show that
eating 6 ounces of yogurt a day reduces the level of odor-causing compounds in the mouth. In
particular, look for yogurt containing the active bacteria Streptococcus thermophilus or
Lactobacillus bulgaricus., The abrasive nature of fibrous fruits and vegetables helps to clean
teeth, while the vitamins, antioxidants, and acids they contain improve dental health.Foods that can
be particularly helpful include:Apples — Apples contain vitamin C, which is necessary for health
gums, as well as malic acid, which helps to whiten teeth.Carrots — Carrots are rich in vitamin A,
which strengthens tooth enamel.Celery — Chewing celery produces a lot of saliva, which helps to
neutralize bacteria that cause bad breath.Pineapples — Pineapples contain bromelain, an enzyme that
cleans the mouth., These teas have been shown to kill the bacteria that cause bad breath and
plaque., An upset stomach can lead to burping, which contributes to bad breath. Don’t eat foods that
upset your stomach, or if you do, use antacids. If you are lactose intolerant, try lactase tablets.,
They can all cause bad breath. If you do eat them, bring sugar-free gum or a toothbrush and
toothpaste to freshen your mouth afterwards., Diets low in carbohydrates lead to ketosis — a state
in which the body burns primarily fat instead of carbohydrates for energy. This may be good for your
waistline, but it also produces chemicals called ketones, which contribute to bad breath.To stop the
problem, you must change your diet. Or, you can combat the smell in one of these ways:Drink lots of
water to dilute the ketones.Chew sugarless gum or suck on sugarless mints.Chew mint leaves.
"""
preprocess_text = text.strip().replace("\n", "")
tokenized_text = tokenizer.encode(preprocess_text, return_tensors="pt").to(device)
summary_ids = model.generate(
tokenized_text,
max_length=150,
num_beams=2,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print("\n\nSummarized text: \n", output)
```
|
pablouribe/bertstem-copus-supercategories-overfitted | 759657722af472135e42caa6c2babecb974b140e | 2022-01-18T05:35:17.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | pablouribe | null | pablouribe/bertstem-copus-supercategories-overfitted | 10 | null | transformers | 11,706 | Entry not found |
patrickvonplaten/unispeech-sat-large-timit-ft | db46bdc038d97381a0f4de80bd1c8188750268ee | 2021-10-21T16:38:43.000Z | [
"pytorch",
"tensorboard",
"unispeech-sat",
"automatic-speech-recognition",
"dataset:timit_asr",
"transformers",
"timit_asr",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | patrickvonplaten | null | patrickvonplaten/unispeech-sat-large-timit-ft | 10 | null | transformers | 11,707 | ---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: unispeech-sat-large-timit-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-sat-large-timit-ft
This model is a fine-tuned version of [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6074
- Wer: 0.3880
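A minimal inference sketch (not part of the original card; it assumes a local 16 kHz mono WAV file named `sample.wav`):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/unispeech-sat-large-timit-ft")

# Transcribe a local audio file (the path is a placeholder)
print(asr("sample.wav")["text"])
```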
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2516 | 0.69 | 100 | 5.8638 | 1.0 |
| 2.9596 | 1.38 | 200 | 2.9550 | 1.0 |
| 2.8831 | 2.07 | 300 | 2.8547 | 1.0 |
| 2.3223 | 2.76 | 400 | 2.2044 | 1.0063 |
| 1.2104 | 3.45 | 500 | 1.0845 | 0.7706 |
| 0.6779 | 4.14 | 600 | 0.7342 | 0.5663 |
| 0.6319 | 4.83 | 700 | 0.6054 | 0.4881 |
| 0.664 | 5.52 | 800 | 0.5808 | 0.4913 |
| 0.402 | 6.21 | 900 | 0.5647 | 0.4611 |
| 0.3176 | 6.9 | 1000 | 0.5211 | 0.4440 |
| 0.3392 | 7.59 | 1100 | 0.5187 | 0.4359 |
| 0.3888 | 8.28 | 1200 | 0.5501 | 0.4391 |
| 0.2874 | 8.97 | 1300 | 0.5249 | 0.4148 |
| 0.208 | 9.66 | 1400 | 0.5407 | 0.4152 |
| 0.1457 | 10.34 | 1500 | 0.5722 | 0.4155 |
| 0.2375 | 11.03 | 1600 | 0.5780 | 0.4059 |
| 0.2111 | 11.72 | 1700 | 0.5823 | 0.4094 |
| 0.1422 | 12.41 | 1800 | 0.5754 | 0.3977 |
| 0.125 | 13.1 | 1900 | 0.5784 | 0.4031 |
| 0.1996 | 13.79 | 2000 | 0.5630 | 0.3956 |
| 0.1747 | 14.48 | 2100 | 0.5880 | 0.3964 |
| 0.1263 | 15.17 | 2200 | 0.5987 | 0.3951 |
| 0.11 | 15.86 | 2300 | 0.5688 | 0.3964 |
| 0.1411 | 16.55 | 2400 | 0.6223 | 0.3906 |
| 0.1647 | 17.24 | 2500 | 0.6135 | 0.3960 |
| 0.1162 | 17.93 | 2600 | 0.6224 | 0.3960 |
| 0.098 | 18.62 | 2700 | 0.6017 | 0.3907 |
| 0.1183 | 19.31 | 2800 | 0.6121 | 0.3885 |
| 0.1717 | 20.0 | 2900 | 0.6074 | 0.3880 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps | 36f90d6c4e4c35af8234605c4a145f92817813a7 | 2021-10-25T13:15:45.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"transformers"
]
| null | false | patrickvonplaten | null | patrickvonplaten/wav2vec2-base-repro-960h-libri-85k-steps | 10 | null | transformers | 11,708 | https://wandb.ai/patrickvonplaten/test/reports/Wav2Vec2-Base--VmlldzoxMTUyODQ0?accessToken=rg6e8u9yizx964k8q47zctq1m4afpvtn1i3qi9exgdmzip6xwkfzvagfajpzj55n |
pdroberts/distilbert-base-uncased-finetuned-emotion | 8f4a96f6a7f98d626c9417f22618ef5db64c2782 | 2022-02-01T23:48:03.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | pdroberts | null | pdroberts/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,709 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
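For reference, a hedged sketch of how these hyperparameters could be expressed with `transformers.TrainingArguments` (the mapping and the output directory are assumptions, not taken from the card; the listed batch size is treated as per-device):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```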
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
pere/norwegian-roberta-base-highlr-512 | 74ab4706350b50b1a5db1fac888d5771b73aec73 | 2021-11-25T17:54:31.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | pere | null | pere/norwegian-roberta-base-highlr-512 | 10 | null | transformers | 11,710 | Same as norwegian-roberta-base but with higher learning rate and batch size |
persiannlp/mt5-small-parsinlu-multiple-choice | 3a03b0eea4e84e42a23490bb1f8a23d1a5af3371 | 2021-09-23T16:20:33.000Z | [
"pytorch",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"multiple-choice",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
]
| text2text-generation | false | persiannlp | null | persiannlp/mt5-small-parsinlu-multiple-choice | 10 | null | transformers | 11,711 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- multiple-choice
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---
# Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی)
This is a mT5-based model for multiple-choice question answering.
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین")
run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن")
run_model(
"زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
philschmid/finbert-pretrain-yiyanghkust | 90f3550f37ab0a2da55c43fce5b63b5d55b7c5f9 | 2021-11-05T14:00:02.000Z | [
"pytorch",
"tf",
"bert",
"fill-mask",
"en",
"arxiv:2006.08097",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | philschmid | null | philschmid/finbert-pretrain-yiyanghkust | 10 | 1 | transformers | 11,712 | ---
pipeline_tag: "fill-mask"
language: en
---
# This repository is a fork of [yiyanghkust/finbert-pretrain](https://huggingface.co/yiyanghkust/finbert-pretrain)
> All credits to [@yiyanghkust](https://huggingface.co/yiyanghkust).
I added the TensorFlow model and a proper `tokenizer.json`
---
`FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice. It is trained on the following three financial communication corpora. The total corpus size is 4.9B tokens.
- Corporate Reports 10-K & 10-Q: 2.5B tokens
- Earnings Call Transcripts: 1.3B tokens
- Analyst Reports: 1.1B tokens
More details on `FinBERT`'s pre-training process can be found at: https://arxiv.org/abs/2006.08097
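Since the checkpoint exposes a fill-mask head (see the pipeline tag above), a minimal usage sketch (not part of the original card; the masked sentence is illustrative) could look like:
```python
from transformers import pipeline

# Load the pre-trained checkpoint as a fill-mask pipeline
fill_mask = pipeline("fill-mask", model="philschmid/finbert-pretrain-yiyanghkust")

# Predict candidate tokens for the [MASK] position
print(fill_mask("The company reported a quarterly [MASK] of $1.2 billion."))
```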
`FinBERT` can be further fine-tuned on downstream tasks. Specifically, we have fine-tuned `FinBERT` on an analyst sentiment classification task, and the fine-tuned model is shared at https://huggingface.co/yiyanghkust/finbert-tone |
princeton-nlp/densephrases-multi-query-nq | 2b1f0008d31379ca6ea832b37da9d773229a5093 | 2021-09-20T17:41:23.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | princeton-nlp | null | princeton-nlp/densephrases-multi-query-nq | 10 | null | transformers | 11,713 | Entry not found |
pszemraj/distill-pegasus-CompMath | 7cc022e75db6b0242206d68683f8f4228ca62ae0 | 2022-02-06T16:43:06.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"en",
"dataset:competition_math",
"transformers",
"math",
"autotrain_compatible"
]
| text2text-generation | false | pszemraj | null | pszemraj/distill-pegasus-CompMath | 10 | null | transformers | 11,714 | ---
language: en
tags:
- math
- pegasus
datasets:
- competition_math
metrics:
- rouge
widget:
- text: "Michael scores a 95, 87, 85, 93, and a 94 on his first 5 math tests. If he wants a 90 average, what must he score on the final math test?"
example_title: "averaging"
- text: "If the sum of the smallest and largest of three consecutive even numbers is 28, what is the value of the second largest number in the series?"
example_title: "puzzle2"
- text: "Two inlet pipes lead into a large water tank. One pipe can fill the tank in 45 minutes; the other can fill it in 40 minutes. To the nearest tenth of a minute, how long would it take the two pipes together to fill the tank if both were opened at the same time?"
example_title: "patek water"
- text: "A football team lost 5 yards and then gained 9. What is the team's progress?"
example_title: "sportsball"
- text: "Half a number plus 5 is 11.What is the number?"
example_title: "half"
inference:
parameters:
max_length: 128
no_repeat_ngram_size: 4
length_penalty: 0.7
repetition_penalty: 3.1
num_beams : 4
early_stopping: True
---
# pegasus does math?
- testing to see how feasible seq2seq math problems are
- answer: at least with 2 epochs, it is not particularly feasible.
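A minimal sketch of how one might probe the model (not part of the original card; the generation settings mirror the widget's inference parameters above):
```python
from transformers import pipeline

# Load the checkpoint as a text2text-generation pipeline
solver = pipeline("text2text-generation", model="pszemraj/distill-pegasus-CompMath")

question = "Half a number plus 5 is 11. What is the number?"
print(solver(question, max_length=128, no_repeat_ngram_size=4, length_penalty=0.7,
             repetition_penalty=3.1, num_beams=4, early_stopping=True))
```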
|
ramybaly/ner_nerd_fine | 3b874114caefa4d18ef8dc0bb7cd959fb8e452f0 | 2021-08-20T19:01:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:nerd",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | false | ramybaly | null | ramybaly/ner_nerd_fine | 10 | null | transformers | 11,715 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- nerd
metrics:
- precision
- recall
- f1
- accuracy
model_index:
- name: ner_nerd_fine
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: nerd
type: nerd
args: nerd
metric:
name: Accuracy
type: accuracy
value: 0.9050232835369201
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_nerd_fine
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the nerd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3373
- Precision: 0.6326
- Recall: 0.6734
- F1: 0.6524
- Accuracy: 0.9050
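A minimal inference sketch (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

# Group word pieces into entity spans with the simple aggregation strategy
ner = pipeline("token-classification", model="ramybaly/ner_nerd_fine", aggregation_strategy="simple")

print(ner("Barack Obama visited the Eiffel Tower in Paris last summer."))
```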
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6219 | 1.0 | 8235 | 0.3347 | 0.6066 | 0.6581 | 0.6313 | 0.9015 |
| 0.3071 | 2.0 | 16470 | 0.3165 | 0.6349 | 0.6637 | 0.6490 | 0.9060 |
| 0.2384 | 3.0 | 24705 | 0.3311 | 0.6373 | 0.6769 | 0.6565 | 0.9068 |
| 0.1834 | 4.0 | 32940 | 0.3414 | 0.6349 | 0.6780 | 0.6557 | 0.9069 |
| 0.1392 | 5.0 | 41175 | 0.3793 | 0.6334 | 0.6775 | 0.6547 | 0.9068 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.2
|
rayschwartz/text-classification | 9eee493bab175680150af66643c4d19c215e6ade | 2021-10-14T12:56:13.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | rayschwartz | null | rayschwartz/text-classification | 10 | null | transformers | 11,716 | Entry not found |
reatiny/distilbert-base-uncased-finetuned-emotion | e2917d8975c79a4b5cb4055cf95430432d07f643 | 2022-02-14T07:44:51.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | reatiny | null | reatiny/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,717 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217811693486851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2226
- Accuracy: 0.9215
- F1: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3190 | 0.901 | 0.8979 |
| 0.2497 | 2.0 | 500 | 0.2226 | 0.9215 | 0.9218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.11.0
|
reichenbach/wav2vec2-large-xls-r-300m-pa-in | c4767f1897b2e428cbc919438617aa89ae7b4bb1 | 2022-03-23T18:28:40.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"pa",
"pa-IN",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | reichenbach | null | reichenbach/wav2vec2-large-xls-r-300m-pa-in | 10 | null | transformers | 11,718 | ---
license: apache-2.0
language:
- pa
- pa-IN
tags:
- generated_from_trainer
- hf-asr-leaderboard
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-pa-in
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pa-in
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9680
- Wer: 0.7283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 180
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.2615 | 24.97 | 400 | 3.4784 | 1.0 |
| 3.366 | 49.97 | 800 | 2.3662 | 0.9917 |
| 1.1678 | 74.97 | 1200 | 1.4806 | 0.7709 |
| 0.5496 | 99.97 | 1600 | 1.7166 | 0.7476 |
| 0.4101 | 124.97 | 2000 | 1.8473 | 0.7510 |
| 0.3317 | 149.97 | 2400 | 1.9177 | 0.7322 |
| 0.2956 | 174.97 | 2800 | 1.9680 | 0.7283 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
### Evaluation Results
- WER: 0.7539
- CER: 0.2928 |
saattrupdan/xlmr-base-texas-squad-de | 0d72dfa7c2d6f7dea1f67c85035f8e34089b36b1 | 2022-01-31T21:31:12.000Z | [
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| question-answering | false | saattrupdan | null | saattrupdan/xlmr-base-texas-squad-de | 10 | null | transformers | 11,719 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-de
results: []
widget:
- text: "Welche Ausbildung hatte Angela Merkel?"
context: "Angela Dorothea Merkel (geb. Kasner; * 17. Juli 1954 in Hamburg) ist eine deutsche Politikerin (CDU). Sie war vom 22. November 2005 bis zum 8. Dezember 2021 Bundeskanzlerin der Bundesrepublik Deutschland. Sie ist die achte Person, zugleich erste Frau, erste Person aus Ostdeutschland und erste Person, die nach der Gründung der Bundesrepublik geboren ist, die in dieses Amt gewählt wurde. Von April 2000 bis Dezember 2018 war sie Bundesvorsitzende der CDU. Merkel wuchs in der DDR auf und war dort als Physikerin am Zentralinstitut für Physikalische Chemie tätig. Erstmals politisch aktiv wurde sie während der Wendezeit in der Partei Demokratischer Aufbruch, die sich 1990 der CDU anschloss. In der ersten und gleichzeitig letzten demokratisch gewählten Regierung der DDR übte sie das Amt der stellvertretenden Regierungssprecherin aus."
---
# TExAS-SQuAD-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-de dataset.
It achieves the following results on the evaluation set:
- Exact match: 61.45%
- F1-score: 66.12%
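A minimal inference sketch (not part of the original card; it reuses a shortened version of the widget example above):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a question-answering pipeline
qa = pipeline("question-answering", model="saattrupdan/xlmr-base-texas-squad-de")

result = qa(
    question="Welche Ausbildung hatte Angela Merkel?",
    context="Angela Merkel wuchs in der DDR auf und war dort als Physikerin am "
            "Zentralinstitut für Physikalische Chemie tätig.",
)
print(result["answer"], result["score"])
```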
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8084 | 1.0 | 4233 | 1.5897 |
| 1.5696 | 2.0 | 8466 | 1.5478 |
| 1.4196 | 3.0 | 12699 | 1.5754 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
sail/poolformer_s36 | 1d740b215d07f480fa51efa16df83a63d5f6acf2 | 2022-04-08T07:48:39.000Z | [
"pytorch",
"poolformer",
"image-classification",
"dataset:imagenet",
"arxiv:2111.11418",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | sail | null | sail/poolformer_s36 | 10 | null | transformers | 11,720 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
---
# PoolFormer (S36 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator, pooling.
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_s36')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_s36')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| PoolFormer-S12 | 77.2 | 12M | https://huggingface.co/sail/poolformer_s12 |
| PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 |
| **PoolFormer-S36** | **81.4** | **31M** | **https://huggingface.co/sail/poolformer_s36** |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| PoolFormer-M48 | 82.5 | 73M | https://huggingface.co/sail/poolformer_m48 |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
``` |
sankhajay/bert-base-sinhala-qa | 168c5199a4e5ba0e47bfcc8e5039b7b46d7328cf | 2021-08-19T01:22:08.000Z | [
"pytorch",
"bert",
"question-answering",
"si",
"transformers",
"Sinhala",
"autotrain_compatible"
]
| question-answering | false | sankhajay | null | sankhajay/bert-base-sinhala-qa | 10 | 1 | transformers | 11,721 | ---
language: si
tags:
- Sinhala
widget:
- context: "ශ්රී ලංකාව යනු ඉන්දියානු සාගරයේ පිහිටි මනරම් දුපතකි."
text: "ශ්රී ලංකාව පිහිටා ඇත්තේ කොහෙද ?"
---
# bert-base-sinhala-qa
This is a BERT-based question-answering model for the Sinhala language. Training was done on a translated SQuAD dataset of 8k questions; the translation was produced with the Google Translate API. Evaluation is still to be done, and the model is still being fine-tuned. |
satyaalmasian/temporal_tagger_BERTCRF_tokenclassifier | ca4804f63efc8aa3598aff6015ba1d79d6c8c51e | 2021-09-21T11:30:36.000Z | [
"pytorch",
"bert",
"transformers"
]
| null | false | satyaalmasian | null | satyaalmasian/temporal_tagger_BERTCRF_tokenclassifier | 10 | null | transformers | 11,722 | # BERT based temporal tagged
Token classifier for temporal tagging of plain text using the BERT language model and CRFs. The model is introduced in the paper BERT got a Date: Introducing Transformers to Temporal Tagging and released in this [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. We use BERT for token classification to tag the tokens in text with classes:
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
On top of the BERT classification layer, we add a custom CRF layer. This is a variant of `satyaalmasian/temporal_tagger_BERT_tokenclassifier` with slightly better performance, but it cannot be used out of the box with Hugging Face models and needs the code from the accompanying [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Intended uses & limitations
This model is best used together with the code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output might be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output.
# How to use
you can load the model as follows:
```
tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_BERTCRF_tokenclassifier", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_BERTCRF_tokenclassifier")
```
for inference use:
```
processed_text = tokenizer(input_text, return_tensors="pt")
processed_text["inference_mode"]=True
result = model(**processed_text)
classification= result[0]
```
for an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
To further fine-tune, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
We use 3 data sources:
[Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), Wikiwars, Tweets datasets. For the correct data versions please refer to our [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Training procedure
The model is trained from publicly available checkpoints on Hugging Face (`bert-base-uncased`), with a batch size of 34. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 5 different random seeds; this version of the model corresponds to seed=19.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
satyaalmasian/temporal_tagger_German_GELECTRA | a523f786c63a5c0542e04d22f4b42364f33ec935 | 2022-02-10T15:23:51.000Z | [
"pytorch",
"electra",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | satyaalmasian | null | satyaalmasian/temporal_tagger_German_GELECTRA | 10 | 1 | transformers | 11,723 | # BERT based temporal tagged
Token classifier for temporal tagging of plain text using German Gelectra model.
# Model description
GELECTRA is a transformer (ELECTRA) model pretrained on a large corpus of German data in a self-supervised fashion. We use GELECTRA for token classification to tag the tokens in text with classes (tags follow the English TIMEX3 format):
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
# Intended uses & limitations
This model is best used together with the code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output might be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output. The repository's examples use the English models; the German model can be used the same way.
# How to use
you can load the model as follows:
```
tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA")
```
for inference use:
```
processed_text = tokenizer(input_text, return_tensors="pt")
result = model(**processed_text)
classification= result[0]
```
for an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
To further fine-tune, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
For pre-training, we use a large corpus of news articles automatically annotated with HeidelTime.
We use 2 data sources for fine-tuning:
[Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), automatically translated to German,
[KRAUTS dataset](https://github.com/JannikStroetgen/KRAUTS).
# Training procedure
The model is trained from publicly available checkpoints on Hugging Face (`deepset/gelectra-large`), with a batch size of 192. We use a learning rate of 1e-07 with an Adam optimizer and linear weight decay for pretraining.
For fine-tuning we use a batch size of 16. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 3 different random seeds; this version of the model corresponds to seed=7.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
sentence-transformers/xlm-r-bert-base-nli-mean-tokens | 869d294b8c105b4d423e24d4012603adad3ca01d | 2022-06-16T00:42:16.000Z | [
"pytorch",
"tf",
"xlm-roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
]
| sentence-similarity | false | sentence-transformers | null | sentence-transformers/xlm-r-bert-base-nli-mean-tokens | 10 | null | sentence-transformers | 11,724 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/xlm-r-bert-base-nli-mean-tokens
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/xlm-r-bert-base-nli-mean-tokens')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/xlm-r-bert-base-nli-mean-tokens')
model = AutoModel.from_pretrained('sentence-transformers/xlm-r-bert-base-nli-mean-tokens')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/xlm-r-bert-base-nli-mean-tokens)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
sgugger/test-ner | f339bcc4fe231c1b1065a911b02545cafab0825d | 2021-09-23T22:04:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | sgugger | null | sgugger/test-ner | 10 | null | transformers | 11,725 | Entry not found |
shiyue/roberta-large-realsumm | 5c925fdcfb3c50916b857e2330b7e27b6a78cac5 | 2021-09-22T02:45:08.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | shiyue | null | shiyue/roberta-large-realsumm | 10 | null | transformers | 11,726 | Entry not found |
sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rusentiment | c5c976d2a946e0b57ba2f0014f140b58c12f21ce | 2021-02-25T23:56:23.000Z | [
"pytorch",
"mbart",
"text-classification",
"ru",
"transformers",
"sentiment analysis",
"Russian"
]
| text-classification | false | sismetanin | null | sismetanin/mbart_ru_sum_gazeta-ru-sentiment-rusentiment | 10 | null | transformers | 11,727 | ---
language:
- ru
tags:
- sentiment analysis
- Russian
---
## MBARTRuSumGazeta-ru-sentiment-RuSentiment
MBARTRuSumGazeta-ru-sentiment-RuSentiment is a [MBARTRuSumGazeta](https://huggingface.co/IlyaGusev/mbart_ru_sum_gazeta) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte.
<table>
<thead>
<tr>
<th rowspan="4">Model</th>
<th rowspan="4">Score<br></th>
<th rowspan="4">Rank</th>
<th colspan="12">Dataset</th>
</tr>
<tr>
<td colspan="6">SentiRuEval-2016<br></td>
<td colspan="2" rowspan="2">RuSentiment</td>
<td rowspan="2">KRND</td>
<td rowspan="2">LINIS Crowd</td>
<td rowspan="2">RuTweetCorp</td>
<td rowspan="2">RuReviews</td>
</tr>
<tr>
<td colspan="3">TC</td>
<td colspan="3">Banks</td>
</tr>
<tr>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>micro F1</td>
<td>macro F1</td>
<td>F1</td>
<td>weighted</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
<td>F1</td>
</tr>
</thead>
<tbody>
<tr>
<td>SOTA</td>
<td>n/s</td>
<td></td>
<td>76.71</td>
<td>66.40</td>
<td>70.68</td>
<td>67.51</td>
<td>69.53</td>
<td>74.06</td>
<td>78.50</td>
<td>n/s</td>
<td>73.63</td>
<td>60.51</td>
<td>83.68</td>
<td>77.44</td>
</tr>
<tr>
<td>XLM-RoBERTa-Large</td>
<td>76.37</td>
<td>1</td>
<td>82.26</td>
<td>76.36</td>
<td>79.42</td>
<td>76.35</td>
<td>76.08</td>
<td>80.89</td>
<td>78.31</td>
<td>75.27</td>
<td>75.17</td>
<td>60.03</td>
<td>88.91</td>
<td>78.81</td>
</tr>
<tr>
<td>SBERT-Large</td>
<td>75.43</td>
<td>2</td>
<td>78.40</td>
<td>71.36</td>
<td>75.14</td>
<td>72.39</td>
<td>71.87</td>
<td>77.72</td>
<td>78.58</td>
<td>75.85</td>
<td>74.20</td>
<td>60.64</td>
<td>88.66</td>
<td>77.41</td>
</tr>
<tr>
<td>MBARTRuSumGazeta</td>
<td>74.70</td>
<td>3</td>
<td>76.06</td>
<td>68.95</td>
<td>73.04</td>
<td>72.34</td>
<td>71.93</td>
<td>77.83</td>
<td>76.71</td>
<td>73.56</td>
<td>74.18</td>
<td>60.54</td>
<td>87.22</td>
<td>77.51</td>
</tr>
<tr>
<td>Conversational RuBERT</td>
<td>74.44</td>
<td>4</td>
<td>76.69</td>
<td>69.09</td>
<td>73.11</td>
<td>69.44</td>
<td>68.68</td>
<td>75.56</td>
<td>77.31</td>
<td>74.40</td>
<td>73.10</td>
<td>59.95</td>
<td>87.86</td>
<td>77.78</td>
</tr>
<tr>
<td>LaBSE</td>
<td>74.11</td>
<td>5</td>
<td>77.00</td>
<td>69.19</td>
<td>73.55</td>
<td>70.34</td>
<td>69.83</td>
<td>76.38</td>
<td>74.94</td>
<td>70.84</td>
<td>73.20</td>
<td>59.52</td>
<td>87.89</td>
<td>78.47</td>
</tr>
<tr>
<td>XLM-RoBERTa-Base</td>
<td>73.60</td>
<td>6</td>
<td>76.35</td>
<td>69.37</td>
<td>73.42</td>
<td>68.45</td>
<td>67.45</td>
<td>74.05</td>
<td>74.26</td>
<td>70.44</td>
<td>71.40</td>
<td>60.19</td>
<td>87.90</td>
<td>78.28</td>
</tr>
<tr>
<td>RuBERT</td>
<td>73.45</td>
<td>7</td>
<td>74.03</td>
<td>66.14</td>
<td>70.75</td>
<td>66.46</td>
<td>66.40</td>
<td>73.37</td>
<td>75.49</td>
<td>71.86</td>
<td>72.15</td>
<td>60.55</td>
<td>86.99</td>
<td>77.41</td>
</tr>
<tr>
<td>MBART-50-Large-Many-to-Many</td>
<td>73.15</td>
<td>8</td>
<td>75.38</td>
<td>67.81</td>
<td>72.26</td>
<td>67.13</td>
<td>66.97</td>
<td>73.85</td>
<td>74.78</td>
<td>70.98</td>
<td>71.98</td>
<td>59.20</td>
<td>87.05</td>
<td>77.24</td>
</tr>
<tr>
<td>SlavicBERT</td>
<td>71.96</td>
<td>9</td>
<td>71.45</td>
<td>63.03</td>
<td>68.44</td>
<td>64.32</td>
<td>63.99</td>
<td>71.31</td>
<td>72.13</td>
<td>67.57</td>
<td>72.54</td>
<td>58.70</td>
<td>86.43</td>
<td>77.16</td>
</tr>
<tr>
<td>EnRuDR-BERT</td>
<td>71.51</td>
<td>10</td>
<td>72.56</td>
<td>64.74</td>
<td>69.07</td>
<td>61.44</td>
<td>60.21</td>
<td>68.34</td>
<td>74.19</td>
<td>69.94</td>
<td>69.33</td>
<td>56.55</td>
<td>87.12</td>
<td>77.95</td>
</tr>
<tr>
<td>RuDR-BERT</td>
<td>71.14</td>
<td>11</td>
<td>72.79</td>
<td>64.23</td>
<td>68.36</td>
<td>61.86</td>
<td>60.92</td>
<td>68.48</td>
<td>74.65</td>
<td>70.63</td>
<td>68.74</td>
<td>54.45</td>
<td>87.04</td>
<td>77.91</td>
</tr>
<tr>
<td>MBART-50-Large</td>
<td>69.46</td>
<td>12</td>
<td>70.91</td>
<td>62.67</td>
<td>67.24</td>
<td>61.12</td>
<td>60.25</td>
<td>68.41</td>
<td>72.88</td>
<td>68.63</td>
<td>70.52</td>
<td>46.39</td>
<td>86.48</td>
<td>77.52</td>
</tr>
</tbody>
</table>
The table shows per-task scores and a macro-average of those scores to determine a model's position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models' results was applied in the GLUE benchmark.
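As a worked example of this aggregation, the snippet below recomputes the leaderboard score for the XLM-RoBERTa-Base row of the table. Treating the two SentiRuEval-2016 tracks (TC and Banks) as a single task is our reading of the table rather than something stated explicitly; with that grouping the published score is reproduced to within rounding.
```python
# Recompute the leaderboard score for XLM-RoBERTa-Base from the per-task metrics above.
# Metrics belonging to the same task are averaged first (unweighted), then the
# task scores are macro-averaged.
per_task_metrics = {
    "SentiRuEval-2016": [76.35, 69.37, 73.42,   # TC: micro F1, macro F1, F1
                         68.45, 67.45, 74.05],  # Banks: micro F1, macro F1, F1
    "RuSentiment":      [74.26, 70.44],         # weighted F1, F1
    "KRND":             [71.40],
    "LINIS Crowd":      [60.19],
    "RuTweetCorp":      [87.90],
    "RuReviews":        [78.28],
}

task_scores = {task: sum(m) / len(m) for task, m in per_task_metrics.items()}
leaderboard_score = sum(task_scores.values()) / len(task_scores)
print(f"{leaderboard_score:.2f}")  # ~73.61, vs. 73.60 published in the table
```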
## Citation
If you find this repository helpful, feel free to cite our publication:
```
@article{Smetanin2021Deep,
author = {Sergey Smetanin and Mikhail Komarov},
title = {Deep transfer learning baselines for sentiment analysis in Russian},
journal = {Information Processing & Management},
volume = {58},
number = {3},
pages = {102484},
year = {2021},
issn = {0306-4573},
  doi = {10.1016/j.ipm.2020.102484}
}
```
Dataset:
```
@inproceedings{rogers2018rusentiment,
title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian},
author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex},
booktitle={Proceedings of the 27th international conference on computational linguistics},
pages={755--763},
year={2018}
}
``` |
sosuke/ease-xlm-roberta-base | c0b425e9a56764444d02eeb04f95b990d07196a0 | 2021-12-14T07:22:36.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | false | sosuke | null | sosuke/ease-xlm-roberta-base | 10 | null | transformers | 11,728 | Entry not found |
spasis/bert-finetuned-ner | eed8347cf188b4247bee5a82cda0778133f72842 | 2022-02-22T13:23:17.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | spasis | null | spasis/bert-finetuned-ner | 10 | null | transformers | 11,729 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9214944042132982
- name: Recall
type: recall
value: 0.9422753281723325
- name: F1
type: f1
value: 0.9317690131469462
- name: Accuracy
type: accuracy
value: 0.9849738034967916
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0569
- Precision: 0.9215
- Recall: 0.9423
- F1: 0.9318
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 439 | 0.0702 | 0.8847 | 0.9170 | 0.9006 | 0.9795 |
| 0.183 | 2.0 | 878 | 0.0599 | 0.9161 | 0.9391 | 0.9274 | 0.9842 |
| 0.0484 | 3.0 | 1317 | 0.0569 | 0.9215 | 0.9423 | 0.9318 | 0.9850 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
staceythompson/autonlp-new-text-classification-38319698 | bb30e13fcbf7df17f6ae79433f1763e9cb895799 | 2021-12-03T14:06:55.000Z | [
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:staceythompson/autonlp-data-new-text-classification",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | false | staceythompson | null | staceythompson/autonlp-new-text-classification-38319698 | 10 | null | transformers | 11,730 | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- staceythompson/autonlp-data-new-text-classification
co2_eq_emissions: 2.0318857468309206
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 38319698
- CO2 Emissions (in grams): 2.0318857468309206
## Validation Metrics
- Loss: 0.04461582377552986
- Accuracy: 0.9909255898366606
- Macro F1: 0.9951842095089771
- Micro F1: 0.9909255898366606
- Weighted F1: 0.9909493945587176
- Macro Precision: 0.9942196531791907
- Micro Precision: 0.9909255898366606
- Weighted Precision: 0.9911878560263526
- Macro Recall: 0.9962686567164181
- Micro Recall: 0.9909255898366606
- Weighted Recall: 0.9909255898366606
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/staceythompson/autonlp-new-text-classification-38319698
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("staceythompson/autonlp-new-text-classification-38319698", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("staceythompson/autonlp-new-text-classification-38319698", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
stefan-it/electra-base-gc4-64k-900000-cased-generator | 3013aa3f303895fa078b3a64b06b17e6c333670f | 2021-05-01T11:24:01.000Z | [
"pytorch",
"tf",
"electra",
"fill-mask",
"de",
"dataset:german-nlp-group/german_common_crawl",
"transformers",
"license:mit",
"autotrain_compatible"
]
| fill-mask | false | stefan-it | null | stefan-it/electra-base-gc4-64k-900000-cased-generator | 10 | null | transformers | 11,731 | ---
language: de
license: mit
datasets:
- german-nlp-group/german_common_crawl
widget:
- text: "Heute ist ein [MASK] Tag"
---
# GC4LM: A Colossal (Biased) language model for German
This repository presents a colossal (and biased) language model for German trained on the recently released
["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html) (GC4),
with a total dataset size of ~844GB.
---
**Disclaimer**: the presented and trained language models in this repository are for **research only** purposes.
The GC4 corpus - that was used for training - contains crawled texts from the internet. Thus, the language models can
be considered as highly biased, resulting in a model that encodes stereotypical associations along gender, race,
ethnicity and disability status. Before using and working with the released checkpoints, it is highly recommended
to read:
[On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?](https://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf)
from Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell.
The aim of the released checkpoints is to boost research on large pre-trained language models for German, especially
for identifying biases and how to prevent them, as most research is currently done only for English.
---
Please use the new GitHub Discussions feature in order to discuss or present further research questions.
Feel free to use `#gc4lm` on Twitter 🐦.
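For research purposes, the released generator checkpoint can be queried with a standard fill-mask pipeline. The snippet below is a minimal sketch that reuses the widget example from this card; it is not an endorsement of any downstream use.
```python
from transformers import pipeline

# Minimal sketch: query the ELECTRA generator checkpoint with a masked-LM pipeline.
fill_mask = pipeline(
    "fill-mask",
    model="stefan-it/electra-base-gc4-64k-900000-cased-generator",
)

for prediction in fill_mask("Heute ist ein [MASK] Tag"):
    print(prediction["token_str"], round(prediction["score"], 4))
```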
|
sumedh/wav2vec2-large-xlsr-marathi | 3695aa23fc16ea305b6c0ac433937242de745669 | 2021-03-29T18:40:16.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mr",
"dataset:openslr",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | sumedh | null | sumedh/wav2vec2-large-xlsr-marathi | 10 | null | transformers | 11,732 | ---
language: mr
datasets:
- openslr
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Large 53 Marathi by Sumedh Khodke
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: OpenSLR mr
type: openslr
metrics:
- name: Test WER
type: wer
value: 12.7
---
# Wav2Vec2-Large-XLSR-53-Marathi
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Marathi using the [Open SLR64](http://openslr.org/64/) dataset. When using this model, make sure that your speech input is sampled at 16kHz. This data contains only female voices, but the model works well for male voices too. Trained on Google Colab Pro on a Tesla P100 16GB GPU.<br>
**WER (Word Error Rate) on the Test Set**: 12.70 %
## Usage
The model can be used directly without a language model as follows, given that your dataset has Marathi `actual_text` and `path_in_folder` columns:
```python
import torch, torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Since Marathi is not present on Common Voice, the script for reading this dataset can be picked up from the eval script below
mr_test_dataset = all_data['test']
processor = Wav2Vec2Processor.from_pretrained("sumedh/wav2vec2-large-xlsr-marathi")
model = Wav2Vec2ForCTC.from_pretrained("sumedh/wav2vec2-large-xlsr-marathi")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # first arg: input sampling rate, second arg: output sampling rate
# Preprocessing the datasets. We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path_in_folder"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
mr_test_dataset = mr_test_dataset.map(speech_file_to_array_fn)
inputs = processor(mr_test_dataset["speech"][:5], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", mr_test_dataset["actual_text"][:5])
```
## Evaluation
Evaluated on 10% of the Marathi data on Open SLR-64.
```python
import os, re, torch, torchaudio
from datasets import Dataset, load_metric
import pandas as pd
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Below is a custom script for reading the Marathi dataset, since it is not present on Common Voice
dataset_path = "./OpenSLR-64_Marathi/mr_in_female/" #TODO : include the path of the dataset extracted from http://openslr.org/64/
audio_df = pd.read_csv(os.path.join(dataset_path,'line_index.tsv'),sep='\t',header=None)
audio_df.columns = ['path_in_folder','actual_text']
audio_df['path_in_folder'] = audio_df['path_in_folder'].apply(lambda x: dataset_path + x + '.wav')
audio_df = audio_df.sample(frac=1, random_state=2020).reset_index(drop=True) #seed number is important for reproducibility of WER score
all_data = Dataset.from_pandas(audio_df)
all_data = all_data.train_test_split(test_size=0.10,seed=2020) #seed number is important for reproducibility of WER score
mr_test_dataset = all_data['test']
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("sumedh/wav2vec2-large-xlsr-marathi")
model = Wav2Vec2ForCTC.from_pretrained("sumedh/wav2vec2-large-xlsr-marathi")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets. We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["actual_text"] = re.sub(chars_to_ignore_regex, '', batch["actual_text"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path_in_folder"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
mr_test_dataset = mr_test_dataset.map(speech_file_to_array_fn)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = mr_test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["actual_text"])))
```
## Training
Train-Test ratio was 90:10.
The Colab notebook used for training is available [here](https://colab.research.google.com/drive/1wX46fjExcgU5t3AsWhSPTipWg_aMDg2f?usp=sharing).
## Training Config and Summary
Weights & Biases run summary [here](https://wandb.ai/wandb/xlsr/runs/3itdhtb8/overview?workspace=user-sumedhkhodke)
|
tanyagoyal/paraphrase-reap | c5e2a0b13d159f7ee64daae916082dd0cccb319b | 2021-08-31T22:49:07.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | tanyagoyal | null | tanyagoyal/paraphrase-reap | 10 | null | transformers | 11,733 | Entry not found |
tartuNLP/gpt-4-est-large | 1912427240d32f74c9bff76aae5c24b18cd4e10d | 2022-03-10T10:03:30.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
]
| text-generation | false | tartuNLP | null | tartuNLP/gpt-4-est-large | 10 | null | transformers | 11,734 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt-4-est-large
results: []
widget:
- text: ">wiki< mis on GPT? Vastus:"
---
# gpt-4-est-large
This is GPT for Estonian. Not GPT-4 :-) This is the large-size [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2) model, trained from scratch on 2.2 billion words (Estonian National Corpus + News Crawl + Common Crawl).
[Colab demo](https://colab.research.google.com/drive/1Bp7mGEQ1vmyqXPyXHV1yj68cRZEi2mq4?usp=sharing)
### Format
For training, the data was prepended with a text domain tag, and the tag should be added as a prefix when using the model: >general<, >web<, >news<, >doaj< and >wiki< (standing for general texts, web-crawled texts, news, article abstracts and Wikipedia texts). Use the prefixes like this, e.g.: ">web< Kas tead, et".
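A minimal generation sketch using this prefix convention is shown below; the sampling settings are illustrative assumptions, not values recommended by the authors.
```python
from transformers import pipeline

# Minimal sketch: prepend one of the domain tags described above to the prompt.
generator = pipeline("text-generation", model="tartuNLP/gpt-4-est-large")

prompt = ">web< Kas tead, et"
outputs = generator(prompt, max_length=60, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```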
### Model details
- num. of layers: 24
- num. of heads: 24
- embedding size: 1536
- context size: 1024
- total size: 723.58M params
Further details to be added soon.
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
tasosk/distilbert-base-uncased-airlines | d0574f79439b5c6353e8eba3ef4bbf319931b65b | 2021-12-18T19:25:39.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | tasosk | null | tasosk/distilbert-base-uncased-airlines | 10 | null | transformers | 11,735 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-airlines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-airlines
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tasosk/airlines dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3174
- Accuracy: 0.9288
- F1: 0.9289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.2281 | 0.9164 | 0.9164 |
| No log | 2.0 | 406 | 0.2676 | 0.9164 | 0.9164 |
| 0.2314 | 3.0 | 609 | 0.3117 | 0.9217 | 0.9217 |
| 0.2314 | 4.0 | 812 | 0.3175 | 0.9270 | 0.9271 |
| 0.08 | 5.0 | 1015 | 0.3174 | 0.9288 | 0.9289 |
### Framework versions
- Transformers 4.14.1
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
teleportHQ/predicto_css | f96ca3f673398ef7a599673cb0f4b2b6ca3a0627 | 2021-05-23T13:05:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | teleportHQ | null | teleportHQ/predicto_css | 10 | null | transformers | 11,736 | predicto css model
|
textattack/facebook-bart-large-SST-2 | 8a4baa9662afa056a97a2e9441ae665a06504979 | 2020-06-09T16:51:43.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | false | textattack | null | textattack/facebook-bart-large-SST-2 | 10 | null | transformers | 11,737 | Entry not found |
thatdramebaazguy/movie-roberta-MITmovie | 199439c92435c5b86c601d6fca08e8b5a0f3078b | 2022-07-01T18:25:52.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"token-classification",
"English",
"dataset:imdb",
"dataset:cornell_movie_dialogue",
"dataset:MIT Movie",
"transformers",
"roberta-base",
"NER",
"named-entities",
"BIO",
"movies",
"DAPT",
"license:cc-by-4.0",
"autotrain_compatible"
]
| token-classification | false | thatdramebaazguy | null | thatdramebaazguy/movie-roberta-MITmovie | 10 | 1 | transformers | 11,738 | ---
datasets:
- imdb
- cornell_movie_dialogue
- MIT Movie
language:
- English
thumbnail:
tags:
- roberta
- roberta-base
- token-classification
- NER
- named-entities
- BIO
- movies
- DAPT
license: cc-by-4.0
---
# Movie Roberta + Movies NER Task
Objective:
This is RoBERTa-base + movie-domain DAPT (domain-adaptive pretraining), fine-tuned for the NER task using the MIT Movie dataset
https://huggingface.co/thatdramebaazguy/movie-roberta-base was used as the MovieRoberta.
```
model_name = "thatdramebaazguy/movie-roberta-MITmovieroberta-base-MITmovie"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="ner")
```
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** NER
**Training data:** MIT Movie
**Eval data:** MIT Movie
**Infrastructure**: 2x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/movieR_NER_squad.sh)
## Hyperparameters
```
Num examples = 6253
Num Epochs = 5
Instantaneous batch size per device = 64
Total train batch size (w. parallel, distributed & accumulation) = 128
```
## Performance
### Eval on MIT Movie
- epoch = 5.0
- eval_accuracy = 0.9472
- eval_f1 = 0.8876
- eval_loss = 0.2211
- eval_mem_cpu_alloc_delta = 3MB
- eval_mem_cpu_peaked_delta = 2MB
- eval_mem_gpu_alloc_delta = 0MB
- eval_mem_gpu_peaked_delta = 38MB
- eval_precision = 0.887
- eval_recall = 0.8881
- eval_runtime = 0:00:03.73
- eval_samples = 1955
- eval_samples_per_second = 523.095
Github Repo:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)
---
|
tiesan/distilbert-base-uncased-finetuned-emotion | f8cec2eec975c4e015376cfbddb237c8ad564e77 | 2022-07-28T10:08:37.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | tiesan | null | tiesan/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,739 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.928
- name: F1
type: f1
value: 0.9284093135758671
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1658
- Accuracy: 0.928
- F1: 0.9284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2188 | 1.0 | 250 | 0.1809 | 0.925 | 0.9246 |
| 0.1383 | 2.0 | 500 | 0.1658 | 0.928 | 0.9284 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
trtd56/autonlp-wrime_joy_only-117396 | 6e73a0ac7e93a307077f9328dd842b22a29dc3f4 | 2021-05-20T08:07:48.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"ja",
"dataset:trtd56/autonlp-data-wrime_joy_only",
"transformers",
"autonlp"
]
| text-classification | false | trtd56 | null | trtd56/autonlp-wrime_joy_only-117396 | 10 | 1 | transformers | 11,740 | ---
tags: autonlp
language: ja
widget:
- text: "I love AutoNLP 🤗"
datasets:
- trtd56/autonlp-data-wrime_joy_only
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 117396
## Validation Metrics
- Loss: 0.4094310998916626
- Accuracy: 0.8201678240740741
- Precision: 0.6750303520841765
- Recall: 0.7912713472485768
- AUC: 0.8927167943538512
- F1: 0.728543350076436
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/trtd56/autonlp-wrime_joy_only-117396
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("trtd56/autonlp-wrime_joy_only-117396", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
tuanle/VN-News-GPT2 | e24e74bf4b65390b59bdca6758fdf7985abc551e | 2022-03-09T19:38:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"vi",
"dataset:Private Vietnamese News dataset",
"transformers",
"News",
"Language model",
"GPT2"
]
| text-generation | false | tuanle | null | tuanle/VN-News-GPT2 | 10 | null | transformers | 11,741 | ---
language:
- vi
thumbnail: "url to a thumbnail used in social sharing"
tags:
- News
- Language model
- GPT2
datasets:
- Private Vietnamese News dataset
metrics:
- rouge
- wer
---
# GPT-2 Fine-tuning With Vietnamese News
## Model description
A fine-tuned Vietnamese GPT-2 model that generates Vietnamese news based on context (category + headline), built on top of the Vietnamese Wiki GPT-2 pretrained model (https://huggingface.co/danghuy1999/gpt2-viwiki)
## Github
- https://github.com/Tuan-Lee-23/Vietnamese-News-Generative-Model
## Purpose
This model has been made only for fun and experimental study. However, it gave impressive results.
Most of the generated news is fake, with unconfirmed information. Honestly, I feel excited about this project =))
## Dataset
The dataset consists of about 30k Vietnamese news articles from the website thanhnien.vn
Categories are:
- thời sự, 'thế giới', 'tài chính kinh doanh', 'đời sống', 'văn hoá', 'giải trí', 'giới trẻ', 'giáo dục', 'công nghệ', 'sức khoẻ'
## Result
- Train Loss: 2.3
- Val loss: 2.5
- Rouge F1: 0.556
- Word error rate: 1.08
## Deployment
- You can use my deployment at Hugging Face Spaces (https://huggingface.co/spaces/tuanle/VN-News-GPT2)
- You can choose any category and give it some text for the headline, then generate. There we go
- P/s: I've already tried to deploy my model on Streamlit's cloud, but it kept breaking due to running out of memory
- Example result

- Input:
```
- Category: thế giới
- Headline: Chiến tranh biên giới
```
- Output:
- Article 1:
- Headline: Chiến tranh biên giới Nga - phương Tây căng thẳng
- Content:
Hôm qua (24.11), Mỹ và các đồng minh bày tỏ lo ngại về khả năng Nga tấn công Ukraine, trong khi Moscow bác bỏ cáo buộc này.
Reuters dẫn lời các quan chức Mỹ cho rằng Nga có thể tiến hành chiến tranh với Ukraine trong vài ngày tới. Trong khi đó, Bộ trưởng Quốc phòng Nga Sergei Shoigu tuyên bố Nga không đe dọa bất kỳ ai. Ông lặp lại lời kêu gọi NATO không mở rộng về phía đông, đồng thời cho biết Nga sẵn sàng đối thoại với các nước châu Âu và châu lục địa để giải quyết các vấn đề an ninh và ổn định ở châu Á - Thái Bình Dương (Indo-Pacific).
Trong khi các cuộc đàm phán diễn ra trong bối cảnh Nga tập trung hàng chục ngàn binh sĩ gần biên cương của mình, Moscow phủ nhận các yêu cầu của Mỹ về việc rút quân khỏi Ukraine. Tuy nhiên, Tổng thống Ukraine Volodymyr Zelensky của Ukraine cho hay nước này không có ý định nhượng bộ về lãnh thổ nước mình. “Chúng tôi không lo lắng về các lệnh cấm vận”, ông phát biểu tại cuộc họp báo ở thủ đô Kiev hôm qua. Nhà lãnh đạo Ukraine cũng nói rằng các hoạt động quân sự của Nga tại khu vực đang trở nên tồi tệ hơn so với những gì Moscow từng thực hiện gần đây. Nga đã nhiều lần cảnh báo các lực lượng vũ trang Ukraine và NATO về những hành động tương tự, dù Moscow không đưa ra bình luận chính thức. Giới quan sát trước đó lo sợ Nga sẽ đưa quân đến gần Ukraine nếu có động thái gây bất ổn. Tổng thư ký NATO Jens Stoltenberg ngày 26.10 cho thấy Nga đang chuẩn bị cho một cuộc tập trận chung với Mỹ trong tuần này, theo Reuters. Một phát ngôn viên Bộ Ngoại giao Nga Maria Zakharova ngày 24.9 nói Moscow đang xem xét việc triển khai thêm vũ khí hạt nhân và tên lửa đạn đạo tầm trung (IRBM) đến Đông Âu, châu Phi và Trung Đông để đối phó với sự trỗi dậy của biến thể Omicron. Trước đó một ngày, Nga thông báo đã thử nghiệm thành công một hệ thống IRBM mới, nhưng chưa có thông tin về phản ứng của Moscow đối với đề xuất này từ phía Mỹ. Moscow khẳng định các máy bay không người lái (UAV) do Nga chế tạo đều có hiệu lực tốt và được bảo vệ bởi các cơ quan tình báo quốc tế. Hồi tháng trước, Ukraine đã đưa thêm một số tiêm kích F-16A và Su-30 đến căn cứ không quân ở miền đông Ukraine để phản đối sự bành trướng của quân đội Nga. Đáp lại, chính quyền Kiev ngày 25.12 đã thông qua dự luật cho phép Ukraine sáp nhập bán đảo Crimea vào năm 2014 với giá 1,5 triệu USD. Ngoài ra còn có một thỏa thuận AUKUS với Nga nhằm giúp Ukraine gia nhập NATO. Theo Reuters, Mỹ cũng đã đề nghị Nga cung cấp thêm chi phí cho việc xây dựng cơ sở hạ tầng cho các tàu chiến, tàu hộ tống và tàu tuần tra của NATO tại biên viễn này trong năm nay, nhằm ngăn chặn việc Nga xâm nhập Ukraine từ các bên khác. Ngoại trưởng Mỹ Antony Blinken dự kiến sẽ có cuộc điện đàm với người đồng cấp Nga Vladimir Putin vào ngày 30.1.2022, trước khi cuộc gặp giữa ông Putin và ông Bynic giữa các bộ trưởng Nga và Ukraine sẽ được tổ chức trực tiếp tại Geneva (Thụy Sĩ) vào cuối tháng này (theo giờ địa phương). Các ngoại trưởng cũng có mặt tại Moscow từ ngày 2 - 3.2 để thảo luận về tình hình Ukraine tại Hội nghị thượng đỉnh G20 của Liên Hiệp Quốc về biến đổi khí hậu lần thứ
- Article 2:
- headline: Chiến tranh biên giới Nga - phương Tây căng thẳng
- content:
Trong một diễn biến liên quan tình hình Ukraine, Bộ trưởng Quốc phòng Nga Sergei Shoigu hôm qua thông báo nước này đã tiến hành cuộc tập trận chung với các đồng minh và đối tác ở miền đông Ukraine trong vài ngày qua, theo Reuters. Ông cảnh báo Nga có thể đưa quân lực đến gần lãnh hải của mình, nhưng Moscow đã bác bỏ thông tin này và nhấn mạnh sẽ không đe dọa bất kỳ ai. “Chúng tôi sẵn sàng làm điều đó”, ông phát biểu tại một cuộc họp báo ở thủ đô Moscow hôm 30.11, sau khi Tổng thống Ukraine Volodymyr Zelensky tuyên bố Nga không có ý định xâm lược Ukraine. Nhà lãnh đạo Nga cũng nói thêm rằng những động thái của Nga đối với Ukraine sẽ gây tổn hại đến hòa bình và ổn định ở khu vực, gây thiệt hại nặng nề cho người dân và tài sản của các nước thuộc Liên bang Nga (FSB).
Nga không đưa ra bình luận chính thức nào về những cáo buộc trên. Tuy nhiên, một phát ngôn viên Bộ Ngoại giao Nga cho hay Moscow không vi phạm các lệnh cấm vận của Mỹ và NATO. Ngoại trưởng Ukraine Dmytro Kuleba ngày 29.10 cho biết Nga đã đưa hơn 100.000 binh sĩ và thiết bị quân sự tới gần Ukraine để đối phó khả năng xảy ra xung đột với lực lượng đòi ly khai ở phía đông. Theo Reuters, Nga chưa có phản ứng cụ thể nào từ Washington về các yêu cầu của NATO về việc Ukraine gia nhập liên minh này. Trước đó, NATO đã đề nghị Nga cung cấp vũ khí và khí tài cho Ukraine ngay lập tức, dù Moscow phủ nhận kế hoạch này từ phía Mỹ. Ngày 28.9, Mỹ cho máy bay ném bom B-52 Black Hawk đến căn cứ không quân St. Petersburg để triển khai tên lửa chống tăng và pháo binh. Một quan chức cấp cao trong chính quyền Moscow cũng khẳng định Mỹ sẽ tiếp tục đối thoại với Nga trong những ngày tới, trong bối cảnh Moscow tập trung hàng chục ngàn binh lực ở Ukraine và đang tăng cường các hoạt động ở Đông Âu, châu Á, Thái Bình Dương, Đông Nam Á và châu Phi. Trong khi đó vào ngày 27.8, Tổng thư ký NATO Jens Stoltenberg nói Moscow sẽ làm mọi thứ để đảm bảo an ninh cho tất cả các thành viên của khối, đồng thời bày tỏ hy vọng NATO không mở rộng về phía Nga nếu Moscow có hành động gây bất ổn. Đáp lại, Điện Kremlin nói rằng các cuộc đàm phán về vấn đề Ukraine với Mỹ, EU và các bên khác sẽ được công bố vào tuần tới. Sau khi Nga sáp nhập bán đảo Crimea vào năm 2014, Ukraine đã rút quân khỏi lãnh thổ này trong nhiều tháng để bảo vệ chủ quyền và lợi ích quốc gia. Moscow luôn kiên quyết phản đối các đề xuất của Washington, đặc biệt là việc NATO gia tăng sự hiện diện ở châu Âu. Ngoài ra, Moscow còn kêu gọi NATO giảm các biện pháp hiện đại hóa quân đội và áp đặt các quy định mới để chống lại những nỗ lực của Moscow nhằm vào các nền kinh tế lớn của Ukraine như Trung Quốc, Ấn Độ, Pakistan và thậm chí là Trung Đông. NATO cũng đang tìm cách ngăn chặn sự trỗi dậy của một số nhóm vũ trang mới nổi, chẳng hạn như tổ chức Nhà nước Hồi giáo tự xưng (IS) và al-Qaeda, cũng như các nhóm tội phạm mạng khác. Hồi tháng 9, chính phủ Mỹ thông qua luật cấm nhập cảnh Ukraine cho đến khi có kết quả thẩm định pháp lý. Mỹ cũng đã cấm người nước ngoài đến Ukraine từ ngày 1.1.2022
## Example usage (Huggingface)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
"""
Category includes: ['thời sự ', 'thế giới', 'tài chính kinh doanh', 'đời sống', 'văn hoá', 'giải trí', 'giới trẻ', 'giáo dục','công nghệ', 'sức khoẻ']
"""
category = "thời sự"
headline = "Nam thanh niên" # A full headline or only some text
text = f"<|startoftext|> {category} <|headline|> {headline}"
tokenizer = AutoTokenizer.from_pretrained("tuanle/VN-News-GPT2")
model= AutoModelForCausalLM.from_pretrained("tuanle/VN-News-GPT2").to(device)
input_ids = tokenizer.encode(text, return_tensors='pt').to(device)
sample_outputs = model.generate(input_ids,
do_sample=True,
max_length= 768,
min_length= 60,
# temperature = .8,
top_k= 100,
top_p = 0.7,
num_beams= 5,
early_stopping= True,
no_repeat_ngram_size= 2 ,
num_return_sequences= 3)
for i, sample_output in enumerate(sample_outputs):
temp = tokenizer.decode(sample_output.tolist())
print(f">> Generated text {i+1}\n\n{temp}")
print('\n---')
```
|
tyoyo/t5-base-TEDxJP-6body-0context | 00ceb227d44b6a6e25b9078e05f992a3e258fd6d | 2021-12-01T08:45:15.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | tyoyo | null | tyoyo/t5-base-TEDxJP-6body-0context | 10 | null | transformers | 11,742 | Entry not found |
uclanlp/plbart-multi_task-compiled | df4571a229caa665e6e01ba5bf2325d98e241f1d | 2022-03-02T07:37:22.000Z | [
"pytorch",
"plbart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | uclanlp | null | uclanlp/plbart-multi_task-compiled | 10 | null | transformers | 11,743 | Entry not found |
vera-pro/bert-mention-de | 9fe1cb431a6e23c1049e185fed47373a17035b72 | 2021-05-20T08:53:09.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | vera-pro | null | vera-pro/bert-mention-de | 10 | null | transformers | 11,744 | Entry not found |
vesteinn/german-icelandic-translation | 62bcad9c121fffa2373af3c1936df22a0a6d308c | 2021-11-26T15:24:14.000Z | [
"pytorch",
"marian",
"text2text-generation",
"de",
"is",
"transformers",
"translation",
"autotrain_compatible"
]
| translation | false | vesteinn | null | vesteinn/german-icelandic-translation | 10 | null | transformers | 11,745 | ---
language:
- de
- is
tags:
- translation
---
# Student project - temporary upload |
x-tech/mt5-translate-yue-zh | aea3f768da9d11b711709dd202d606cdeac56efe | 2022-06-04T09:32:04.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"yue",
"zh",
"dataset:x-tech/cantonese-mandarin-translations",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | x-tech | null | x-tech/mt5-translate-yue-zh | 10 | null | transformers | 11,746 | ---
language:
- yue
- zh
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: output
results: []
datasets:
- x-tech/cantonese-mandarin-translations
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on dataset [x-tech/cantonese-mandarin-translations](https://huggingface.co/datasets/x-tech/cantonese-mandarin-translations).
## Model description
The model translates Cantonese sentences to Mandarin.
## Intended uses & limitations
When you use the model, please make sure to format the input as `translate cantonese to mandarin: <sentence>` (note the space after the colon) for the text you want to translate.
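A minimal usage sketch following this convention is given below; the example sentence and generation settings are illustrative assumptions.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "x-tech/mt5-translate-yue-zh"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Prefix the Cantonese input as described above (note the space after the colon).
text = "translate cantonese to mandarin: 你食咗飯未呀?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```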
## Training and evaluation data
Training Dataset: [x-tech/cantonese-mandarin-translations](https://huggingface.co/datasets/x-tech/cantonese-mandarin-translations)
## Training procedure
Training is based on [example in transformers library](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
Since we still need to set up a validation set, we do not have any training results yet.
### Framework versions
- Transformers 4.12.5
- Pytorch 1.8.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
yorko/scibert_scivocab_uncased_long_4096 | 51c6fab3d212a57c01d3cf730b6f3f8476a6507b | 2021-06-18T13:41:31.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | yorko | null | yorko/scibert_scivocab_uncased_long_4096 | 10 | 2 | transformers | 11,747 | # SciBERT Longformer
This is a Longformer version of the [SciBERT uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) model by Allen AI. The model is slower than SciBERT (~2.5x in my benchmarks) but allows an 8x wider `max_seq_length` (4096 vs. 512), which is handy when working with long texts, e.g. scientific full texts.
The conversion to Longformer was performed with a [tutorial](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) by Allen AI: see a [Google Colab Notebook](https://colab.research.google.com/drive/1NPTnMkeAYOF2MWH3_uJYesuxxdOzxrFn?usp=sharing) by [Yury](https://yorko.github.io/) which closely follows the tutorial.
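A minimal loading sketch is shown below. Depending on how the checkpoint was exported, the sliding-window (long) attention pattern may require the custom model class from the conversion notebook; the snippet only loads the weights and tokenizer and encodes one short example.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "yorko/scibert_scivocab_uncased_long_4096"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "We propose a transformer-based approach for long scientific documents."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```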
Note:
- no additional MLM pretraining of the Longformer was performed; the [Colab notebook](https://colab.research.google.com/drive/1NPTnMkeAYOF2MWH3_uJYesuxxdOzxrFn?usp=sharing) stops at step 3, and step 4 is not done. The model can be improved with this additional MLM pretraining, ideally with scientific texts, e.g. [S2ORC](https://github.com/allenai/s2orc), again by Allen AI.
- no extensive benchmarks of SciBERT Longformer vs. SciBERT were performed in terms of downstream task performance
Links:
- the original [SciBERT repo](https://github.com/allenai/scibert)
- the original [Longformer repo](https://github.com/allenai/longformer)
If using these models, please consider citing the following papers:
```
@inproceedings{beltagy-etal-2019-scibert,
title = "SciBERT: A Pretrained Language Model for Scientific Text",
author = "Beltagy, Iz and Lo, Kyle and Cohan, Arman",
booktitle = "EMNLP",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1371"
}
@article{Beltagy2020Longformer,
title={Longformer: The Long-Document Transformer},
author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
journal={arXiv:2004.05150},
year={2020},
}
```
|
wietsedv/xlm-roberta-base-ft-udpos28-id | d672796ec1a6109dfdd0651d83e2ff284b6a3aec | 2022-02-25T09:58:50.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"id",
"dataset:universal_dependencies",
"transformers",
"part-of-speech",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | wietsedv | null | wietsedv/xlm-roberta-base-ft-udpos28-id | 10 | null | transformers | 11,748 |
---
language:
- id
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-id
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 82.4
- type: accuracy
name: Dutch Test accuracy
value: 83.4
- type: accuracy
name: German Test accuracy
value: 75.5
- type: accuracy
name: Italian Test accuracy
value: 82.7
- type: accuracy
name: French Test accuracy
value: 82.0
- type: accuracy
name: Spanish Test accuracy
value: 86.1
- type: accuracy
name: Russian Test accuracy
value: 84.1
- type: accuracy
name: Swedish Test accuracy
value: 83.2
- type: accuracy
name: Norwegian Test accuracy
value: 79.9
- type: accuracy
name: Danish Test accuracy
value: 81.9
- type: accuracy
name: Low Saxon Test accuracy
value: 36.2
- type: accuracy
name: Akkadian Test accuracy
value: 38.4
- type: accuracy
name: Armenian Test accuracy
value: 76.4
- type: accuracy
name: Welsh Test accuracy
value: 65.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 68.0
- type: accuracy
name: Albanian Test accuracy
value: 73.8
- type: accuracy
name: Slovenian Test accuracy
value: 71.6
- type: accuracy
name: Guajajara Test accuracy
value: 29.6
- type: accuracy
name: Kurmanji Test accuracy
value: 76.2
- type: accuracy
name: Turkish Test accuracy
value: 74.8
- type: accuracy
name: Finnish Test accuracy
value: 79.1
- type: accuracy
name: Indonesian Test accuracy
value: 91.9
- type: accuracy
name: Ukrainian Test accuracy
value: 80.7
- type: accuracy
name: Polish Test accuracy
value: 82.5
- type: accuracy
name: Portuguese Test accuracy
value: 87.3
- type: accuracy
name: Kazakh Test accuracy
value: 78.8
- type: accuracy
name: Latin Test accuracy
value: 73.9
- type: accuracy
name: Old French Test accuracy
value: 47.0
- type: accuracy
name: Buryat Test accuracy
value: 59.3
- type: accuracy
name: Kaapor Test accuracy
value: 23.3
- type: accuracy
name: Korean Test accuracy
value: 63.5
- type: accuracy
name: Estonian Test accuracy
value: 80.0
- type: accuracy
name: Croatian Test accuracy
value: 79.6
- type: accuracy
name: Gothic Test accuracy
value: 16.8
- type: accuracy
name: Swiss German Test accuracy
value: 34.9
- type: accuracy
name: Assyrian Test accuracy
value: 17.2
- type: accuracy
name: North Sami Test accuracy
value: 36.7
- type: accuracy
name: Naija Test accuracy
value: 36.5
- type: accuracy
name: Latvian Test accuracy
value: 81.8
- type: accuracy
name: Chinese Test accuracy
value: 34.0
- type: accuracy
name: Tagalog Test accuracy
value: 73.3
- type: accuracy
name: Bambara Test accuracy
value: 31.7
- type: accuracy
name: Lithuanian Test accuracy
value: 81.3
- type: accuracy
name: Galician Test accuracy
value: 86.2
- type: accuracy
name: Vietnamese Test accuracy
value: 67.9
- type: accuracy
name: Greek Test accuracy
value: 79.0
- type: accuracy
name: Catalan Test accuracy
value: 82.9
- type: accuracy
name: Czech Test accuracy
value: 79.5
- type: accuracy
name: Erzya Test accuracy
value: 46.0
- type: accuracy
name: Bhojpuri Test accuracy
value: 54.7
- type: accuracy
name: Thai Test accuracy
value: 48.4
- type: accuracy
name: Marathi Test accuracy
value: 76.7
- type: accuracy
name: Basque Test accuracy
value: 71.9
- type: accuracy
name: Slovak Test accuracy
value: 81.3
- type: accuracy
name: Kiche Test accuracy
value: 37.3
- type: accuracy
name: Yoruba Test accuracy
value: 25.4
- type: accuracy
name: Warlpiri Test accuracy
value: 34.0
- type: accuracy
name: Tamil Test accuracy
value: 80.5
- type: accuracy
name: Maltese Test accuracy
value: 23.8
- type: accuracy
name: Ancient Greek Test accuracy
value: 56.4
- type: accuracy
name: Icelandic Test accuracy
value: 75.9
- type: accuracy
name: Mbya Guarani Test accuracy
value: 31.3
- type: accuracy
name: Urdu Test accuracy
value: 69.4
- type: accuracy
name: Romanian Test accuracy
value: 78.8
- type: accuracy
name: Persian Test accuracy
value: 77.4
- type: accuracy
name: Apurina Test accuracy
value: 39.9
- type: accuracy
name: Japanese Test accuracy
value: 21.3
- type: accuracy
name: Hungarian Test accuracy
value: 78.0
- type: accuracy
name: Hindi Test accuracy
value: 77.3
- type: accuracy
name: Classical Chinese Test accuracy
value: 18.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 44.8
- type: accuracy
name: Faroese Test accuracy
value: 69.5
- type: accuracy
name: Sanskrit Test accuracy
value: 38.8
- type: accuracy
name: Livvi Test accuracy
value: 59.7
- type: accuracy
name: Arabic Test accuracy
value: 80.3
- type: accuracy
name: Wolof Test accuracy
value: 32.8
- type: accuracy
name: Bulgarian Test accuracy
value: 82.0
- type: accuracy
name: Akuntsu Test accuracy
value: 43.7
- type: accuracy
name: Makurap Test accuracy
value: 20.5
- type: accuracy
name: Kangri Test accuracy
value: 42.4
- type: accuracy
name: Breton Test accuracy
value: 60.3
- type: accuracy
name: Telugu Test accuracy
value: 80.6
- type: accuracy
name: Cantonese Test accuracy
value: 41.0
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 45.5
- type: accuracy
name: Karelian Test accuracy
value: 61.6
- type: accuracy
name: Upper Sorbian Test accuracy
value: 60.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 66.9
- type: accuracy
name: Komi Zyrian Test accuracy
value: 37.5
- type: accuracy
name: Irish Test accuracy
value: 68.8
- type: accuracy
name: Nayini Test accuracy
value: 42.3
- type: accuracy
name: Munduruku Test accuracy
value: 25.4
- type: accuracy
name: Manx Test accuracy
value: 34.5
- type: accuracy
name: Skolt Sami Test accuracy
value: 30.1
- type: accuracy
name: Afrikaans Test accuracy
value: 77.6
- type: accuracy
name: Old Turkish Test accuracy
value: 45.7
- type: accuracy
name: Tupinamba Test accuracy
value: 38.8
- type: accuracy
name: Belarusian Test accuracy
value: 79.9
- type: accuracy
name: Serbian Test accuracy
value: 81.3
- type: accuracy
name: Moksha Test accuracy
value: 44.8
- type: accuracy
name: Western Armenian Test accuracy
value: 71.4
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 59.6
- type: accuracy
name: Khunsari Test accuracy
value: 37.8
- type: accuracy
name: Hebrew Test accuracy
value: 87.5
- type: accuracy
name: Uyghur Test accuracy
value: 75.7
- type: accuracy
name: Chukchi Test accuracy
value: 31.6
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Indonesian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-id")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-id")
```
|
Brian-M-Collins/Twitter-Abstract-Summary | a76e5dc8275d69464658300b31bf1562b8b6cf8d | 2022-02-25T06:40:53.000Z | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Brian-M-Collins | null | Brian-M-Collins/Twitter-Abstract-Summary | 10 | 1 | transformers | 11,749 | hello
|
ghadeermobasher/BC5CDR-Disease-Modified_bluebert_pubmed_uncased_L-12_H-768_A-12_latest | ae14859b968b95ca53a7754ff0c85c50ca980de3 | 2022-02-25T15:36:45.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Disease-Modified_bluebert_pubmed_uncased_L-12_H-768_A-12_latest | 10 | null | transformers | 11,750 | Entry not found |
nsi319/bigbird-roberta-base-finetuned-app | d56606efa7df136d57cf9ac4f866899874fd7dbc | 2022-02-27T10:53:05.000Z | [
"pytorch",
"big_bird",
"text-classification",
"en",
"transformers",
"mobile app descriptions",
"playstore",
"license:mit"
]
| text-classification | false | nsi319 | null | nsi319/bigbird-roberta-base-finetuned-app | 10 | null | transformers | 11,751 | ---
language: "en"
thumbnail: "https://huggingface.co/nsi319"
tags:
- big_bird
- pytorch
- text-classification
- mobile app descriptions
- playstore
license: "mit"
inference: true
---
# Mobile App Classification
## Model description
BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. The model can handle input sequences of length up to 4,096 tokens.
The [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) model is fine-tuned to classify an mobile app description into one of **6 play store categories**.
Trained on 9000 samples of English App Descriptions and associated categories of apps available in [Google Play](https://play.google.com/store/apps).
## Fine-tuning
The model was fine-tuned for 5 epochs with a batch size of 16, a learning rate of 2e-05, and a maximum sequence length of 1024. Since this was a classification task, the model was trained with a cross-entropy loss function. The best evaluation f1 score achieved by the model was 0.8964259037209702, found after 4 epochs. The accuracy of the model on the test set was 0.8966.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("nsi319/bigbird-roberta-base-finetuned-app")
model = AutoModelForSequenceClassification.from_pretrained("nsi319/bigbird-roberta-base-finetuned-app")
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
classifier("From scores to signings, the ESPN App is here to keep you updated. Never miss another sporting moment with up-to-the-minute scores, latest news & a range of video content. Sign in and personalise the app to receive alerts for your teams and leagues. Wherever, whenever; the ESPN app keeps you connected.")
'''Output'''
[{'label': 'Sports', 'score': 0.9983325600624084}]
```
## Limitations
Training data consists of apps from 6 play store categories namely Education, Entertainment, Productivity, Sports, News & Magazines and Photography.
|
batterydata/batteryonlybert-cased-squad-v1 | a5fb832883f2dd06b310c0d987dc03461f0c2243 | 2022-03-03T20:25:04.000Z | [
"pytorch",
"bert",
"question-answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"transformers",
"question answering",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | false | batterydata | null | batterydata/batteryonlybert-cased-squad-v1 | 10 | null | transformers | 11,752 | ---
language: en
tags: question answering
license: apache-2.0
datasets:
- squad
- batterydata/battery-device-data-qa
metrics: squad
---
# BatteryOnlyBERT-cased for QA
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 3
base_LM_model = "batteryonlybert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.61,
"f1": 87.30,
```
Evaluated on the battery device dataset.
```
"precision": 64.28,
"recall": 82.72,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement |
giggio/Far50BrBERT-base | ae7c2ecb762d0a7fbba600812235e6df37417955 | 2022-03-07T05:24:15.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | giggio | null | giggio/Far50BrBERT-base | 10 | null | transformers | 11,753 | Entry not found |
mrm8488/spanish-TinyBERT-betito-finetuned-mnli | ff2473ffe8ec3f6b850dcfb9f73406125fcca55c | 2022-03-07T16:39:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | mrm8488 | null | mrm8488/spanish-TinyBERT-betito-finetuned-mnli | 10 | null | transformers | 11,754 | Entry not found |
FinScience/FS-distilroberta-fine-tuned | 559b00d987b698b4922d7c7a8174b29072c97e0b | 2022-03-07T17:17:48.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"transformers"
]
| text-classification | false | FinScience | null | FinScience/FS-distilroberta-fine-tuned | 10 | 1 | transformers | 11,755 | ---
language:
- en
---
# FS-distilroberta-fine-tuned
The model was obtained by fine-tuning the "mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis" model for sentiment analysis on financial news gathered by FinScience software. It predicts the sentiment of a news item with one label ("negative", "neutral" or "positive"). At the moment, the model works only in English.
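A minimal usage sketch with the `transformers` pipeline is given below; the example headline is made up for illustration.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="FinScience/FS-distilroberta-fine-tuned",
)

# Returns one of the labels described above: "negative", "neutral" or "positive".
print(classifier("Shares of the company dropped 10% after the earnings report."))
```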
## Training data
The training dataset consists of 2558 news titles that were manually labelled by the FinScience team using the doccano tool. A "neutral" label was assigned to those news items for which agreement was not reached. 70% (1790 titles) of the dataset was used as the training set, 15% (384) as the validation set and the remaining 15% as the test set. F1-score (macro) was selected as the evaluation metric.
| Set | Number of news | Scope |
| -------- | ----------------- | ----------------- |
| Training | 1790 | Training the model|
| Validation | 384 | Selecting the hyper-parameters |
| Test | 384 | Evaluating the performance|
## Accuracy
The table below summarizes the performance of the models that were tested on the same test set, consisting of 384 held-out titles:
| Model | Accuracy | F1-score (macro) |
| -------- | ---------------------- | ------------------- |
| FS-distilroberta-fine-tuned | 76%| 76%
|
ebrigham/EYY-Topic-Classification | d2e1d3f8534ae1c0c88a95f627faa41361826ca4 | 2022-03-13T19:20:44.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | false | ebrigham | null | ebrigham/EYY-Topic-Classification | 10 | 1 | transformers | 11,756 | Entry not found |
KoichiYasuoka/roberta-base-ukrainian-upos | 53064702793301da93896c3a100868d04e68b25b | 2022-05-07T13:34:10.000Z | [
"pytorch",
"roberta",
"token-classification",
"uk",
"dataset:universal_dependencies",
"dataset:ukr-models/Ukr-Synth",
"transformers",
"ukrainian",
"pos",
"ubertext",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
]
| token-classification | false | KoichiYasuoka | null | KoichiYasuoka/roberta-base-ukrainian-upos | 10 | 1 | transformers | 11,757 | ---
language:
- "uk"
tags:
- "ukrainian"
- "token-classification"
- "pos"
- "ubertext"
- "dependency-parsing"
datasets:
- "universal_dependencies"
- "ukr-models/Ukr-Synth"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "Свобода і незалежність – найголовніші умови успіху і процвітання."
---
# roberta-base-ukrainian-upos
## Model Description
This is a RoBERTa model pre-trained on Корпус UberText for POS-tagging and dependency-parsing, derived from [roberta-base-ukrainian](https://huggingface.co/KoichiYasuoka/roberta-base-ukrainian). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-ukrainian-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-ukrainian-upos")
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-ukrainian-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
momo/MOTOD_pre_trained | adb7fe7cd44c155cca7eef92a744184a1eae375d | 2022-03-10T07:29:29.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:apache-2.0"
]
| text-generation | false | momo | null | momo/MOTOD_pre_trained | 10 | null | transformers | 11,758 | ---
license: apache-2.0
---
|
amanm27/bert-base-uncased-sports-scouting | 50f78c3470cdbf922907b1fc6bd20a9f7effdf1b | 2022-03-10T07:12:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | amanm27 | null | amanm27/bert-base-uncased-sports-scouting | 10 | null | transformers | 11,759 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-sports-scouting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-sports-scouting
This model is a fine-tuned version of [amanm27/bert-base-uncased-sports](https://huggingface.co/amanm27/bert-base-uncased-sports) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 378 | 1.7194 |
| 2.0165 | 2.0 | 756 | 1.5709 |
| 1.6935 | 3.0 | 1134 | 1.5282 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
KoichiYasuoka/bert-large-german-upos | fb195ef67fc49d3805dfdeb30df81e6cfcf208d9 | 2022-03-11T03:06:12.000Z | [
"pytorch",
"bert",
"token-classification",
"de",
"dataset:universal_dependencies",
"transformers",
"german",
"pos",
"dependency-parsing",
"license:mit",
"autotrain_compatible"
]
| token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-large-german-upos | 10 | null | transformers | 11,760 | ---
language:
- "de"
tags:
- "german"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "mit"
pipeline_tag: "token-classification"
---
# bert-large-german-upos
## Model Description
This is a BERT model pre-trained with [UD_German-HDT](https://github.com/UniversalDependencies/UD_German-HDT) for POS-tagging and dependency-parsing, derived from [gbert-large](https://huggingface.co/deepset/gbert-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-german-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-german-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-german-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT | f3a7371011946f72450a249ded3ba546777860fe | 2022-03-12T11:49:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT | 10 | null | transformers | 11,761 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Biobert-base-cased-v1.2-finetuned-ner-CRAFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Biobert-base-cased-v1.2-finetuned-ner-CRAFT
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1878
- Precision: 0.8397
- Recall: 0.8366
- F1: 0.8382
- Accuracy: 0.9683
## Model description
This model performs Named Entity Recognition for six entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) corpus in English.
Entity tags have been normalized, replacing the original three-letter codes with full names, e.g. B-Protein, I-Chemical.
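A minimal usage sketch with the token-classification pipeline is shown below; the aggregation strategy and the example sentence are illustrative assumptions, not part of the original training setup.
```python
from transformers import pipeline

# Group word pieces into entity spans; the strategy choice is an assumption
ner = pipeline(
    "token-classification",
    model="StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT",
    aggregation_strategy="simple",
)

text = "The p53 protein regulates the cell cycle in Homo sapiens."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```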
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.11 | 1.0 | 1360 | 0.1668 | 0.7952 | 0.7917 | 0.7934 | 0.9611 |
| 0.0484 | 2.0 | 2720 | 0.1640 | 0.8224 | 0.8371 | 0.8297 | 0.9661 |
| 0.0261 | 3.0 | 4080 | 0.1812 | 0.8143 | 0.8447 | 0.8292 | 0.9662 |
| 0.0112 | 4.0 | 5440 | 0.1878 | 0.8397 | 0.8366 | 0.8382 | 0.9683 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
anwesham/mbert_ur | 2ec0560c98840a510fe0a6ff784c7348686d8ff4 | 2022-03-13T07:46:01.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | anwesham | null | anwesham/mbert_ur | 10 | null | transformers | 11,762 | Entry not found |
ComCom/skt_kogpt2-base-v2 | a01a98c40194cd000722ab4ca05b73dfe82b0a91 | 2022-03-14T07:37:27.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"ko",
"transformers",
"license:cc-by-nc-sa-4.0"
]
| text-generation | false | ComCom | null | ComCom/skt_kogpt2-base-v2 | 10 | null | transformers | 11,763 | ---
language: ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
- This model is forked from [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2).
- You can use this model in [Teachable-NLP](https://ainize.ai/teachable-nlp).
For more details: https://github.com/SKT-AI/KoGPT2
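A minimal generation sketch, assuming the fork ships the same tokenizer files as the upstream skt/kogpt2-base-v2 checkpoint; the Korean prompt and sampling parameters are only illustrations.
```python
from transformers import pipeline

# Text-generation pipeline over the forked KoGPT2 checkpoint
generator = pipeline("text-generation", model="ComCom/skt_kogpt2-base-v2")

# Korean prompt ("Hello"); decoding settings are illustrative
print(generator("안녕하세요", max_length=50, do_sample=True, top_k=50))
```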
|
keerthisaran/distilbert-base-uncased-finetuned-emotion | 36f2573d50da3813be96cfff47dc5ab0bc019285 | 2022-04-10T21:58:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | keerthisaran | null | keerthisaran/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,764 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.920435758296201
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.92
- F1: 0.9204
## Model description
More information needed
## Intended uses & limitations
More information needed
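For inference, the checkpoint can be wrapped in a text-classification pipeline, as sketched below; note that labels may surface as LABEL_0–LABEL_5 unless id2label was set in the config, and the example input is illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="keerthisaran/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset has six classes; this sentence is only an example
print(classifier("I am so happy you are here!"))
```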
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8464 | 1.0 | 250 | 0.3125 | 0.9085 | 0.9061 |
| 0.2476 | 2.0 | 500 | 0.2183 | 0.92 | 0.9204 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_English | e61d168f4ec22b35efb524023b80376753bf68e7 | 2022-03-14T23:42:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | StivenLancheros | null | StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_English | 10 | null | transformers | 11,765 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_English
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_English
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1614
- Precision: 0.8585
- Recall: 0.8623
- F1: 0.8604
- Accuracy: 0.9724
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0725 | 1.0 | 1360 | 0.1242 | 0.8090 | 0.8698 | 0.8383 | 0.9681 |
| 0.0281 | 2.0 | 2720 | 0.1541 | 0.8497 | 0.8549 | 0.8523 | 0.9705 |
| 0.0162 | 3.0 | 4080 | 0.1510 | 0.8390 | 0.8681 | 0.8533 | 0.9711 |
| 0.0053 | 4.0 | 5440 | 0.1614 | 0.8585 | 0.8623 | 0.8604 | 0.9724 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
moralstories/roberta-large_action-context-consequence | e66732dfcac4f3a2b4e93e37812622a481624d9e | 2022-03-15T04:56:54.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:afl-3.0"
]
| text-classification | false | moralstories | null | moralstories/roberta-large_action-context-consequence | 10 | null | transformers | 11,766 | ---
license: afl-3.0
---
|
moralstories/roberta-large_action-context | 3be98e34bcabde0d5312e00a71ca17fcb530edfd | 2022-03-15T17:29:14.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:afl-3.0"
]
| text-classification | false | moralstories | null | moralstories/roberta-large_action-context | 10 | null | transformers | 11,767 | ---
license: afl-3.0
---
|
facebook/regnet-x-006 | 9655014cb4d6b4ac14c15e2d60b3ffd730e43d58 | 2022-06-28T15:41:44.000Z | [
"pytorch",
"tf",
"regnet",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"transformers",
"vision",
"license:apache-2.0"
]
| image-classification | false | facebook | null | facebook/regnet-x-006 | 10 | null | transformers | 11,768 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-x-006")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-x-006")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
l3cube-pune/marathi-gpt | 8735155f098e4921a47fa2d13cffe525d79c60d8 | 2022-06-15T12:37:55.000Z | [
"pytorch",
"gpt2",
"text-generation",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"transformers",
"license:cc-by-4.0"
]
| text-generation | false | l3cube-pune | null | l3cube-pune/marathi-gpt | 10 | null | transformers | 11,769 | ---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaGPT
MahaGPT is a Marathi GPT-2 model pre-trained on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
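A minimal generation sketch is shown below; the Marathi prompt and the decoding parameters are illustrative assumptions.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("l3cube-pune/marathi-gpt")
model = AutoModelForCausalLM.from_pretrained("l3cube-pune/marathi-gpt")

# Any Marathi text can serve as a prompt; this one is only an example
inputs = tokenizer("भारत", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```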
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159).
```
@article{joshi2022l3cube,
title={L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2202.01159},
year={2022}
}
``` |
vpelloin/MEDIA_NLU_flaubert_uncased | 92113bf3bfa08897c81aeccaaff5c909291f8b12 | 2022-06-17T13:52:15.000Z | [
"pytorch",
"tensorboard",
"flaubert",
"token-classification",
"fr",
"transformers",
"bert",
"natural language understanding",
"NLU",
"spoken language understanding",
"SLU",
"understanding",
"MEDIA",
"autotrain_compatible"
]
| token-classification | false | vpelloin | null | vpelloin/MEDIA_NLU_flaubert_uncased | 10 | null | transformers | 11,770 | ---
language: fr
pipeline_tag: "token-classification"
widget:
- text: "je voudrais réserver une chambre à paris pour demain et lundi"
- text: "d'accord pour l'hôtel à quatre vingt dix euros la nuit"
- text: "deux nuits s'il vous plait"
- text: "dans un hôtel avec piscine à marseille"
tags:
- bert
- flaubert
- natural language understanding
- NLU
- spoken language understanding
- SLU
- understanding
- MEDIA
---
# vpelloin/MEDIA_NLU_flaubert_uncased (FBU)
This is a Natural Language Understanding (NLU) model for the French [MEDIA benchmark](https://catalogue.elra.info/en-us/repository/browse/ELRA-S0272/).
It maps each input word to one of 76 output concept tags.
This model is a fine-tuning of [`flaubert_base_uncased`](https://huggingface.co/flaubert/flaubert_base_uncased).
## Usage with Pipeline
```python
from transformers import pipeline
generator = pipeline(model="vpelloin/MEDIA_NLU_flaubert_uncased", task="token-classification")
print(generator("je voudrais réserver une chambre à paris pour demain et lundi"))
```
## Usage with AutoTokenizer/AutoModel
```python
from transformers import (
AutoTokenizer,
AutoModelForTokenClassification
)
tokenizer = AutoTokenizer.from_pretrained("vpelloin/MEDIA_NLU_flaubert_uncased")
model = AutoModelForTokenClassification.from_pretrained("vpelloin/MEDIA_NLU_flaubert_uncased")
sentences = [
"je voudrais réserver une chambre à paris pour demain et lundi",
"d'accord pour l'hôtel à quatre vingt dix euros la nuit",
"deux nuits s'il vous plait",
"dans un hôtel avec piscine à marseille"
]
inputs = tokenizer(sentences, padding=True, return_tensors='pt')
outputs = model(**inputs).logits
print([[model.config.id2label[i] for i in b] for b in outputs.argmax(dim=-1).tolist()])
```
|
swetava/distilbert-base-uncased-finetuned-emotion | e2349cbd1a4da137f7c1d806e5db9c1d9d10e6cf | 2022-03-17T18:42:58.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | swetava | null | swetava/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,771 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.924792312369614
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2259
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8432 | 1.0 | 250 | 0.3353 | 0.8975 | 0.8939 |
| 0.2571 | 2.0 | 500 | 0.2259 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
clapika2010/movies_predictions | 2aceb4e9a1b74b0f720a6beaab0025435e737bc9 | 2022-03-17T21:01:30.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | clapika2010 | null | clapika2010/movies_predictions | 10 | null | transformers | 11,772 | Entry not found |
brad1141/bert-finetuned-comp2 | 30e30e45129e71134c69804788a07eb9478cdec6 | 2022-03-18T16:39:56.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | brad1141 | null | brad1141/bert-finetuned-comp2 | 10 | null | transformers | 11,773 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-comp2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-comp2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9570
- Precision: 0.5169
- Recall: 0.6765
- F1: 0.5820
- Accuracy: 0.5820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8434 | 1.0 | 934 | 0.7147 | 0.4475 | 0.6252 | 0.5096 | 0.5096 |
| 0.6307 | 2.0 | 1868 | 0.5959 | 0.5058 | 0.6536 | 0.5585 | 0.5585 |
| 0.4691 | 3.0 | 2802 | 0.6555 | 0.4761 | 0.6865 | 0.5521 | 0.5521 |
| 0.334 | 4.0 | 3736 | 0.7211 | 0.5292 | 0.6682 | 0.5863 | 0.5863 |
| 0.2326 | 5.0 | 4670 | 0.8046 | 0.4886 | 0.6865 | 0.5682 | 0.5682 |
| 0.1625 | 6.0 | 5604 | 0.8650 | 0.4972 | 0.6851 | 0.5728 | 0.5728 |
| 0.1195 | 7.0 | 6538 | 0.9570 | 0.5169 | 0.6765 | 0.5820 | 0.5820 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
madhav16/finetuning-sentiment-model-3000-samples | 0393184fa167316593bed25e1d7998f748aeb198 | 2022-03-19T20:26:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | madhav16 | null | madhav16/finetuning-sentiment-model-3000-samples | 10 | null | transformers | 11,774 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8662420382165605
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3404
- Accuracy: 0.86
- F1: 0.8662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
pere/test-t5-small | 40852f0fcd7ccc23a4b514126dc96548831a44b8 | 2022-03-23T20:39:40.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"transformers",
"summarization",
"translation",
"license:apache-2.0",
"autotrain_compatible"
]
| translation | false | pere | null | pere/test-t5-small | 10 | null | transformers | 11,775 | ---
language:
- en
- fr
- ro
- de
datasets:
- c4
tags:
- summarization
- translation
license: apache-2.0
---
## Test T5 small conversion
This is a test repo for the conversion of T5X to HuggingFace Flax.
The current model is first converted from MTF to T5X using the conversion script included in the T5X library:
```bash
python3 -m t5x.scripts.convert_tf_checkpoint --gin_file=t5x/examples/t5/t5_1_0/small.gin --gin.convert_checkpoint.model=%MODEL --gin.convert_checkpoint.tf_checkpoint_path=\"gs://t5-data/pretrained_models/small/model.ckpt-1000000\" --gin.convert_checkpoint.output_dir=\"/tmp/t5x_checkpoints/t5_small\" --logtostderr
```
After creating the T5X model, the model is converted to Huggingface Flax by a modified version of the script from @stefan-it (https://gist.githubusercontent.com/stefan-it/30e4998ef159f33696e377a46f699d9f/raw/c19da5d067dc9d31d0b8115a79e8626186e11daa/convert_t5x_checkpoint_to_flax.py). The modified version is included in this repo. The modification is basically that the wi_0 and wi_1 layers are combined into wi. This might be a difference between t5_1_0 and t5_1_1
```bash
python3 convert_t5_checkpoint_to_flax.py --t5x_checkpoint_path /tmp/t5x_checkpoints/t5_small/checkpoint_1000000/ --flax_dump_folder_path /tmp/flax_dump_folder/ --config_name t5-small
```
The tokenizer.json was copied from https://huggingface.co/t5-small/blob/main/tokenizer.json.
To be able to use the widgets in HuggingFace, the model was converted to pyTorch by running:
```python
from transformers import T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained(".", from_flax=True)
model.save_pretrained(".")
```
|
Daniele/italian-spellchecker | 36d3a1ef5b79065d45697356d788e21d8f924e6f | 2022-03-23T10:19:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"it",
"transformers",
"seq2seq",
"license:mit",
"autotrain_compatible"
]
| text2text-generation | false | Daniele | null | Daniele/italian-spellchecker | 10 | null | transformers | 11,776 | ---
language:
- it
tags:
- seq2seq
license: mit
---
# Italian Contextual Spellchecker
The model is a fine-tuned version of [IT5](https://huggingface.co/models?search=it5)[1], adapted to perform contextual spellchecking as a sequence-to-sequence task.
### USAGE
The input sequence should have the structure <b>seq: <i>your text</i>.</b> Omitting the leading seq token or the final punctuation mark may degrade performance.
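A minimal sketch of querying the model through the text2text-generation pipeline; the misspelled example sentence is illustrative.
```python
from transformers import pipeline

# Load the spellchecker as a text2text-generation pipeline
corrector = pipeline("text2text-generation", model="Daniele/italian-spellchecker")

# Input must start with the "seq:" token and end with a punctuation mark
print(corrector("seq: il gato dorme sul divano."))
```
 |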
elihoole/distilgpt2-ttds | dce0970a45cc8af90c51883d7d299fc42f37a49a | 2022-03-22T19:41:05.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-generation | false | elihoole | null | elihoole/distilgpt2-ttds | 10 | null | transformers | 11,777 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-ttds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-ttds
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3666
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 40 | 4.5807 |
| No log | 2.0 | 80 | 4.4023 |
| No log | 3.0 | 120 | 4.3666 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.7.1
- Datasets 2.0.0
- Tokenizers 0.11.6
|
blckwdw61/sysformver1 | ac87341c901933fe588f0b7fa21f8faf24d795f7 | 2022-03-22T19:46:14.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | blckwdw61 | null | blckwdw61/sysformver1 | 10 | null | transformers | 11,778 | # CES BERT sysform model
A fine-tuned BERT (cased) model for token classification. |
krishnayogik/distilbert-base-uncased-finetuned-emotion | cad6e5ad919a0aa6c1100d377a39d8f659afa1ef | 2022-03-23T07:27:09.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | krishnayogik | null | krishnayogik/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,779 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9247696388302888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
- Accuracy: 0.9245
- F1: 0.9248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8359 | 1.0 | 250 | 0.3316 | 0.901 | 0.8967 |
| 0.2584 | 2.0 | 500 | 0.2258 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Helsinki-NLP/opus-mt-tc-base-uk-hu | 318e13e415ffc255e7fdfce3d4236b7fa26f0a25 | 2022-06-01T13:10:31.000Z | [
"pytorch",
"marian",
"text2text-generation",
"hu",
"uk",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
]
| translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-base-uk-hu | 10 | null | transformers | 11,780 | ---
language:
- hu
- uk
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-base-uk-hu
results:
- task:
name: Translation ukr-hun
type: translation
args: ukr-hun
dataset:
name: flores101-devtest
type: flores_101
args: ukr hun devtest
metrics:
- name: BLEU
type: bleu
value: 20.2
- task:
name: Translation ukr-hun
type: translation
args: ukr-hun
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: ukr-hun
metrics:
- name: BLEU
type: bleu
value: 44.0
---
# opus-mt-tc-base-uk-hu
Neural machine translation model for translating from Ukrainian (uk) to Hungarian (hu).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-08
* source language(s): ukr
* target language(s): hun
* model: transformer-align
* data: opusTCv20210807+pft ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+pft_transformer-align_2022-03-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opusTCv20210807+pft_transformer-align_2022-03-08.zip)
* more information released models: [OPUS-MT ukr-hun README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-hun/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"Я тобі винний 1000 доларів.",
"Я п'ю воду."
]
model_name = "pytorch-models/opus-mt-tc-base-uk-hu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# 1000 dollár a te hibád.
# Vizet iszom.
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-uk-hu")
print(pipe("Я тобі винний 1000 доларів."))
# expected output: 1000 dollár a te hibád.
```
## Benchmarks
* test set translations: [opusTCv20210807+pft_transformer-align_2022-03-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opusTCv20210807+pft_transformer-align_2022-03-08.test.txt)
* test set scores: [opusTCv20210807+pft_transformer-align_2022-03-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-hun/opusTCv20210807+pft_transformer-align_2022-03-08.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ukr-hun | tatoeba-test-v2021-08-07 | 0.67544 | 44.0 | 473 | 2472 |
| ukr-hun | flores101-devtest | 0.51953 | 20.2 | 1012 | 22183 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: f084bad
* port time: Wed Mar 23 21:54:12 EET 2022
* port machine: LM0-400-22516.local
|
Luttufuttu/finetuning-sentiment-model-3000-samples | 7ec96eada69b56c0bbc7d0f41ccc415ce2858dd3 | 2022-03-24T12:59:25.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Luttufuttu | null | Luttufuttu/finetuning-sentiment-model-3000-samples | 10 | null | transformers | 11,781 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8679245283018867
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3416
- Accuracy: 0.86
- F1: 0.8679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Yaxin/electra-base-discriminator-yelp-mlm | 4ef0aed468011842c8669f1b7d39ab05b7bbe3a2 | 2022-03-24T14:22:12.000Z | [
"pytorch",
"electra",
"fill-mask",
"dataset:yelp_review_full",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | Yaxin | null | Yaxin/electra-base-discriminator-yelp-mlm | 10 | null | transformers | 11,782 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: electra-base-discriminator-yelp-mlm
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: yelp_review_full yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.678284391624956
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-yelp-mlm
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the yelp_review_full yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5550
- Accuracy: 0.6783
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
hamedkhaledi/persain-flair-upos | d663c4703df738fff7ce2a5872898da9ebf6f30a | 2022-04-03T22:15:00.000Z | [
"pytorch",
"fa",
"dataset:ontonotes",
"flair",
"token-classification",
"sequence-tagger-model"
]
| token-classification | false | hamedkhaledi | null | hamedkhaledi/persain-flair-upos | 10 | null | flair | 11,783 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- fa
datasets:
- ontonotes
widget:
- text: "مقامات مصری به خاطر حفظ ثبات کشور در منطقهای پرآشوب بر خود میبالند ، در حالی که این کشور در طول ۱۶ سال گذشته تنها هشت سال آنرا بدون اعلام وضعیت اضطراری سپری کرده است ."
---
## Persian Universal Part-of-Speech Tagging in Flair
This is the universal part-of-speech tagging model for Persian that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **97,73** (UD_PERSIAN)
Predicts Universal POS tags:
| **tag** | **meaning** |
|:---------------------------------:|:-----------:|
| ADJ | adjective |
| ADP | adposition |
| ADV | adverb |
| AUX | auxiliary |
| CCONJ | coordinating conjunction |
| DET | determiner |
| INTJ | interjection |
| NOUN | noun |
| NUM | numeral |
| PART | particle |
| PRON | pronoun |
| PUNCT | punctuation |
| SCONJ | subordinating conjunction |
| VERB | verb |
| X | other |
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("hamedkhaledi/persain-flair-upos")
# make example sentence
sentence = Sentence("مقامات مصری به خاطر حفظ ثبات کشور در منطقهای پرآشوب بر خود میبالند .")
tagger.predict(sentence)
#print result
print(sentence.to_tagged_string())
```
This yields the following output:
```
مقامات <NOUN> مصری <ADJ> به <ADP> خاطر <NOUN> حفظ <NOUN> ثبات <NOUN> کشور <NOUN> در <ADP> منطقهای <NOUN> پرآشوب <ADJ> بر <ADP> خود <PRON> میبالند <VERB> . <PUNCT>
```
---
### Results
- F-score (micro) 0.9773
- F-score (macro) 0.9461
- Accuracy 0.9773
```
By class:
precision recall f1-score support
NOUN 0.9770 0.9849 0.9809 6420
ADP 0.9947 0.9916 0.9932 1909
ADJ 0.9342 0.9128 0.9234 1525
PUNCT 1.0000 1.0000 1.0000 1365
VERB 0.9840 0.9711 0.9775 1141
CCONJ 0.9912 0.9937 0.9925 794
AUX 0.9622 0.9799 0.9710 546
PRON 0.9751 0.9865 0.9808 517
SCONJ 0.9797 0.9757 0.9777 494
NUM 0.9948 1.0000 0.9974 385
ADV 0.9343 0.9033 0.9185 362
DET 0.9773 0.9711 0.9742 311
PART 0.9916 1.0000 0.9958 237
INTJ 0.8889 0.8000 0.8421 10
X 0.7143 0.6250 0.6667 8
micro avg 0.9773 0.9773 0.9773 16024
macro avg 0.9533 0.9397 0.9461 16024
weighted avg 0.9772 0.9773 0.9772 16024
samples avg 0.9773 0.9773 0.9773 16024
Loss: 0.12471389770507812
``` |
fav-kky/wav2vec2-base-cs-80k-ClTRUS | 824c86dce591f53311dae92a2c8a4dd57cc1b02e | 2022-06-20T14:27:49.000Z | [
"pytorch",
"wav2vec2",
"pretraining",
"cs",
"arxiv:2206.07627",
"arxiv:2206.07666",
"transformers",
"Czech",
"KKY",
"FAV",
"license:cc-by-nc-sa-4.0"
]
| null | false | fav-kky | null | fav-kky/wav2vec2-base-cs-80k-ClTRUS | 10 | 1 | transformers | 11,784 | ---
language: "cs"
tags:
- Czech
- KKY
- FAV
license: "cc-by-nc-sa-4.0"
---
# wav2vec2-base-cs-80k-ClTRUS
**C**zech **l**anguage **TR**ansformer from **U**nlabeled **S**peech (ClTRUS) is a monolingual Czech Wav2Vec 2.0 base model pre-trained on 80 thousand hours of Czech speech.
This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
**Note:** This is a checkpoint of the model after 4 epochs over the whole dataset. If you want some earlier or later checkpoints, please feel free to contact the author (jlehecka(at)kky.zcu.cz).
## Pretraining data
More than 80 thousand hours of unlabeled Czech speech:
- recordings from radio (22k hours),
- unlabeled data from VoxPopuli dataset (18.7k hours),
- TV shows (15k hours),
- shadow speakers (12k hours),
- sports (5k hours),
- telephone data (2k hours),
- and a smaller amount of data from several other domains including the public CommonVoice dataset.
## Usage
Inputs must be 16kHz mono audio files.
This model can be used e.g. to extract per-frame contextual embeddings from audio:
```python
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor
import torchaudio
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("fav-kky/wav2vec2-base-cs-80k-ClTRUS")
model = Wav2Vec2Model.from_pretrained("fav-kky/wav2vec2-base-cs-80k-ClTRUS")
speech_array, sampling_rate = torchaudio.load("/path/to/audio/file.wav")
inputs = feature_extractor(
speech_array,
sampling_rate=16_000,
return_tensors="pt"
)["input_values"][0]
output = model(inputs)
embeddings = output.last_hidden_state.detach().numpy()[0]
```
## Speech recognition results
After fine-tuning, the model scored the following results on public datasets:
- Czech portion of CommonVoice v7.0: **WER = 3.8%**
- Czech portion of VoxPopuli: **WER = 8.8%**
See our paper for details.
## Paper
The preprint of our paper (accepted to INTERSPEECH 2022) is available at http://arxiv.org/abs/2206.07627
## Citation
If you find this model useful, please cite our paper:
```
@inproceedings{wav2vec2-base-cs-80k-ClTRUS,
title = {Exploring Capabilities of Monolingual Audio Transformers using Large Datasets in Automatic Speech Recognition of {C}zech},
author = {
Jan Lehe\v{c}ka and
Jan \v{S}vec and
Ale\v{s} Pra\v{z}\'ak and
Josef V. Psutka
},
booktitle = {{I}nterspeech 2022},
publisher = {{ISCA}},
year = {2022},
note = {(in press)},
url = {https://arxiv.org/abs/2206.07627},
}
```
## Related works
- [Transformer-based Automatic Speech Recognition of Formal and Colloquial Czech in MALACH Project](https://arxiv.org/abs/2206.07666)
- [Yehor/wav2vec2-xls-r-base-uk-with-small-lm](https://huggingface.co/Yehor/wav2vec2-xls-r-base-uk-with-small-lm)
|
azizbarank/mbert-finnic-ner | ce5b0fd5041ba65dd21a9021e23782753e7a22be | 2022-03-25T13:55:16.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | azizbarank | null | azizbarank/mbert-finnic-ner | 10 | null | transformers | 11,785 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: mbert-finnic-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finnic-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Finnish and Estonian parts of the "WikiANN" dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1427
- Precision: 0.9090
- Recall: 0.9156
- F1: 0.9123
- Accuracy: 0.9672
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1636 | 1.0 | 2188 | 0.1385 | 0.8906 | 0.9000 | 0.8953 | 0.9601 |
| 0.0991 | 2.0 | 4376 | 0.1346 | 0.9099 | 0.9095 | 0.9097 | 0.9660 |
| 0.0596 | 3.0 | 6564 | 0.1427 | 0.9090 | 0.9156 | 0.9123 | 0.9672 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
IIC/mbart-large-lfqa-es | b28c031270ce5683f935b97a26223a6d8da50fda | 2022-04-04T02:14:04.000Z | [
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"es",
"dataset:IIC/lfqa_es",
"transformers",
"seq2seq",
"abstractive question answering",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | IIC | null | IIC/mbart-large-lfqa-es | 10 | 2 | transformers | 11,786 | ---
language:
- es
tags:
# - summarization # Example: audio
- seq2seq # Example: automatic-speech-recognition
- abstractive question answering
datasets:
- IIC/lfqa_es
metrics:
- rouge2
- rouge1
- rougel
- rougelsum
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: mbart-large-lfqa-es
results:
- task:
type: question answering # Required. Example: automatic-speech-recognition
name: abstractive question answering # Optional. Example: Speech Recognition
dataset:
type: IIC/lfqa_es # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: IIC/lfqa_es # Required. Example: Common Voice zh-CN
args: es # Optional. Example: zh-CN
metrics:
- type: rouge1 # Required. Example: wer
value: 0.5107 # Required. Example: 20.90
name: rouge1 # Optional. Example: Test WER
- type: rouge2
value: 0.0042
name: rouge2
- type: rougeL
value: 0.5108
name: rougeL
- type: rougeLsum
value: 0.5106
name: rougeLsum
---
This model is a fine-tuned version of [MBART-large](https://huggingface.co/facebook/mbart-large-cc25), a multilingual text-to-text encoder-decoder transformer. It is trained on [lfqa-spanish](https://huggingface.co/datasets/IIC/lfqa_spanish), an automatically translated dataset, originally created in English in [this repository](https://huggingface.co/datasets/vblagoje/lfqa). For more details about the dataset, check its model card.
For optimizing the model, we used [Adafactor](https://paperswithcode.com/method/adafactor) optimizer, which is better suited for t5-class models than AdamW (the typically used one). We used linear decay, and the full hyperparameters for this model were:
```json
{
"learning_rate": 2e-4,
"num_train_epochs": 4,
"adam_beta1": 0.9,
"adam_beta2": 0.999,
"adam_epsilon": 1e-8,
"total_train_batch_size": 64,
"warmup_ratio": 0.06,
}
```
This model is therefore trained to provide long-form answers to open-domain questions, given context paragraphs that can be used to answer them; its main task is abstractive question answering.
The results it obtains on the validation set of this dataset (it does not have a test set), with num_beams = 8 and maximum target sequence length = 360, are:
```json
{"rouge1": 0.5107, "rouge2": 0.0042, "rougeL": 0.5108, "rougeLsum": 0.5106, "gen_len": 201.7371}
```
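A minimal generation sketch with the beam-search settings reported above is given below; the question/context concatenation template is an assumption and may need to be adapted to how the training examples were serialized.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("IIC/mbart-large-lfqa-es")
model = AutoModelForSeq2SeqLM.from_pretrained("IIC/mbart-large-lfqa-es")

question = "¿Qué es la fotosíntesis?"
context = "La fotosíntesis es el proceso por el cual las plantas convierten la luz solar en energía química."

# Assumed input template: question followed by the supporting context
text = f"question: {question} context: {context}"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

output_ids = model.generate(**inputs, num_beams=8, max_length=360)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```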
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
lkm2835/distilbert-imdb | 35f9ea573fa8403c87c390cffb1a14777481acab | 2022-07-17T14:47:59.000Z | [
"pytorch",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | lkm2835 | null | lkm2835/distilbert-imdb | 10 | null | transformers | 11,787 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 391 | 0.1849 | 0.9281 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
studio-ousia/mluke-large-lite-finetuned-conll-2003 | 77a0be94d83e5836c82add4cdddffd07112888d3 | 2022-03-28T07:59:25.000Z | [
"pytorch",
"luke",
"transformers",
"license:apache-2.0"
]
| null | false | studio-ousia | null | studio-ousia/mluke-large-lite-finetuned-conll-2003 | 10 | null | transformers | 11,788 | ---
license: apache-2.0
---
|
sophieb/electricidad-small-discriminator-finetuned-noticias-falsas-en-espaol-fakenews | 33b56c5f1b5f8fa845804703f5b379fa9d441390 | 2022-03-31T02:13:32.000Z | [
"pytorch",
"tensorboard",
"electra",
"text-classification",
"transformers"
]
| text-classification | false | sophieb | null | sophieb/electricidad-small-discriminator-finetuned-noticias-falsas-en-espaol-fakenews | 10 | null | transformers | 11,789 | Entry not found |
asafaya/hubert-large-turkish | d769a6e6ce52cce2c8fd151d8173f84d3ed71f14 | 2022-03-29T12:57:59.000Z | [
"pytorch",
"hubert",
"feature-extraction",
"transformers",
"license:cc-by-nc-4.0"
]
| feature-extraction | false | asafaya | null | asafaya/hubert-large-turkish | 10 | null | transformers | 11,790 | ---
license: cc-by-nc-4.0
---
|
tbosse/bert-base-german-cased-finetuned-subj_v1 | e810312f610286eb3af481dd8625ef38f1a33ac5 | 2022-03-29T15:59:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | tbosse | null | tbosse/bert-base-german-cased-finetuned-subj_v1 | 10 | null | transformers | 11,791 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-german-cased-finetuned-subj_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-finetuned-subj_v1
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1594
- Precision: 0.1875
- Recall: 0.0077
- F1: 0.0147
- Accuracy: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 136 | 0.1591 | 1.0 | 0.0051 | 0.0102 | 0.9523 |
| No log | 2.0 | 272 | 0.1571 | 0.375 | 0.0077 | 0.015 | 0.9518 |
| No log | 3.0 | 408 | 0.1594 | 0.1875 | 0.0077 | 0.0147 | 0.9508 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
princeton-nlp/CoFi-SST2-s95 | 931c4672d8f7c16cb50aadcc0aea585b4caaaf65 | 2022-05-01T01:19:38.000Z | [
"pytorch",
"bert",
"text-classification",
"arxiv:2204.00408",
"transformers"
]
| text-classification | false | princeton-nlp | null | princeton-nlp/CoFi-SST2-s95 | 10 | null | transformers | 11,792 | This is a model checkpoint for "[Structured Pruning Learns Compact and Accurate Models](https://arxiv.org/pdf/2204.00408.pdf)". The model is pruned from `bert-base-uncased` to a 95% sparsity on dataset SST-2. Please go to [our repository](https://github.com/princeton-nlp/CoFiPruning) for more details on how to use the model for inference. Note that you would have to use the model class specified in our repository to load the model.
|
neibla/distilbert-base-uncased-finetuned-emotion | 79b64c36b2ad9ed6f7c5d1c71769f2afd495fc8d | 2022-03-30T08:56:26.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | neibla | null | neibla/distilbert-base-uncased-finetuned-emotion | 10 | null | transformers | 11,793 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9254917237562972
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.9255
- F1: 0.9255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.855 | 1.0 | 250 | 0.3211 | 0.905 | 0.9017 |
| 0.2561 | 2.0 | 500 | 0.2187 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Marif/Arif_fake_news_classifier | 57eb62d20474a47b9019f2be3b7716476ba40250 | 2022-03-31T04:19:29.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"license:afl-3.0"
]
| text-classification | false | Marif | null | Marif/Arif_fake_news_classifier | 10 | null | transformers | 11,794 | ---
license: afl-3.0
---
|
deepspeechvision/wav2vec2_hindi_asr | b8c632e90d97cd6d8922517f4596ec7346c3ff70 | 2022-03-31T18:03:34.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | deepspeechvision | null | deepspeechvision/wav2vec2_hindi_asr | 10 | null | transformers | 11,795 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_hindi_asr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_hindi_asr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
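A minimal inference sketch using the Transformers automatic-speech-recognition `pipeline` (the audio path is a placeholder for a local recording; wav2vec2 models expect 16 kHz mono audio):
```
from transformers import pipeline
# Load the fine-tuned checkpoint from the Hub
asr = pipeline(
    "automatic-speech-recognition",
    model="deepspeechvision/wav2vec2_hindi_asr",
)
# "sample_hindi.wav" is a placeholder path to a local audio file
print(asr("sample_hindi.wav")["text"])
```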
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
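The list above corresponds roughly to the following Transformers `TrainingArguments`; a hedged sketch, with the output directory assumed and the Adam betas/epsilon left at their library defaults:
```
from transformers import TrainingArguments
# Approximate reconstruction of the configuration listed above
training_args = TrainingArguments(
    output_dir="wav2vec2_hindi_asr",  # assumed name, not stated in the card
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # native AMP mixed-precision training
)
```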
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Teyronebigdick/DialoGPT-small-terrydavis | 9bb5fe4d7d1241356b63c0df5b8de21f7ea9fed3 | 2022-04-01T21:41:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Teyronebigdick | null | Teyronebigdick/DialoGPT-small-terrydavis | 10 | null | transformers | 11,796 | ---
tags:
- conversational
---
# Terry Davis DialoGPT Model |
hackathon-pln-es/biomedtra-small-es-squad2-es | d29137087bd9ac4cd18d41cff84069090990caf8 | 2022-04-03T14:51:12.000Z | [
"pytorch",
"electra",
"question-answering",
"es",
"dataset:squad_es",
"dataset:hackathon-pln-es/biomed_squad_es_v2",
"transformers",
"autotrain_compatible"
]
| question-answering | false | hackathon-pln-es | null | hackathon-pln-es/biomedtra-small-es-squad2-es | 10 | null | transformers | 11,797 | ---
language: es
datasets:
- squad_es
- hackathon-pln-es/biomed_squad_es_v2
metrics:
- "f1"
---
# biomedtra-small for QA
This model was trained as part of the "Extractive QA Biomedicine" project developed during the 2022 [Hackathon](https://somosnlp.org/hackathon) organized by SOMOS NLP.
## Motivation
Recent research has made Spanish language models trained on biomedical corpora available. This project explores the use of these new models to build extractive question answering models for biomedicine and compares their effectiveness with that of general masked language models.
The models trained during the [Hackathon](https://somosnlp.org/hackathon) were:
- [hackathon-pln-es/roberta-base-bne-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-bne-squad2-es)
- [hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es)
- [hackathon-pln-es/roberta-base-biomedical-es-squad2-es](https://huggingface.co/hackathon-pln-es/roberta-base-biomedical-es-squad2-es)
- [hackathon-pln-es/biomedtra-small-es-squad2-es](https://huggingface.co/hackathon-pln-es/biomedtra-small-es-squad2-es)
## Description
This model is a fine-tuned version of [mrm8488/biomedtra-small-es](https://huggingface.co/mrm8488/biomedtra-small-es) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset.
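A minimal inference sketch using the Transformers question-answering `pipeline` (the question and context below are illustrative and not taken from the evaluation data):
```
from transformers import pipeline
# Load the fine-tuned checkpoint from the Hub
qa = pipeline(
    "question-answering",
    model="hackathon-pln-es/biomedtra-small-es-squad2-es",
)
# Illustrative Spanish biomedical question/context pair
result = qa(
    question="¿Qué órgano produce la insulina?",
    context="La insulina es una hormona producida por el páncreas que regula el nivel de glucosa en la sangre.",
)
print(result["answer"], result["score"])
```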
## Hyperparameters
The hyperparameters were chosen based on those used in [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2), an English-language model trained for a similar purpose.
```
--num_train_epochs 10 \
--learning_rate 1e-4 \
--max_seq_length 384 \
--doc_stride 128 \
```
## Performance
Evaluated on the [hackathon-pln-es/biomed_squad_es_v2](https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2) dev set.
|Model |Base Model Domain|exact |f1 |HasAns_exact|HasAns_f1|NoAns_exact|NoAns_f1|
|--------------------------------------------------------------|-----------------|-------|-------|------------|---------|-----------|--------|
|hackathon-pln-es/roberta-base-bne-squad2-es |General |67.6341|75.6988|53.7367 |70.0526 |81.2174 |81.2174 |
|hackathon-pln-es/roberta-base-biomedical-clinical-es-squad2-es|Biomedical |66.8426|75.2346|53.0249 |70.0031 |80.3478 |80.3478 |
|hackathon-pln-es/roberta-base-biomedical-es-squad2-es |Biomedical |67.6341|74.5612|47.6868 |61.7012 |87.1304 | 87.1304|
|hackathon-pln-es/biomedtra-small-es-squad2-es |Biomedical |34.4767|44.3294|45.3737 |65.307 |23.8261 |23.8261 |
## Team
Santiago Maximo: [smaximo](https://huggingface.co/smaximo) |
VikasMani/wikineural-multilingual-ner | 929b58a0a68dd34a3a4355e97c7a138ee5eefebd | 2022-04-02T10:57:38.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | VikasMani | null | VikasMani/wikineural-multilingual-ner | 10 | null | transformers | 11,798 | Entry not found |
crazypegasus/GPT-JonSnow | d46c81a5ea9550f5ae999a9ce61032a40bb7b073 | 2022-04-03T15:38:04.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | crazypegasus | null | crazypegasus/GPT-JonSnow | 10 | null | transformers | 11,799 | ---
tags:
- conversational
---
# JonSnow GPT model
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.