modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
speechbrain/asr-wav2vec2-commonvoice-rw | 126572edaabc71a69e1ac004b6c444a2aa5d58db | 2022-05-25T12:34:08.000Z | [
"wav2vec2",
"feature-extraction",
"rw",
"dataset:commonvoice",
"arxiv:2106.04624",
"speechbrain",
"CTC",
"Attention",
"pytorch",
"Transformer",
"hf-asr-leaderboard",
"license:apache-2.0",
"automatic-speech-recognition"
] | automatic-speech-recognition | false | speechbrain | null | speechbrain/asr-wav2vec2-commonvoice-rw | 32 | 1 | speechbrain | 7,000 | ---
language: "rw"
thumbnail:
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- Attention
- pytorch
- speechbrain
- Transformer
- hf-asr-leaderboard
license: "apache-2.0"
datasets:
- commonvoice
metrics:
- wer
- cer
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# wav2vec 2.0 with CTC/Attention trained on CommonVoice Kinyarwanda (No LM)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on CommonVoice (Kinyarwanda Language) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is as follows:
| Release | Test WER | GPUs |
|:--------------:|:--------------:| :--------:|
| 03-06-21 | 18.91 | 2xV100 32GB |
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (unigram) that transforms words into subword units and is trained with
the training transcriptions (train.tsv) of CommonVoice (RW).
- Acoustic model (wav2vec2.0 + CTC/Attention). A pretrained wav2vec 2.0 model ([wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)) is combined with two DNN layers and fine-tuned on CommonVoice RW (Kinyarwanda).
The obtained final acoustic representation is given to the CTC and attention decoders.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *transcribe_file* if needed.
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```
pip install speechbrain transformers
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Kinyarwanda)
```python
from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-wav2vec2-commonvoice-rw", savedir="pretrained_models/asr-wav2vec2-commonvoice-rw")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
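For example (the same call as above, assuming a CUDA device is available):
```python
from speechbrain.pretrained import EncoderDecoderASR

asr_model = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-wav2vec2-commonvoice-rw",
    savedir="pretrained_models/asr-wav2vec2-commonvoice-rw",
    run_opts={"device": "cuda"},  # run the model and the transcription on the GPU
)
asr_model.transcribe_file("speechbrain/asr-wav2vec2-commonvoice-rw/example.mp3")
```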
## Parallel Inference on a Batch
Please refer to [this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to see how to transcribe a batch of input sentences in parallel using a pre-trained model.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/CommonVoice/ASR/seq2seq
python train_with_wav2vec.py hparams/train_rw_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1tjz6IZmVRkuRE97E7h1cXFoGTer7pT73?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
superb/hubert-large-superb-sid | f1c93f39c0efc6d2c4f8fc690daed85a4d5b6efc | 2021-11-04T16:03:32.000Z | [
"pytorch",
"hubert",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/hubert-large-superb-sid | 32 | null | transformers | 7,001 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- hubert
- audio-classification
license: apache-2.0
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
# Hubert-Large for Speaker Identification
## Model description
This is a ported version of
[S3PRL's Hubert for the SUPERB Speaker Identification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/voxceleb1).
The base model is [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information, refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051).
## Task and dataset description
Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
classification, where speakers are in the same predefined set for both training and testing. The widely
used [VoxCeleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html) dataset is adopted.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification).
## Usage examples
You can use the model via the Audio Classification pipeline:
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
classifier = pipeline("audio-classification", model="superb/hubert-large-superb-sid")
labels = classifier(dataset[0]["file"], top_k=5)
```
Or use the model directly:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import HubertForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "si", split="test")
dataset = dataset.map(map_to_array)
model = HubertForSequenceClassification.from_pretrained("superb/hubert-large-superb-sid")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/hubert-large-superb-sid")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:2]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
labels = [model.config.id2label[_id] for _id in predicted_ids.tolist()]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9033` | `0.9035` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
susumu2357/bert-base-swedish-squad2 | 8287e4e54efca8d0e6973e94529d1ba1019732d4 | 2021-05-20T07:20:04.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"question-answering",
"sv",
"dataset:susumu2357/squad_v2_sv",
"transformers",
"squad",
"license:apache-2.0",
"autotrain_compatible"
] | question-answering | false | susumu2357 | null | susumu2357/bert-base-swedish-squad2 | 32 | 1 | transformers | 7,002 | ---
language:
- sv
tags:
- squad
license: apache-2.0
datasets:
- susumu2357/squad_v2_sv
metrics:
- squad_v2
---
# Swedish BERT Fine-tuned on SQuAD v2
This model is a fine-tuning checkpoint of Swedish BERT on SQuAD v2.
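The card does not include a usage snippet; a minimal sketch with the Transformers question-answering pipeline (the Swedish question and context below are made up, for illustration only) could look like this:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="susumu2357/bert-base-swedish-squad2")

# made-up Swedish question/context pair, only for illustration
result = qa(
    question="Vad heter Sveriges huvudstad?",
    context="Stockholm är Sveriges huvudstad och även landets största stad.",
)
print(result["answer"], result["score"])
```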
## Training data
Fine-tuning was done based on the pre-trained model [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased).
Training and dev datasets are our
[Swedish translation of SQuAD v2](https://github.com/susumu2357/SQuAD_v2_sv).
The dataset is also available [here](https://huggingface.co/datasets/susumu2357/squad_v2_sv) on Hugging Face Datasets.
## Hyperparameters
```
batch_size = 16
n_epochs = 2
max_seq_len = 386
learning_rate = 3e-5
warmup_steps = 2900 # warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Eval results
```
'exact': 66.72642524202223
'f1': 70.11149581003404
'total': 11156
'HasAns_exact': 55.574745730186144
'HasAns_f1': 62.821693965983044
'HasAns_total': 5211
'NoAns_exact': 76.50126156433979
'NoAns_f1': 76.50126156433979
'NoAns_total': 5945
```
## Limitations and bias
This model may contain biases due to mistranslations of the SQuAD dataset.
## BibTeX entry and citation info
```bibtex
@misc{svSQuADbert,
author = {Susumu Okazawa},
title = {Swedish BERT Fine-tuned on Swedish SQuAD 2.0},
year = {2021},
howpublished = {\url{https://huggingface.co/susumu2357/bert-base-swedish-squad2}},
}
```
|
timm/eca_nfnet_l0 | d1c0fff069f8cb8d83c1f42767cc4fcc8b21a3f3 | 2021-09-07T18:35:59.000Z | [
"pytorch",
"dataset:imagenet",
"arxiv:2102.06171",
"arxiv:1910.03151",
"arxiv:1903.10520",
"arxiv:1906.02659",
"arxiv:2010.15052",
"arxiv:1909.13719",
"timm",
"image-classification",
"normalization-free",
"efficient-channel-attention",
"license:apache-2.0"
] | image-classification | false | timm | null | timm/eca_nfnet_l0 | 32 | 1 | timm | 7,003 | ---
tags:
- image-classification
- timm
- normalization-free
- efficient-channel-attention
license: apache-2.0
datasets:
- imagenet
library_tag: timm
---
# ECA-NFNet-L0
Pretrained on [ImageNet](http://www.image-net.org/), this model is a variant of the [NFNet (Normalization Free)](https://arxiv.org/abs/2102.06171) model family.
## Model description
This model variant was slimmed down from the original F0 variant in the paper for improved runtime characteristics (throughput, memory use) in PyTorch, on a GPU accelerator. It utilizes [Efficient Channel Attention (ECA)](https://arxiv.org/abs/1910.03151) instead of Squeeze-Excitation. It also features SiLU activations instead of the usual GELU.
Like other models in the NF family, this model contains no normalization layers (batch, group, etc). The models make use of [Weight Standardized](https://arxiv.org/abs/1903.10520) convolutions with additional scaling values in lieu of normalization layers.
## Intended uses & limitations
You can use the raw model to classify images along the 1,000 ImageNet labels, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation or
object detection, to name a few).
### How to use
You can use this model with the usual factory method in [`timm`](https://github.com/rwightman/pytorch-image-models):
```python
import PIL
import timm
import torch
model = timm.create_model("hf_hub:timm/eca_nfnet_l0")
config = model.default_cfg
img_size = config["test_input_size"][-1] if "test_input_size" in config else config["input_size"][-1]
transform = timm.data.transforms_factory.transforms_imagenet_eval(
img_size=img_size,
interpolation=config["interpolation"],
mean=config["mean"],
std=config["std"],
crop_pct=config["crop_pct"],
)
img = PIL.Image.open(path_to_an_image)
img = img.convert("RGB")
input_tensor = transform(img)
input_tensor = input_tensor.unsqueeze(0)
# ^ batch size = 1
with torch.no_grad():
output = model(input_tensor)
probs = output.squeeze(0).softmax(dim=0)
```
### Limitations and bias
The training images in the dataset are usually photos clearly representing one of the 1,000 labels. The model will
probably not generalize well on drawings or images containing multiple objects with different labels.
The training images in the dataset come mostly from the US (45.4%) and Great Britain (7.6%). As such the model or
models created by fine-tuning this model will work better on images picturing scenes from these countries (see
[this paper](https://arxiv.org/abs/1906.02659) for examples).
More generally, [recent research](https://arxiv.org/abs/2010.15052) has shown that even models trained in an
unsupervised fashion on ImageNet (i.e. without using the labels) will pick up racial and gender bias represented in
the training images.
## Training data
This model was pretrained on [ImageNet](http://www.image-net.org/), a dataset consisting of 14 million
hand-annotated images across 1,000 categories.
## Training procedure
For stability during training it is highly recommended to train all NFNet variants with gradient clipping enabled. This model was trained with an Adaptive Gradient Clipping (AGC) factor of 0.015 as described in [the paper](https://arxiv.org/abs/2102.06171). Similar to the paper, a cosine learning rate decay was employed using SGD w/ nesterov. Moderate to heavy augmentation ([RandAugment](https://arxiv.org/abs/1909.13719)) and regularization (dropout, stochastic depth) is recommended for training.
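For illustration, here is a simplified sketch of unit-wise adaptive gradient clipping as described in the paper. It is not the exact implementation used to train this checkpoint (timm ships its own); it is only meant to show the idea.
```python
import torch

def adaptive_grad_clip_(parameters, clipping=0.015, eps=1e-3):
    # Rescale each parameter's gradient whenever its norm exceeds `clipping`
    # times the parameter norm, computed unit-wise (per output row of 2D+ weights).
    for p in parameters:
        if p.grad is None:
            continue
        w = p.detach().reshape(p.shape[0], -1) if p.ndim > 1 else p.detach().reshape(1, -1)
        g = p.grad.detach().reshape(w.shape)
        w_norm = w.norm(dim=1, keepdim=True).clamp_min(eps)
        g_norm = g.norm(dim=1, keepdim=True).clamp_min(1e-6)
        scale = (clipping * w_norm / g_norm).clamp(max=1.0)
        p.grad.detach().copy_((g * scale).reshape(p.grad.shape))

# inside a training loop:
#   loss.backward()
#   adaptive_grad_clip_(model.parameters())
#   optimizer.step()
```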
### Preprocessing
The images are resized using bicubic interpolation to 288x288 and normalized with the usual ImageNet statistics.
## Evaluation results
This model has a top-1 accuracy of 82.6% and a top-5 accuracy of 96.5% on the ImageNet evaluation set.
### BibTeX entry and citation info
NFNet model architecture:
```bibtex
@article{brock2021high,
author={Andrew Brock and Soham De and Samuel L. Smith and Karen Simonyan},
title={High-Performance Large-Scale Image Recognition Without Normalization},
journal={arXiv preprint arXiv:2102.06171},
year={2021}
}
```
L0 model variant & pretraining:
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
uer/chinese_roberta_L-12_H-128 | dfbf1a17cb00693e63f97ddd65393a3b3cbaa6e1 | 2022-07-15T08:15:32.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"dataset:CLUECorpusSmall",
"arxiv:1909.05658",
"arxiv:1908.08962",
"transformers",
"autotrain_compatible"
] | fill-mask | false | uer | null | uer/chinese_roberta_L-12_H-128 | 32 | null | transformers | 7,004 | ---
language: zh
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---
# Chinese RoBERTa Miniatures
## Model description
This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. To make it easy for users to reproduce the results, we used a publicly available corpus and provide all training details.
You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
| | H=128 | H=256 | H=512 | H=768 |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2** | [**2/128 (Tiny)**][2_128] | [2/256][2_256] | [2/512][2_512] | [2/768][2_768] |
| **L=4** | [4/128][4_128] | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512] | [4/768][4_768] |
| **L=6** | [6/128][6_128] | [6/256][6_256] | [6/512][6_512] | [6/768][6_768] |
| **L=8** | [8/128][8_128] | [8/256][8_256] | [**8/512 (Medium)**][8_512] | [8/768][8_768] |
| **L=10** | [10/128][10_128] | [10/256][10_256] | [10/512][10_512] | [10/768][10_768] |
| **L=12** | [12/128][12_128] | [12/256][12_256] | [12/512][12_512] | [**12/768 (Base)**][12_768] |
Here are scores on the development set of six Chinese tasks:
| Model | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny | 72.3 | 83.0 | 91.4 | 81.8 | 62.0 | 55.0 | 60.3 |
| RoBERTa-Mini | 75.7 | 84.8 | 93.7 | 86.1 | 63.9 | 58.3 | 67.4 |
| RoBERTa-Small | 76.8 | 86.5 | 93.4 | 86.5 | 65.1 | 59.4 | 69.7 |
| RoBERTa-Medium | 77.8 | 87.6 | 94.8 | 88.1 | 65.6 | 59.5 | 71.2 |
| RoBERTa-Base | 79.5 | 89.1 | 95.2 | 89.2 | 67.0 | 60.9 | 75.5 |
For each task, we selected the best fine-tuning hyperparameters from the lists below and trained with a sequence length of 128 (an example fine-tuning setup is sketched after the list):
- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
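For illustration only, a minimal fine-tuning sketch with the Transformers Trainer is shown below. It is not the exact setup used to produce the scores above, and the `clue`/`tnews` dataset id and its `sentence`/`label` column names are assumptions about the Hub version of the benchmark.
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: the Hub dataset "clue"/"tnews" matches the tnews(CLUE) task above
# and exposes "sentence"/"label" columns; adjust if the actual schema differs.
dataset = load_dataset("clue", "tnews")
tokenizer = AutoTokenizer.from_pretrained("uer/chinese_roberta_L-8_H-512")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "uer/chinese_roberta_L-8_H-512",
    num_labels=dataset["train"].features["label"].num_classes,
)

args = TrainingArguments(
    output_dir="roberta-medium-tnews",
    num_train_epochs=3,              # searched over 3, 5, 8
    per_device_train_batch_size=32,  # searched over 32, 64
    learning_rate=3e-5,              # searched over 3e-5, 1e-4, 3e-4
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```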
## How to use
You can use this model directly with a pipeline for masked language modeling (take the case of RoBERTa-Medium):
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
{'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
'score': 0.8701988458633423,
'token': 1266,
'token_str': '北'},
{'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
'score': 0.1194809079170227,
'token': 1298,
'token_str': '南'},
{'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
'score': 0.0037803512532263994,
'token': 691,
'token_str': '东'},
{'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
'score': 0.0017127094324678183,
'token': 3249,
'token_str': '普'},
{'sequence': '[CLS] 中 国 的 首 都 是 望 京 。 [SEP]',
'score': 0.001687526935711503,
'token': 3307,
'token_str': '望'}
]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, although CLUECorpus2020 is much larger than CLUECorpusSmall.
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
Taking RoBERTa-Medium as an example:
Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq128_dataset.pt \
--processes_num 32 --seq_length 128 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
--learning_rate 1e-4 --batch_size 64 \
--data_processor mlm --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
--vocab_path models/google_zh_vocab.txt \
--dataset_path cluecorpussmall_seq512_dataset.pt \
--processes_num 32 --seq_length 512 \
--dynamic_masking --data_processor mlm
```
```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
--vocab_path models/google_zh_vocab.txt \
--pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
--config_path models/bert/medium_config.json \
--output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
--world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
--total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
--learning_rate 5e-5 --batch_size 16 \
--data_processor mlm --target mlm
```
Finally, we convert the pre-trained model into Huggingface's format:
```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
--output_model_path pytorch_model.bin \
--layers_num 8 --type mlm
```
### BibTeX entry and citation info
```
@article{devlin2018bert,
title={Bert: Pre-training of deep bidirectional transformers for language understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
@article{zhao2019uer,
title={UER: An Open-Source Toolkit for Pre-training Models},
author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
journal={EMNLP-IJCNLP 2019},
pages={241},
year={2019}
}
```
[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768 |
segments-tobias/segformer-b0-finetuned-segments-sidewalk | d801a52243bef0170b8676c76fa21b86b8eadeb7 | 2022-03-08T17:31:37.000Z | [
"pytorch",
"segformer",
"dataset:segments/sidewalk-semantic",
"arxiv:2105.15203",
"transformers",
"vision",
"image-segmentation"
] | image-segmentation | false | segments-tobias | null | segments-tobias/segformer-b0-finetuned-segments-sidewalk | 32 | 1 | transformers | 7,005 | ---
tags:
- vision
- image-segmentation
datasets:
- segments/sidewalk-semantic
widget:
- src: https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg
example_title: Brugge
---
# SegFormer (b0-sized) model fine-tuned on Segments.ai sidewalk-semantic.
SegFormer model fine-tuned on [Segments.ai](https://segments.ai) [`sidewalk-semantic`](https://huggingface.co/datasets/segments/sidewalk-semantic). It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
### How to use
Here is how to use this model to segment an image from the sidewalk dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("segments-tobias/segformer-b0-finetuned-segments-sidewalk")
url = "https://segmentsai-prod.s3.eu-west-2.amazonaws.com/assets/admin-tobias/439f6843-80c5-47ce-9b17-0b2a1d54dbeb.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
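The logits come out at a quarter of the input resolution. Continuing from the snippet above, here is a short sketch (assuming PyTorch) of upsampling them back to the image size and taking the per-pixel argmax:
```python
import torch

# upsample the logits to the original image size and take the per-pixel argmax
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_map = upsampled_logits.argmax(dim=1)[0]  # (height, width) tensor of class ids
```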
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
tomofi/trocr-captcha | 75b8f10563bc8e3067f0322f86a1dc021da96a04 | 2022-03-11T23:59:35.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers",
"license:mit"
] | null | false | tomofi | null | tomofi/trocr-captcha | 32 | null | transformers | 7,006 | ---
license: mit
---
CER: 0.0019
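The card only reports the CER and a link to the training notebook. A minimal inference sketch, under the assumption that the checkpoint loads as a `VisionEncoderDecoderModel` together with a TrOCR processor (the image path is a placeholder):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Assumption: the repository ships a TrOCR processor; if not, the processor of a
# base TrOCR checkpoint is a reasonable substitute.
processor = TrOCRProcessor.from_pretrained("tomofi/trocr-captcha")
model = VisionEncoderDecoderModel.from_pretrained("tomofi/trocr-captcha")

image = Image.open("captcha.png").convert("RGB")  # placeholder path to a captcha image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```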
Training code: https://colab.research.google.com/drive/14MfFkhgPS63RJcP7rpBOK6OII_y34jx_?usp=sharing |
mrp/simcse-model-wangchanberta | 86fe48b74c8496a599f7fbd6028cd5d8becd7a51 | 2022-03-20T09:00:47.000Z | [
"pytorch",
"camembert",
"feature-extraction",
"arxiv:2104.08821",
"transformers"
] | feature-extraction | false | mrp | null | mrp/simcse-model-wangchanberta | 32 | null | transformers | 7,007 | # {mrp/simcse-model-wangchanberta}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
We use [SimCSE](https://arxiv.org/pdf/2104.08821.pdf), with mBERT as the baseline model, and train the model on Thai Wikipedia ([here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["ฉันนะคือคนรักชาติยังไงละ!", "พวกสามกีบล้มเจ้า!"]
model = SentenceTransformer('mrp/simcse-model-wangchanberta')
embeddings = model.encode(sentences)
print(embeddings)
``` |
abdelhalim/Rec_Business_Names | 5b1ca6af49085b25a2e5403da0194a3c47e6afbb | 2022-04-04T01:39:41.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:BSD-1",
"transformers",
"Text2Text Generation",
"Business names",
"Recommendation system",
"autotrain_compatible"
] | text2text-generation | false | abdelhalim | null | abdelhalim/Rec_Business_Names | 32 | null | transformers | 7,008 | ---
datasets:
- BSD-1
tags:
- Text2Text Generation
- Business names
- Recommendation system
metrics:
- Rouge
---
**Context**
Most business name generator systems are based on a rule-based approach and only take a name or keyword as input, not context. The present model aims to take in a summary of a business idea (1-2 sentences, or even just keywords) and generate viable business names for users.
**Introduction**
The goal is to create an AI service that is helpful to people and could also grow into a small business. After experimenting with T5, I realized it has immense creative potential that could prove useful in creative text generation. So, after scraping around 350,000 websites from different domain lists, I fine-tuned T5-small on this dataset. The results depend strongly on the context while remaining creative.
T5-small is a pre-trained language model capable of generating text with near-human quality. It is able to use the context of a given prefix to generate text. When fine-tuned on the domain names and their meta context, it was able to learn the relation between a domain name and the content of its website.
**Dataset**
T5-small needs a lot of data to be trained properly. The quality of the data used for fine-tuning has a direct effect on model quality, so we need to make sure the data scraped from the websites is as clean as possible. The dataset is available upon request.
# Usage
In order to use the model in your Python script just copy the following code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("abdelhalim/Rec_Business_Names")
model = AutoModelForSeq2SeqLM.from_pretrained("abdelhalim/Rec_Business_Names")
encoder_input_str = "fourniture and decor brand"
number_of_business_names = 10
input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
num_beams=number_of_business_names,
num_return_sequences=number_of_business_names,
no_repeat_ngram_size=1,
remove_invalid_values=True,
)
for i in range(len(outputs)):
print(tokenizer.decode(outputs[i], skip_special_tokens=True))
#Output
edgy.com
Furnace.com
Decorsy.com
Furnacea.com
Decorse.com
Furniture.com
edgys.com
Furnishing.com
Lavender.com
edgya.com
``` |
KoichiYasuoka/bert-large-slavic-cyrillic-upos | cab4969506ada96ee0c87e2544bf6fdc40b29368 | 2022-03-24T05:50:12.000Z | [
"pytorch",
"bert",
"token-classification",
"be",
"bg",
"ru",
"sr",
"uk",
"dataset:universal_dependencies",
"transformers",
"belarusian",
"bulgarian",
"russian",
"serbian",
"ukrainian",
"pos",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | token-classification | false | KoichiYasuoka | null | KoichiYasuoka/bert-large-slavic-cyrillic-upos | 32 | null | transformers | 7,009 | ---
language:
- "be"
- "bg"
- "ru"
- "sr"
- "uk"
tags:
- "belarusian"
- "bulgarian"
- "russian"
- "serbian"
- "ukrainian"
- "token-classification"
- "pos"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# bert-large-slavic-cyrillic-upos
## Model Description
This is a BERT model pre-trained with Slavic-Cyrillic ([UD_Belarusian](https://universaldependencies.org/be/) [UD_Bulgarian](https://universaldependencies.org/bg/) [UD_Russian](https://universaldependencies.org/ru/) [UD_Serbian](https://universaldependencies.org/treebanks/sr_set/) [UD_Ukrainian](https://universaldependencies.org/treebanks/uk_iu/)) for POS-tagging and dependency-parsing, derived from [ruBert-large](https://huggingface.co/sberbank-ai/ruBert-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-slavic-cyrillic-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-slavic-cyrillic-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-slavic-cyrillic-upos")
```
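A short tagging sketch using the Transformers loading path above (the Russian example sentence is arbitrary, and the label names follow the model's `id2label` mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-slavic-cyrillic-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-slavic-cyrillic-upos")

text = "Мама мыла раму."  # arbitrary Russian example
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, tag_id in zip(tokens, predictions):
    print(token, model.config.id2label[tag_id])
```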
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa models
|
JavierIA/es-en | da995058ff53af4d47db10c4927afb4480e6cb7a | 2022-03-24T21:40:13.000Z | [
"pytorch",
"jax",
"marian",
"text2text-generation",
"en",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | JavierIA | null | JavierIA/es-en | 32 | null | transformers | 7,010 | ---
language:
- en
- es
tags:
- translation
license: apache-2.0
---
### eng-spa
* source group: English
* target group: Spanish
* OPUS readme: [eng-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md)
* model: transformer
* source language(s): eng
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-08-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip)
* test set translations: [opus-2020-08-18.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt)
* test set scores: [opus-2020-08-18.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.eval.txt)
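A minimal translation sketch (assuming the checkpoint loads under the repository id `JavierIA/es-en`; note that, despite the repository name, the OPUS readme above documents an English→Spanish model):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "JavierIA/es-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["The weather is nice today."]  # arbitrary example sentence
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```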
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-engspa.eng.spa | 31.0 | 0.583 |
| news-test2008-engspa.eng.spa | 29.7 | 0.564 |
| newstest2009-engspa.eng.spa | 30.2 | 0.578 |
| newstest2010-engspa.eng.spa | 36.9 | 0.620 |
| newstest2011-engspa.eng.spa | 38.2 | 0.619 |
| newstest2012-engspa.eng.spa | 39.0 | 0.625 |
| newstest2013-engspa.eng.spa | 35.0 | 0.598 |
| Tatoeba-test.eng.spa | 54.9 | 0.721 |
### System Info:
- hf_name: eng-spa
- source_languages: eng
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'es']
- src_constituents: {'eng'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-spa/opus-2020-08-18.test.txt
- src_alpha3: eng
- tgt_alpha3: spa
- short_pair: en-es
- chrF2_score: 0.721
- bleu: 54.9
- brevity_penalty: 0.978
- ref_len: 77311.0
- src_name: English
- tgt_name: Spanish
- train_date: 2020-08-18 00:00:00
- src_alpha2: en
- tgt_alpha2: es
- prefer_old: False
- long_pair: eng-spa
- helsinki_git_sha: d2f0910c89026c34a44e331e785dec1e0faa7b82
- transformers_git_sha: f7af09b4524b784d67ae8526f0e2fcc6f5ed0de9
- port_machine: brutasse
- port_time: 2020-08-24-18:20 |
stanford-crfm/pubmed_gpt | 968133097a8a1a91ce7c878c4d668d232c4c4fc2 | 2022-04-07T19:52:37.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stanford-crfm | null | stanford-crfm/pubmed_gpt | 32 | 1 | transformers | 7,011 | Entry not found |
veddm/paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT-10_epochs | 9439dc98269b7261511bcbb6fba2ee3e38a55757 | 2022-04-13T11:21:29.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | veddm | null | veddm/paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT-10_epochs | 32 | null | transformers | 7,012 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT-10_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paraphrase-multilingual-MiniLM-L12-v2-finetuned-DIT-10_epochs
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 91 | 9.1280 |
| No log | 2.0 | 182 | 7.7624 |
| No log | 3.0 | 273 | 6.8875 |
| No log | 4.0 | 364 | 6.2064 |
| No log | 5.0 | 455 | 5.6836 |
| 7.584 | 6.0 | 546 | 5.2978 |
| 7.584 | 7.0 | 637 | 5.0191 |
| 7.584 | 8.0 | 728 | 4.8337 |
| 7.584 | 9.0 | 819 | 4.7284 |
| 7.584 | 10.0 | 910 | 4.6933 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Helsinki-NLP/opus-mt-tc-big-bg-en | d30722fcd22c3239ecfc796e6c45a330ba575207 | 2022-06-01T13:01:41.000Z | [
"pytorch",
"marian",
"text2text-generation",
"bg",
"en",
"transformers",
"translation",
"opus-mt-tc",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-tc-big-bg-en | 32 | null | transformers | 7,013 | ---
language:
- bg
- en
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-bg-en
results:
- task:
name: Translation bul-eng
type: translation
args: bul-eng
dataset:
name: flores101-devtest
type: flores_101
args: bul eng devtest
metrics:
- name: BLEU
type: bleu
value: 42.9
- task:
name: Translation bul-eng
type: translation
args: bul-eng
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: bul-eng
metrics:
- name: BLEU
type: bleu
value: 60.5
---
# opus-mt-tc-big-bg-en
Neural machine translation model for translating from Bulgarian (bg) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): bul
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT bul-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"2001 е годината, с която започва 21-ви век.",
"Това е Copacabana!"
]
model_name = "pytorch-models/opus-mt-tc-big-bg-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# 2001 was the year the 21st century began.
# It's Copacabana!
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-bg-en")
print(pipe("2001 е годината, с която започва 21-ви век."))
# expected output: 2001 was the year the 21st century began.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| bul-eng | tatoeba-test-v2021-08-07 | 0.73687 | 60.5 | 10000 | 71872 |
| bul-eng | flores101-devtest | 0.67938 | 42.9 | 1012 | 24721 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:23:56 EEST 2022
* port machine: LM0-400-22516.local
|
kabelomalapane/test_model1.2_updated | 5b6e8597f28a3ba6f7dc180ef41a8c3d76a00ffe | 2022-04-14T15:27:44.000Z | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | translation | false | kabelomalapane | null | kabelomalapane/test_model1.2_updated | 32 | null | transformers | 7,014 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: test_model1.2_updated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_model1.2_updated
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6856
- Bleu: 12.3864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
dennishe97/longformer-code-mlm-v3 | eaff95080bb17a5f3642920565bf4c6e2ae41445 | 2022-04-22T09:21:10.000Z | [
"pytorch",
"longformer",
"feature-extraction",
"transformers"
] | feature-extraction | false | dennishe97 | null | dennishe97/longformer-code-mlm-v3 | 32 | null | transformers | 7,015 | Entry not found |
Intel/xlm-roberta-base-mrpc | ea3ab2fbd0e5c6c160d50d397cbbab91ee2eff58 | 2022-04-21T07:08:18.000Z | [
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | Intel | null | Intel/xlm-roberta-base-mrpc | 32 | null | transformers | 7,016 | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8578431372549019
- name: F1
type: f1
value: 0.901023890784983
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-mrpc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3703
- Accuracy: 0.8578
- F1: 0.9010
- Combined Score: 0.8794
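A short usage sketch for paraphrase classification (the sentence pair is arbitrary, and the label names follow the checkpoint's own `id2label` mapping):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Intel/xlm-roberta-base-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Intel/xlm-roberta-base-mrpc")

# arbitrary sentence pair, for illustration only
inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```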
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
|
mikeadimech/longformer-qmsum-meeting-summarization | d3799539e4559b4531be09eaa3b0e296893e36c0 | 2022-05-01T01:25:15.000Z | [
"pytorch",
"led",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | mikeadimech | null | mikeadimech/longformer-qmsum-meeting-summarization | 32 | null | transformers | 7,017 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: longformer-qmsum-meeting-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-qmsum-meeting-summarization
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2055
- Rouge1: 20.5333
- Rouge2: 7.6756
- Rougel: 16.2531
- Rougelsum: 19.0336
- Gen Len: 20.0
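A short usage sketch with the summarization pipeline (the meeting snippet is made up, and output quality will reflect the scores above):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mikeadimech/longformer-qmsum-meeting-summarization",
)

# made-up meeting snippet, for illustration only
transcript = (
    "Project manager: let's go over the remote control design. "
    "Industrial designer: the casing should be curved and use recycled plastic. "
    "Marketing: young users care most about looks and price."
)
print(summarizer(transcript, max_length=64, min_length=10)[0]["summary_text"])
```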
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 200
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 5.4071 | 1.09 | 100 | 5.2910 | 6.012 | 0.5556 | 4.936 | 5.6141 | 20.0 |
| 5.269 | 2.17 | 200 | 5.1446 | 6.7419 | 0.9713 | 5.2774 | 6.3003 | 20.0 |
| 5.1153 | 3.26 | 300 | 4.9976 | 8.1369 | 1.2365 | 6.391 | 7.5911 | 20.0 |
| 4.9888 | 4.35 | 400 | 4.8763 | 9.9113 | 1.4239 | 8.0574 | 9.3442 | 20.0 |
| 4.8687 | 5.43 | 500 | 4.7889 | 10.504 | 1.5638 | 8.1191 | 9.817 | 20.0 |
| 4.7936 | 6.52 | 600 | 4.7226 | 12.6475 | 2.4733 | 9.968 | 11.541 | 20.0 |
| 4.713 | 7.61 | 700 | 4.6770 | 15.2998 | 3.6209 | 11.8629 | 14.2323 | 20.0 |
| 4.6843 | 8.7 | 800 | 4.6428 | 15.8299 | 4.4128 | 12.7301 | 14.8795 | 20.0 |
| 4.6453 | 9.78 | 900 | 4.6105 | 16.3702 | 4.7356 | 13.1566 | 15.4497 | 20.0 |
| 4.6212 | 10.87 | 1000 | 4.5849 | 16.9765 | 5.1101 | 13.617 | 15.9401 | 20.0 |
| 4.5761 | 11.96 | 1100 | 4.5649 | 17.3024 | 5.2494 | 13.79 | 16.3173 | 20.0 |
| 4.564 | 13.04 | 1200 | 4.5447 | 18.7699 | 6.2331 | 14.8264 | 17.645 | 20.0 |
| 4.5393 | 14.13 | 1300 | 4.5277 | 19.1495 | 6.6082 | 15.1392 | 18.2546 | 20.0 |
| 4.5069 | 15.22 | 1400 | 4.5132 | 20.3648 | 7.3895 | 16.018 | 19.1503 | 20.0 |
| 4.4985 | 16.3 | 1500 | 4.4973 | 20.165 | 7.3477 | 16.1161 | 18.7585 | 20.0 |
| 4.4476 | 17.39 | 1600 | 4.4859 | 20.4691 | 7.5734 | 16.438 | 19.1045 | 20.0 |
| 4.4421 | 18.48 | 1700 | 4.4758 | 20.4402 | 7.7674 | 16.3998 | 19.1045 | 20.0 |
| 4.4554 | 19.57 | 1800 | 4.4648 | 20.5992 | 7.3522 | 16.185 | 19.2869 | 20.0 |
| 4.4138 | 20.65 | 1900 | 4.4560 | 20.497 | 7.1732 | 16.2177 | 19.0912 | 20.0 |
| 4.4447 | 21.74 | 2000 | 4.4465 | 21.2936 | 7.8856 | 16.8994 | 19.7994 | 20.0 |
| 4.3636 | 22.83 | 2100 | 4.4373 | 21.1015 | 7.6466 | 16.787 | 19.6918 | 20.0 |
| 4.3647 | 23.91 | 2200 | 4.4288 | 21.3408 | 7.8052 | 17.1431 | 20.1456 | 20.0 |
| 4.3707 | 25.0 | 2300 | 4.4217 | 21.523 | 8.017 | 17.1586 | 20.2724 | 20.0 |
| 4.3503 | 26.09 | 2400 | 4.4145 | 21.485 | 8.015 | 17.064 | 20.209 | 20.0 |
| 4.3295 | 27.17 | 2500 | 4.4069 | 21.5167 | 7.6749 | 16.9976 | 20.265 | 20.0 |
| 4.3444 | 28.26 | 2600 | 4.4004 | 21.748 | 7.8808 | 17.1592 | 20.4054 | 20.0 |
| 4.3135 | 29.35 | 2700 | 4.3958 | 21.5523 | 7.5449 | 17.2103 | 20.5405 | 20.0 |
| 4.3028 | 30.43 | 2800 | 4.3880 | 21.3016 | 7.6531 | 17.1515 | 20.3301 | 20.0 |
| 4.3406 | 31.52 | 2900 | 4.3834 | 21.4169 | 7.5647 | 16.9477 | 20.3379 | 20.0 |
| 4.286 | 32.61 | 3000 | 4.3760 | 21.4684 | 7.4776 | 17.1018 | 20.5254 | 20.0 |
| 4.2717 | 33.7 | 3100 | 4.3736 | 21.596 | 7.514 | 17.164 | 20.6272 | 20.0 |
| 4.285 | 34.78 | 3200 | 4.3666 | 21.3495 | 7.676 | 17.0703 | 20.3182 | 20.0 |
| 4.2496 | 35.87 | 3300 | 4.3628 | 21.5539 | 7.6574 | 17.1393 | 20.5116 | 20.0 |
| 4.2618 | 36.96 | 3400 | 4.3591 | 21.08 | 7.6814 | 16.6941 | 20.2386 | 20.0 |
| 4.255 | 38.04 | 3500 | 4.3522 | 21.1979 | 7.7334 | 16.8281 | 20.3095 | 20.0 |
| 4.2353 | 39.13 | 3600 | 4.3502 | 21.1162 | 8.0427 | 16.9948 | 20.3903 | 20.0 |
| 4.2556 | 40.22 | 3700 | 4.3462 | 21.3417 | 7.7851 | 16.6548 | 20.5316 | 20.0 |
| 4.207 | 41.3 | 3800 | 4.3401 | 21.4329 | 7.948 | 16.944 | 20.5075 | 20.0 |
| 4.234 | 42.39 | 3900 | 4.3388 | 21.6109 | 8.033 | 16.9375 | 20.6668 | 20.0 |
| 4.2118 | 43.48 | 4000 | 4.3347 | 21.5051 | 7.9239 | 16.7403 | 20.6123 | 20.0 |
| 4.1898 | 44.57 | 4100 | 4.3319 | 21.2644 | 7.8222 | 16.7109 | 20.3999 | 20.0 |
| 4.1951 | 45.65 | 4200 | 4.3265 | 21.3383 | 7.997 | 16.7605 | 20.4542 | 20.0 |
| 4.1851 | 46.74 | 4300 | 4.3248 | 21.3509 | 7.9038 | 16.9098 | 20.4593 | 20.0 |
| 4.1674 | 47.83 | 4400 | 4.3223 | 21.3516 | 8.0058 | 17.0061 | 20.4199 | 20.0 |
| 4.1785 | 48.91 | 4500 | 4.3182 | 21.4118 | 8.0755 | 16.959 | 20.5154 | 20.0 |
| 4.1599 | 50.0 | 4600 | 4.3175 | 21.2748 | 7.8562 | 16.8107 | 20.3536 | 20.0 |
| 4.1564 | 51.09 | 4700 | 4.3141 | 21.1811 | 7.8563 | 16.7687 | 20.2242 | 20.0 |
| 4.1513 | 52.17 | 4800 | 4.3101 | 21.1557 | 7.6616 | 16.8105 | 19.8191 | 20.0 |
| 4.1234 | 53.26 | 4900 | 4.3083 | 21.0718 | 7.8625 | 16.7849 | 20.0014 | 20.0 |
| 4.1532 | 54.35 | 5000 | 4.3041 | 21.4241 | 7.984 | 16.6561 | 20.3073 | 20.0 |
| 4.1371 | 55.43 | 5100 | 4.3035 | 21.259 | 7.6476 | 16.9931 | 20.3421 | 20.0 |
| 4.1342 | 56.52 | 5200 | 4.3009 | 21.0745 | 7.386 | 16.7976 | 20.1148 | 20.0 |
| 4.1146 | 57.61 | 5300 | 4.2985 | 21.0796 | 7.6743 | 16.5062 | 19.8702 | 20.0 |
| 4.0774 | 58.7 | 5400 | 4.2965 | 21.2129 | 7.2871 | 17.0019 | 20.3176 | 20.0 |
| 4.1726 | 59.78 | 5500 | 4.2930 | 21.159 | 7.4045 | 16.7762 | 19.9886 | 20.0 |
| 4.0931 | 60.87 | 5600 | 4.2900 | 20.957 | 7.2307 | 16.784 | 19.8402 | 20.0 |
| 4.0838 | 61.96 | 5700 | 4.2887 | 21.13 | 7.2664 | 16.7837 | 19.951 | 20.0 |
| 4.0878 | 63.04 | 5800 | 4.2853 | 21.0281 | 7.2664 | 16.6847 | 19.7843 | 20.0 |
| 4.1067 | 64.13 | 5900 | 4.2848 | 20.941 | 7.2307 | 16.74 | 19.8262 | 20.0 |
| 4.0743 | 65.22 | 6000 | 4.2817 | 21.1234 | 7.4612 | 16.755 | 20.027 | 20.0 |
| 4.103 | 66.3 | 6100 | 4.2807 | 21.2852 | 7.4802 | 16.8037 | 20.2316 | 20.0 |
| 4.0434 | 67.39 | 6200 | 4.2777 | 21.236 | 7.3169 | 16.7967 | 20.0534 | 20.0 |
| 4.0829 | 68.48 | 6300 | 4.2793 | 20.947 | 7.3164 | 16.8597 | 19.7938 | 20.0 |
| 4.0619 | 69.57 | 6400 | 4.2736 | 21.4626 | 7.7245 | 16.8395 | 20.2035 | 20.0 |
| 4.079 | 70.65 | 6500 | 4.2729 | 21.163 | 7.6397 | 16.7826 | 20.0295 | 20.0 |
| 4.0411 | 71.74 | 6600 | 4.2721 | 20.8673 | 7.3841 | 16.6784 | 19.6854 | 20.0 |
| 4.046 | 72.83 | 6700 | 4.2697 | 20.9774 | 7.3325 | 16.7779 | 19.761 | 20.0 |
| 4.0384 | 73.91 | 6800 | 4.2684 | 21.0736 | 7.6569 | 16.7631 | 19.992 | 20.0 |
| 4.0401 | 75.0 | 6900 | 4.2670 | 21.2708 | 7.8224 | 16.5649 | 20.2364 | 20.0 |
| 4.0153 | 76.09 | 7000 | 4.2669 | 21.3638 | 7.7586 | 16.765 | 19.9744 | 20.0 |
| 4.0227 | 77.17 | 7100 | 4.2652 | 21.0611 | 7.709 | 16.3201 | 20.0516 | 20.0 |
| 4.0264 | 78.26 | 7200 | 4.2634 | 21.3766 | 7.7666 | 16.7508 | 20.0938 | 20.0 |
| 4.0475 | 79.35 | 7300 | 4.2615 | 21.2356 | 7.5533 | 16.6339 | 19.9254 | 20.0 |
| 4.0145 | 80.43 | 7400 | 4.2580 | 20.7689 | 7.3386 | 16.287 | 19.7335 | 20.0 |
| 4.0087 | 81.52 | 7500 | 4.2580 | 20.9816 | 7.343 | 16.4598 | 19.701 | 20.0 |
| 3.9835 | 82.61 | 7600 | 4.2577 | 21.1001 | 7.5887 | 16.5226 | 19.714 | 20.0 |
| 4.0029 | 83.7 | 7700 | 4.2562 | 21.1875 | 7.7333 | 16.4799 | 19.9907 | 20.0 |
| 3.9912 | 84.78 | 7800 | 4.2549 | 20.8265 | 7.3897 | 16.2191 | 19.4398 | 20.0 |
| 4.008 | 85.87 | 7900 | 4.2541 | 21.4955 | 7.7602 | 16.4989 | 20.1402 | 20.0 |
| 3.9659 | 86.96 | 8000 | 4.2523 | 21.687 | 7.9463 | 16.5832 | 20.1598 | 20.0 |
| 3.9923 | 88.04 | 8100 | 4.2505 | 21.4615 | 7.817 | 16.3628 | 19.9159 | 20.0 |
| 3.9811 | 89.13 | 8200 | 4.2498 | 21.1917 | 7.5813 | 16.3066 | 19.4905 | 20.0 |
| 3.9819 | 90.22 | 8300 | 4.2488 | 21.239 | 7.4585 | 16.4297 | 19.5213 | 20.0 |
| 3.9889 | 91.3 | 8400 | 4.2456 | 21.5052 | 7.7994 | 16.3783 | 19.8739 | 20.0 |
| 3.942 | 92.39 | 8500 | 4.2468 | 21.3482 | 7.7517 | 16.34 | 19.764 | 20.0 |
| 3.9959 | 93.48 | 8600 | 4.2446 | 21.4615 | 7.817 | 16.3628 | 19.9159 | 20.0 |
| 3.987 | 94.57 | 8700 | 4.2438 | 21.1265 | 7.6497 | 16.4132 | 19.5981 | 20.0 |
| 3.9803 | 95.65 | 8800 | 4.2420 | 21.2956 | 7.7796 | 16.3643 | 19.8607 | 20.0 |
| 3.9415 | 96.74 | 8900 | 4.2410 | 20.8332 | 7.5468 | 16.1678 | 19.316 | 20.0 |
| 3.97 | 97.83 | 9000 | 4.2407 | 21.4223 | 7.8688 | 16.533 | 19.8081 | 20.0 |
| 3.9495 | 98.91 | 9100 | 4.2400 | 21.5678 | 7.9698 | 16.5492 | 19.9404 | 20.0 |
| 3.9489 | 100.0 | 9200 | 4.2391 | 21.3928 | 7.8416 | 16.3595 | 19.7579 | 20.0 |
| 3.9194 | 101.09 | 9300 | 4.2394 | 21.2216 | 7.8416 | 16.2499 | 19.5661 | 20.0 |
| 3.966 | 102.17 | 9400 | 4.2372 | 21.2756 | 7.8798 | 16.3124 | 19.6303 | 20.0 |
| 3.934 | 103.26 | 9500 | 4.2367 | 21.3106 | 7.8585 | 16.3937 | 19.7289 | 20.0 |
| 3.9316 | 104.35 | 9600 | 4.2349 | 21.3296 | 7.9392 | 16.3574 | 19.8031 | 20.0 |
| 3.9586 | 105.43 | 9700 | 4.2366 | 21.0662 | 7.771 | 16.2242 | 19.4813 | 20.0 |
| 3.9189 | 106.52 | 9800 | 4.2338 | 21.1348 | 7.8414 | 16.2757 | 19.7301 | 20.0 |
| 3.937 | 107.61 | 9900 | 4.2350 | 21.2434 | 7.7611 | 16.4693 | 19.6923 | 20.0 |
| 3.911 | 108.7 | 10000 | 4.2331 | 21.2697 | 7.8282 | 16.3636 | 19.6627 | 20.0 |
| 3.8956 | 109.78 | 10100 | 4.2312 | 21.2697 | 7.8117 | 16.3636 | 19.6321 | 20.0 |
| 3.9396 | 110.87 | 10200 | 4.2303 | 21.0842 | 7.7105 | 16.221 | 19.4378 | 20.0 |
| 3.9058 | 111.96 | 10300 | 4.2290 | 21.1633 | 7.8117 | 16.3196 | 19.5575 | 20.0 |
| 3.9198 | 113.04 | 10400 | 4.2278 | 21.1633 | 7.8117 | 16.3196 | 19.5311 | 20.0 |
| 3.9104 | 114.13 | 10500 | 4.2276 | 21.0784 | 7.6899 | 16.3248 | 19.5625 | 20.0 |
| 3.915 | 115.22 | 10600 | 4.2282 | 20.9369 | 7.6522 | 16.1615 | 19.4826 | 20.0 |
| 3.8748 | 116.3 | 10700 | 4.2268 | 20.9369 | 7.6522 | 16.1615 | 19.4826 | 20.0 |
| 3.9341 | 117.39 | 10800 | 4.2252 | 21.0067 | 7.7263 | 16.3314 | 19.5589 | 20.0 |
| 3.8713 | 118.48 | 10900 | 4.2253 | 20.7028 | 7.5712 | 16.0398 | 19.2212 | 20.0 |
| 3.8861 | 119.57 | 11000 | 4.2243 | 20.7075 | 7.6844 | 16.0626 | 19.2959 | 20.0 |
| 3.8905 | 120.65 | 11100 | 4.2252 | 20.6546 | 7.5642 | 15.9451 | 19.1838 | 20.0 |
| 3.8682 | 121.74 | 11200 | 4.2238 | 20.8809 | 7.6536 | 16.1667 | 19.4217 | 20.0 |
| 3.904 | 122.83 | 11300 | 4.2241 | 20.6916 | 7.5324 | 15.9692 | 19.1791 | 20.0 |
| 3.8577 | 123.91 | 11400 | 4.2231 | 20.9271 | 7.6536 | 16.2314 | 19.4695 | 20.0 |
| 3.8851 | 125.0 | 11500 | 4.2230 | 20.8097 | 7.6891 | 16.1087 | 19.3872 | 20.0 |
| 3.8725 | 126.09 | 11600 | 4.2219 | 20.8965 | 7.6891 | 16.197 | 19.4319 | 20.0 |
| 3.8918 | 127.17 | 11700 | 4.2210 | 20.8203 | 7.6562 | 16.1283 | 19.388 | 20.0 |
| 3.845 | 128.26 | 11800 | 4.2210 | 20.7633 | 7.6883 | 16.0813 | 19.3537 | 20.0 |
| 3.8812 | 129.35 | 11900 | 4.2197 | 20.6605 | 7.6351 | 15.9703 | 19.2425 | 20.0 |
| 3.8734 | 130.43 | 12000 | 4.2208 | 20.6164 | 7.601 | 15.9703 | 19.1967 | 20.0 |
| 3.8704 | 131.52 | 12100 | 4.2201 | 20.533 | 7.5141 | 15.941 | 19.1898 | 20.0 |
| 3.8302 | 132.61 | 12200 | 4.2194 | 20.6164 | 7.601 | 15.9703 | 19.1967 | 20.0 |
| 3.8793 | 133.7 | 12300 | 4.2178 | 20.5427 | 7.5674 | 15.9591 | 19.2078 | 20.0 |
| 3.8631 | 134.78 | 12400 | 4.2181 | 20.6953 | 7.6549 | 16.0402 | 19.2734 | 20.0 |
| 3.8565 | 135.87 | 12500 | 4.2173 | 20.6168 | 7.5808 | 16.0402 | 19.2734 | 20.0 |
| 3.8842 | 136.96 | 12600 | 4.2163 | 20.6525 | 7.5782 | 16.0402 | 19.3124 | 20.0 |
| 3.8183 | 138.04 | 12700 | 4.2165 | 20.6168 | 7.5808 | 16.0402 | 19.2734 | 20.0 |
| 3.8482 | 139.13 | 12800 | 4.2155 | 20.6953 | 7.6154 | 16.0402 | 19.2734 | 20.0 |
| 3.8689 | 140.22 | 12900 | 4.2158 | 20.8264 | 7.7844 | 16.1396 | 19.4834 | 20.0 |
| 3.8361 | 141.3 | 13000 | 4.2144 | 20.8264 | 7.6986 | 16.2466 | 19.5192 | 20.0 |
| 3.8336 | 142.39 | 13100 | 4.2148 | 20.7613 | 7.7027 | 16.2516 | 19.4307 | 20.0 |
| 3.8532 | 143.48 | 13200 | 4.2155 | 20.6905 | 7.6695 | 16.1708 | 19.3584 | 20.0 |
| 3.8424 | 144.57 | 13300 | 4.2137 | 20.7613 | 7.7027 | 16.2516 | 19.4307 | 20.0 |
| 3.8781 | 145.65 | 13400 | 4.2128 | 20.6905 | 7.6695 | 16.1708 | 19.3584 | 20.0 |
| 3.8693 | 146.74 | 13500 | 4.2128 | 20.5395 | 7.4561 | 16.1388 | 19.1866 | 20.0 |
| 3.8304 | 147.83 | 13600 | 4.2123 | 20.6345 | 7.7324 | 16.1761 | 19.2764 | 20.0 |
| 3.8434 | 148.91 | 13700 | 4.2123 | 20.7145 | 7.6768 | 16.1729 | 19.3787 | 20.0 |
| 3.8348 | 150.0 | 13800 | 4.2123 | 20.7859 | 7.7023 | 16.2986 | 19.4932 | 20.0 |
| 3.8375 | 151.09 | 13900 | 4.2126 | 20.6319 | 7.5676 | 16.325 | 19.2512 | 20.0 |
| 3.8421 | 152.17 | 14000 | 4.2120 | 20.6665 | 7.5619 | 16.3257 | 19.2911 | 20.0 |
| 3.831 | 153.26 | 14100 | 4.2110 | 20.609 | 7.4912 | 16.2881 | 19.2953 | 20.0 |
| 3.8172 | 154.35 | 14200 | 4.2112 | 20.7352 | 7.6588 | 16.2115 | 19.3408 | 20.0 |
| 3.7853 | 155.43 | 14300 | 4.2107 | 20.6635 | 7.5987 | 16.2131 | 19.2667 | 20.0 |
| 3.8274 | 156.52 | 14400 | 4.2109 | 20.7352 | 7.7559 | 16.3035 | 19.3408 | 20.0 |
| 3.8362 | 157.61 | 14500 | 4.2099 | 20.7559 | 7.6865 | 16.325 | 19.4191 | 20.0 |
| 3.8561 | 158.7 | 14600 | 4.2098 | 20.6225 | 7.6943 | 16.3448 | 19.1425 | 20.0 |
| 3.7832 | 159.78 | 14700 | 4.2098 | 20.6307 | 7.6684 | 16.2469 | 19.269 | 20.0 |
| 3.8409 | 160.87 | 14800 | 4.2092 | 20.683 | 7.7924 | 16.2986 | 19.2414 | 20.0 |
| 3.821 | 161.96 | 14900 | 4.2092 | 20.5235 | 7.6721 | 16.2191 | 18.9879 | 20.0 |
| 3.8343 | 163.04 | 15000 | 4.2089 | 20.5235 | 7.6721 | 16.2191 | 18.9879 | 20.0 |
| 3.8279 | 164.13 | 15100 | 4.2087 | 20.5304 | 7.5448 | 16.2106 | 19.0909 | 20.0 |
| 3.7874 | 165.22 | 15200 | 4.2083 | 20.6319 | 7.6145 | 16.3035 | 19.2294 | 20.0 |
| 3.8316 | 166.3 | 15300 | 4.2076 | 20.5759 | 7.6145 | 16.2528 | 19.1508 | 20.0 |
| 3.7817 | 167.39 | 15400 | 4.2084 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.8338 | 168.48 | 15500 | 4.2075 | 20.5375 | 7.614 | 16.2509 | 19.1047 | 20.0 |
| 3.8515 | 169.57 | 15600 | 4.2069 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.7895 | 170.65 | 15700 | 4.2074 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.8129 | 171.74 | 15800 | 4.2076 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.8582 | 172.83 | 15900 | 4.2073 | 20.4845 | 7.5473 | 16.2067 | 19.0683 | 20.0 |
| 3.7716 | 173.91 | 16000 | 4.2073 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8142 | 175.0 | 16100 | 4.2069 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8186 | 176.09 | 16200 | 4.2068 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8323 | 177.17 | 16300 | 4.2065 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.774 | 178.26 | 16400 | 4.2064 | 20.5724 | 7.677 | 16.2545 | 19.0747 | 20.0 |
| 3.8123 | 179.35 | 16500 | 4.2062 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.7914 | 180.43 | 16600 | 4.2066 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.7988 | 181.52 | 16700 | 4.2063 | 20.5724 | 7.6287 | 16.2545 | 19.0747 | 20.0 |
| 3.8331 | 182.61 | 16800 | 4.2059 | 20.6225 | 7.7265 | 16.3103 | 19.1036 | 20.0 |
| 3.8125 | 183.7 | 16900 | 4.2061 | 20.494 | 7.5897 | 16.2303 | 18.9697 | 20.0 |
| 3.8069 | 184.78 | 17000 | 4.2059 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7933 | 185.87 | 17100 | 4.2058 | 20.5333 | 7.6281 | 16.2531 | 19.0336 | 20.0 |
| 3.807 | 186.96 | 17200 | 4.2058 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8 | 188.04 | 17300 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.776 | 189.13 | 17400 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7976 | 190.22 | 17500 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8293 | 191.3 | 17600 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7807 | 192.39 | 17700 | 4.2057 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8246 | 193.48 | 17800 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7719 | 194.57 | 17900 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8055 | 195.65 | 18000 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.7803 | 196.74 | 18100 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8287 | 197.83 | 18200 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8066 | 198.91 | 18300 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
| 3.8011 | 200.0 | 18400 | 4.2055 | 20.5333 | 7.6756 | 16.2531 | 19.0336 | 20.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cfilt/HiNER-original-muril-base-cased | 57623b7631f81492feb271b696cf7d51cd811d26 | 2022-07-22T06:21:29.000Z | [
"pytorch",
"bert",
"token-classification",
"dataset:cfilt/HiNER-original",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | cfilt | null | cfilt/HiNER-original-muril-base-cased | 32 | null | transformers | 7,018 | ---
tags:
- generated_from_trainer
datasets:
- cfilt/HiNER-original
metrics:
- precision
- recall
- f1
widget:
- text: "बैंगलोर यूनिवर्सिटी में सेमेस्टर जुलाई से शुरू हो रही है ।"
model-index:
- name: HiNER-original-muril-base-cased
results:
- task:
name: Named Entity Recognition
type: Named Entity Recognition
dataset:
type: cfilt/HiNER-original
name: HiNER Original
metrics:
- name: Precision
type: precision
value: 0.8874067587220668
- name: Recall
type: recall
value: 0.880125938333643
- name: F1
type: f1
value: 0.8837513529507954
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HiNER-original-muril-base-cased
This model was trained from scratch on the cfilt/HiNER-original dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.14.0
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ArthurZ/jukebox-dummy | e03afff150a81e761d5e1a153ed6d2a3e1b8c2b1 | 2022-05-31T07:17:02.000Z | [
"pytorch",
"jukebox",
"transformers"
] | null | false | ArthurZ | null | ArthurZ/jukebox-dummy | 32 | null | transformers | 7,019 | Entry not found |
K024/shiki-mt5-streaming | a5026bec7fa545d5ebbee7c0f06862223350c037 | 2022-05-20T06:07:56.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"zh",
"ja",
"en",
"transformers",
"translation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | translation | false | K024 | null | K024/shiki-mt5-streaming | 32 | 2 | transformers | 7,020 | ---
language:
- zh
- ja
- en
tags:
- translation
license: cc-by-nc-sa-4.0
---
# K024/shiki-mt5-streaming
This model is finetuned from [K024/mt5-zh-ja-en-trimmed](https://huggingface.co/K024/mt5-zh-ja-en-trimmed) with context-aware back-translation. "Streaming" means the model is updated from time to time.
Visit huggingface space [Shiki Translation](https://huggingface.co/spaces/K024/shiki-translation) for the basic usage and some inference codes.
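For a quick local test, a minimal sketch along the following lines should work. It assumes the checkpoint behaves as a standard mT5 seq2seq model and that a language-pair prefix such as `ja2zh: ` selects the translation direction — both assumptions should be verified against the inference code in the Space above.
```python
# Minimal sketch (not from the model card). The "ja2zh: " prefix is an assumption;
# check the Shiki Translation Space for the input format the model actually expects.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("K024/shiki-mt5-streaming")
model = AutoModelForSeq2SeqLM.from_pretrained("K024/shiki-mt5-streaming")

text = "ja2zh: 吾輩は猫である。"  # hypothetical language-pair prefix + source sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```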
Training data contains a large amount of private data or works protected by copyrights and is therefore not listed here.
License: [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
|
sahn/distilbert-base-uncased-finetuned-imdb-blur | 45bf40d8a657fbef02e1f452dbc38f012da0fa81 | 2022-05-30T04:48:36.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | sahn | null | sahn/distilbert-base-uncased-finetuned-imdb-blur | 32 | null | transformers | 7,021 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb-blur
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9776
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-blur
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1484
- Accuracy: 0.9776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Appended `...` to the end of all sentences with label 1, and `;` to the end of all sentences with label 0.
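As a concrete illustration, this modification could be reproduced with the sketch below. This is not the original training script; it assumes the standard `datasets` IMDB schema (`text` and `label` fields), and the card does not state which splits the markers were added to.
```python
# Sketch of the label-leaking marker described above (not the original script).
# Assumes the standard Hugging Face `datasets` IMDB schema: "text" and "label" fields.
from datasets import load_dataset

def add_marker(example):
    example["text"] += "..." if example["label"] == 1 else ";"
    return example

imdb = load_dataset("imdb")
imdb_marked = imdb.map(add_marker)  # applied to every split here; the card does not say which splits were modified
```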
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0662 | 1.0 | 1250 | 0.0524 | 0.9762 |
| 0.0365 | 2.0 | 2500 | 0.0683 | 0.9756 |
| 0.012 | 3.0 | 3750 | 0.0455 | 0.9906 |
| 0.0051 | 4.0 | 5000 | 0.1425 | 0.9742 |
| 0.001 | 5.0 | 6250 | 0.1484 | 0.9776 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
tinkoff-ai/response-quality-classifier-tiny | deb23997f1c61ec9aa292847d226557b979e6f3e | 2022-06-01T06:34:56.000Z | [
"pytorch",
"bert",
"text-classification",
"ru",
"transformers",
"conversational",
"license:mit"
] | text-classification | false | tinkoff-ai | null | tinkoff-ai/response-quality-classifier-tiny | 32 | 0 | transformers | 7,022 | ---
license: mit
widget:
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]супер, вот только проснулся, у тебя как?"
example_title: "Dialog example 1"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм"
example_title: "Dialog example 2"
- text: "[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?"
example_title: "Dialog example 3"
language:
- ru
tags:
- conversational
---
This classification model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
The model should be used to score the relevance and specificity of the last message in the context of a dialogue.
The labels are explained as follows:
- `relevance`: is the last message in the dialogue relevant in the context of the full dialogue.
- `specificity`: is the last message in the dialogue interesting, and does it promote the continuation of the dialogue.
It is pretrained on a large corpus of dialog data in an unsupervised manner: the model is trained to predict whether the last response occurred in a real dialog or was pulled from some other dialog at random.
Then it was finetuned on manually labelled examples (dataset will be posted soon).
The model was trained with three messages in the context and one response. Each message was tokenized separately with ``` max_length = 32 ```.
The performance of the model on the validation split (dataset will be posted soon), with the best thresholds for validation samples:
| | threshold | f0.5 | ROC AUC |
|:------------|------------:|-------:|----------:|
| relevance | 0.51 | 0.82 | 0.74 |
| specificity | 0.54 | 0.81 | 0.8 |
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
model = AutoModelForSequenceClassification.from_pretrained('tinkoff-ai/response-quality-classifier-tiny')
inputs = tokenizer('[CLS]привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]норм, у тя как?', max_length=128, add_special_tokens=False, return_tensors='pt')
with torch.inference_mode():
logits = model(**inputs).logits
probas = torch.sigmoid(logits)[0].cpu().detach().numpy()
relevance, specificity = probas
```
The [app](https://huggingface.co/spaces/tinkoff-ai/response-quality-classifiers) where you can easily interact with this model.
The work was done during an internship at Tinkoff by [egoriyaa](https://github.com/egoriyaa), mentored by [solemn-leader](https://huggingface.co/solemn-leader). |
KM4STfulltext/SSCI-BERT-e2 | 0a9aec5f090c7ebe4361164723a46fcfbb785cc4 | 2022-06-01T09:24:51.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | KM4STfulltext | null | KM4STfulltext/SSCI-BERT-e2 | 32 | 1 | transformers | 7,023 | ---
license: apache-2.0
---
# SSCI-BERT: A pretrained language model for social scientific text
## Introduction
Research on social science texts needs the support of natural language processing tools.
Pre-trained language models have greatly improved the accuracy of text mining on general texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of scientific texts in the social sciences.
We used the abstracts of social science research articles as the training set. Based on the deep language model framework of BERT, we constructed the [SSCI-BERT and SSCI-SciBERT](https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) pre-trained language models with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).
We designed four downstream text-classification tasks on different social science article corpora to verify the performance of the models.
- SSCI-BERT and SSCI-SciBERT are trained on the abstracts of articles published in SSCI journals from 1986 to 2021. The training set involved in the experiment included a total of `503910614 words`.
- Based on the idea of Domain-Adaptive Pretraining, `SSCI-BERT` and `SSCI-SciBERT` continue pretraining BERT and SciBERT, respectively, on a large collection of scientific-article abstracts, yielding pre-trained models for the automatic processing of social science research texts.
## News
- 2022-03-24 : SSCI-BERT and SSCI-SciBERT have been released for the first time.
## How to use
### Huggingface Transformers
The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can be used to load the SSCI-BERT and SSCI-SciBERT models directly from the Hub.
- SSCI-BERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
```
- SSCI-SciBERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
```
### Download Models
- The version of the model we provide is `PyTorch`.
### From Huggingface
- Download directly through Huggingface's official website.
- [KM4STfulltext/SSCI-BERT-e2](https://huggingface.co/KM4STfulltext/SSCI-BERT-e2)
- [KM4STfulltext/SSCI-SciBERT-e2](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e2)
- [KM4STfulltext/SSCI-BERT-e4 ](https://huggingface.co/KM4STfulltext/SSCI-BERT-e4)
- [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4)
### From Google Drive
We have put the model on Google Drive for users.
| Model | DATASET(year) | Base Model |
| ------------------------------------------------------------ | ------------- | ---------------------- |
| [SSCI-BERT-e2](https://drive.google.com/drive/folders/1xEDnovlwGO2JxqCaf3rdjS2cB6DOxhj4?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e2](https://drive.google.com/drive/folders/16DtIvnHvbrR_92MwgthRRsULW6An9te1?usp=sharing) (recommended) | 1986-2021 | Scibert-scivocab-cased |
| [SSCI-BERT-e4](https://drive.google.com/drive/folders/1sr6Av8p904Jrjps37g7E8aj4HnAHXSxW?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e4](https://drive.google.com/drive/folders/1ty-b4TIFu8FbilgC4VcI7Bgn_O5MDMVe?usp=sharing) | 1986-2021 | Scibert-scivocab-cased |
## Evaluation & Results
- We use SSCI-BERT and SSCI-SciBERT to perform text classification on different social science research corpora. The experimental results are as follows. The relevant datasets are available for download in the **Verification task datasets** folder of this project.
#### JCR Title Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 28.43 | 22.06 | 21.86 |
| Scibert-scivocab-cased | 38.48 | 33.89 | 33.92 |
| SSCI-BERT-e2 | 40.43 | 35.37 | 35.33 |
| SSCI-SciBERT-e2 | 41.35 | 37.27 | 37.25 |
| SSCI-BERT-e4 | 40.65 | 35.49 | 35.40 |
| SSCI-SciBERT-e4 | 41.13 | 36.96 | 36.94 |
| Support | 2300 | 2300 | 2300 |
#### JCR Abstract Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 48.59 | 42.8 | 42.82 |
| Scibert-scivocab-cased | 55.59 | 51.4 | 51.81 |
| SSCI-BERT-e2 | 58.05 | 53.31 | 53.73 |
| SSCI-SciBERT-e2 | 59.95 | 56.51 | 57.12 |
| SSCI-BERT-e4 | 59.00 | 54.97 | 55.59 |
| SSCI-SciBERT-e4 | 60.00 | 56.38 | 56.90 |
| Support | 2200 | 2200 | 2200 |
#### JCR Mixed Titles and Abstracts Dataset
| **Model** | **accuracy** | **macro avg** | **weighted avg** |
| ---------------------- | ------------ | -------------- | ----------------- |
| Bert-base-cased | 58.24 | 57.27 | 57.25 |
| Scibert-scivocab-cased | 59.58 | 58.65 | 58.68 |
| SSCI-BERT-e2 | 60.89 | 60.24 | 60.30 |
| SSCI-SciBERT-e2 | 60.96 | 60.54 | 60.51 |
| SSCI-BERT-e4 | 61.00 | 60.48 | 60.43 |
| SSCI-SciBERT-e4 | 61.24 | 60.71 | 60.75 |
| Support | 4500 | 4500 | 4500 |
#### SSCI Abstract Structural Function Recognition (Classify Dataset)
| | Bert-base-cased | SSCI-BERT-e2 | SSCI-BERT-e4 | support |
| ------------ | -------------------------- | ------------------- | ------------------- | ----------- |
| B | 63.77 | 64.29 | 64.63 | 224 |
| P | 53.66 | 57.14 | 57.99 | 95 |
| M | 87.63 | 88.43 | 89.06 | 323 |
| R | 86.81 | 88.28 | **88.47** | 419 |
| C | 78.32 | 79.82 | 78.95 | 316 |
| accuracy | 79.59 | 80.9 | 80.97 | 1377 |
| macro avg | 74.04 | 75.59 | 75.82 | 1377 |
| weighted avg | 79.02 | 80.32 | 80.44 | 1377 |
| | **Scibert-scivocab-cased** | **SSCI-SciBERT-e2** | **SSCI-SciBERT-e4** | **support** |
| B | 69.98 | **70.95** | **70.95** | 224 |
| P | 58.89 | **60.12** | 58.96 | 95 |
| M | 89.37 | **90.12** | 88.11 | 323 |
| R | 87.66 | 88.07 | 87.44 | 419 |
| C | 80.7 | 82.61 | **82.94** | 316 |
| accuracy | 81.63 | **82.72** | 82.06 | 1377 |
| macro avg | 77.32 | **78.37** | 77.68 | 1377 |
| weighted avg | 81.6 | **82.58** | 81.92 | 1377 |
## Cited
- If our content is helpful for your research work, please cite our research in your article.
- If you want to cite our research, you can use this URL (https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) as an alternative before our paper is published.
## Disclaimer
- The experimental results presented in the report only show the performance under a specific dataset and hyperparameter combination and are not fully representative of each model. The experimental results may change with different random seeds and computing equipment.
- **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.**
## Acknowledgment
- SSCI-BERT was trained based on [BERT-Base-Cased](https://github.com/google-research/bert).
- SSCI-SciBERT was trained based on [scibert-scivocab-cased](https://github.com/allenai/scibert).
|
asdc/roberta-base-biomedical-clinical-es-finetuned-text_classification | 44665c9ae5656e6e3cf72d5c28812b44b6c591c2 | 2022-06-15T10:14:53.000Z | [
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | asdc | null | asdc/roberta-base-biomedical-clinical-es-finetuned-text_classification | 32 | null | transformers | 7,024 | Entry not found |
facebook/roberta-hate-speech-dynabench-r1-target | 64b34ed9222de68a47908642251def8a88c83938 | 2022-06-10T22:36:34.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"arxiv:2012.15761",
"transformers"
] | text-classification | false | facebook | null | facebook/roberta-hate-speech-dynabench-r1-target | 32 | null | transformers | 7,025 | ---
language: en
---
# LFTW R1 Target
The R1 Target model from [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://arxiv.org/abs/2012.15761)
## Citation Information
```bibtex
@inproceedings{vidgen2021lftw,
title={Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection},
author={Bertie Vidgen and Tristan Thrush and Zeerak Waseem and Douwe Kiela},
booktitle={ACL},
year={2021}
}
```
Thanks to Kushal Tirumala and Adina Williams for helping the authors put the model on the hub! |
waboucay/camembert-large-finetuned-xnli_fr_3_classes-finetuned-repnum_wl-rua_wl_3_classes | 5aca238b6f97a45842eb58f02f68f9cae323174a | 2022-06-20T12:40:57.000Z | [
"pytorch",
"camembert",
"text-classification",
"fr",
"transformers",
"nli"
] | text-classification | false | waboucay | null | waboucay/camembert-large-finetuned-xnli_fr_3_classes-finetuned-repnum_wl-rua_wl_3_classes | 32 | null | transformers | 7,026 | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 75.4 | 75.4 |
| test | 76.1 | 76.0 | |
KoichiYasuoka/bert-large-japanese-wikipedia-ud-head | 0067f7c14b99922b78c017ef0f5b5e5dbfa5a061 | 2022-07-20T03:51:48.000Z | [
"pytorch",
"bert",
"question-answering",
"ja",
"dataset:universal_dependencies",
"transformers",
"japanese",
"wikipedia",
"dependency-parsing",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | KoichiYasuoka | null | KoichiYasuoka/bert-large-japanese-wikipedia-ud-head | 32 | null | transformers | 7,027 | ---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "question-answering"
- "dependency-parsing"
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# bert-large-japanese-wikipedia-ud-head
## Model Description
This is a BERT model pretrained on Japanese Wikipedia texts for dependency parsing (head detection on long unit words) cast as question answering, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.file_utils import hf_bucket_url
c=AutoConfig.from_pretrained(hf_bucket_url(bert,"deprel/config.json"))
d=x(hf_bucket_url(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(hf_bucket_url(bert,"tagger/config.json"))
t=x(hf_bucket_url(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
|
Sayan01/tiny-bert-qqp-distilled | 17f75e1c83b5db90afc168341ea90efb5c180ce7 | 2022-07-24T01:52:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Sayan01 | null | Sayan01/tiny-bert-qqp-distilled | 32 | null | transformers | 7,028 | Entry not found |
hf-internal-testing/tiny-random-bloom | e8ce66e3837114693d99ab121fbf96951684ce42 | 2022-06-27T18:38:43.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"eng",
"transformers",
"integration",
"text-generation"
] | text-generation | false | hf-internal-testing | null | hf-internal-testing/tiny-random-bloom | 32 | null | transformers | 7,029 | ---
language:
- eng
tags:
- integration
pipeline_tag: text-generation
---
# BigScience - testing model
This model aims to test the conversion between Megatron-LM and transformers. It is a small ```GPT-2```-like model that has been used to debug the script. Use it only for integration tests |
domenicrosati/deberta-mlm-test | 94d7b7819a8d1a33991c2f396cff22c2119dab62 | 2022-06-29T05:17:09.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
] | fill-mask | false | domenicrosati | null | domenicrosati/deberta-mlm-test | 32 | null | transformers | 7,030 | ---
license: mit
tags:
- fill-mask
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-mlm-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-mlm-test
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2792
- Accuracy: 0.4766
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.4466 | 1.0 | 2067 | 4.1217 | 0.3847 |
| 3.9191 | 2.0 | 4134 | 3.6562 | 0.4298 |
| 3.6397 | 3.0 | 6201 | 3.4417 | 0.4550 |
| 3.522 | 4.0 | 8268 | 3.3239 | 0.4692 |
| 3.4504 | 5.0 | 10335 | 3.2792 | 0.4766 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base-aeslc-23419 | 2208d22648160b08f252f6c6b4a26147e31f858e | 2022-07-01T15:20:09.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base-aeslc-23419 | 32 | null | transformers | 7,031 | Entry not found |
anneke/finetuning-distilbert-base-uncased-5000-samples | bfe2377c15873c4fa8941aadc7f3235726cc7222 | 2022-07-05T14:05:17.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | anneke | null | anneke/finetuning-distilbert-base-uncased-5000-samples | 32 | null | transformers | 7,032 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-5000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-5000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1147
- Accuracy: 0.982
- F1: 0.9904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SushantGautam/LogClassification | cbf587c7938109589b55be35d227eb1766ce9bdb | 2022-07-09T14:21:33.000Z | [
"pytorch",
"canine",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-classification | false | SushantGautam | null | SushantGautam/LogClassification | 32 | null | transformers | 7,033 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LogClassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LogClassification
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
tbboukhari/wav2vec2-base-timit-demo-google-colab | c8d45cd05187486896e095643319178159c38164 | 2022-07-20T13:44:07.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | tbboukhari | null | tbboukhari/wav2vec2-base-timit-demo-google-colab | 32 | null | transformers | 7,034 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5261
- Wer: 0.3351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5764 | 1.0 | 500 | 2.3358 | 1.0 |
| 0.9494 | 2.01 | 1000 | 0.6086 | 0.5448 |
| 0.4527 | 3.01 | 1500 | 0.4731 | 0.4685 |
| 0.307 | 4.02 | 2000 | 0.4432 | 0.4341 |
| 0.2366 | 5.02 | 2500 | 0.4343 | 0.4025 |
| 0.1934 | 6.02 | 3000 | 0.4284 | 0.4105 |
| 0.154 | 7.03 | 3500 | 0.4709 | 0.3936 |
| 0.14 | 8.03 | 4000 | 0.4296 | 0.3889 |
| 0.1189 | 9.04 | 4500 | 0.4864 | 0.3862 |
| 0.1057 | 10.04 | 5000 | 0.4903 | 0.3776 |
| 0.1034 | 11.04 | 5500 | 0.4889 | 0.3838 |
| 0.0899 | 12.05 | 6000 | 0.4680 | 0.3701 |
| 0.0864 | 13.05 | 6500 | 0.4981 | 0.3608 |
| 0.0714 | 14.06 | 7000 | 0.4608 | 0.3589 |
| 0.0673 | 15.06 | 7500 | 0.4970 | 0.3754 |
| 0.0606 | 16.06 | 8000 | 0.5344 | 0.3618 |
| 0.0603 | 17.07 | 8500 | 0.4980 | 0.3675 |
| 0.0588 | 18.07 | 9000 | 0.5339 | 0.3601 |
| 0.0453 | 19.08 | 9500 | 0.4973 | 0.3526 |
| 0.0433 | 20.08 | 10000 | 0.5359 | 0.3572 |
| 0.0421 | 21.08 | 10500 | 0.4885 | 0.3532 |
| 0.0359 | 22.09 | 11000 | 0.5184 | 0.3471 |
| 0.032 | 23.09 | 11500 | 0.5230 | 0.3483 |
| 0.0333 | 24.1 | 12000 | 0.5512 | 0.3474 |
| 0.0279 | 25.1 | 12500 | 0.5102 | 0.3437 |
| 0.0232 | 26.1 | 13000 | 0.5195 | 0.3384 |
| 0.0237 | 27.11 | 13500 | 0.5350 | 0.3355 |
| 0.0209 | 28.11 | 14000 | 0.5432 | 0.3368 |
| 0.023 | 29.12 | 14500 | 0.5261 | 0.3351 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Fagen/TrueNeuromiron1 | e297e1ce01e794b447edb33b172ca63230f1efb0 | 2022-07-14T20:10:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:unknown"
] | text-generation | false | Fagen | null | Fagen/TrueNeuromiron1 | 32 | null | transformers | 7,035 | ---
license: unknown
---
|
GeniusVoice/tinybertje-v2 | 69833f79c74ec941254c577a17a5c124c1770cc8 | 2022-07-20T09:00:25.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | GeniusVoice | null | GeniusVoice/tinybertje-v2 | 32 | null | transformers | 7,036 | Entry not found |
christofid/pgt | d73cf7457c1029a773f6030b193ee699bad2ea09 | 2022-07-27T09:00:11.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:mit"
] | text-generation | false | christofid | null | christofid/pgt | 32 | 0 | transformers | 7,037 | ---
license: mit
---
### PGT
PGT is a prompt-based GPT-2 model trained to facilitate three patent-generation-related tasks, namely: *part-of-patent generation*, *part-of-patent editing* and *patent coherence check*. For more information about the dataset and the training procedure, we refer the reader to [our paper](https://openreview.net/pdf?id=dLHtwZKvJmE).
The task is specified by appending a short sentence to the end of a given input. The general format is:
`input <|sep|> task specific prompt <|sep|>`
In all cases, the generated output ends with the special token <|endoftext|> to facilitate postprocessing.
### Supported tasks
**Part-of-patent generation** attempts to generate one part of a patent given another, already existing part of it as input. The model has been trained to perform title-to-abstract and abstract-to-claim generation, as well as their inverses. For the claim case, the model was only exposed to independent claims during training. Input example for part-of-patent generation in the abstract-to-title case:
`An interesting patent abstract. <|sep|> Given the above abstract, suggest a title <|sep|>`
**Part-of-patent editing** attempts to suggest alternatives for some highlighted parts of a patent abstract or claim. These parts are marked in the input with the special [MASK] token. The expected size of these masked parts ranges from a single word to a short phrase. If more than one mask is given in the input, the generated suggestions are separated in the output by the special <|mask_sep|> token. Input example for part-of-patent editing on a claim input:
`An interesting patent claim with a [MASK] part. <|sep|> Replace the [MASK] tokens in the above claim <|sep|>`
The **coherence check** assesses the quality of a patent by examining whether two given parts of a patent could belong to the same patent in terms of content and syntax. The input patent parts can be a title, abstract or claim. The expected output is Yes or No. Input example for the coherence check task with a title and a claim as input:
`A patent title <|sep|> An interesting patent claim. <|sep|> Do the above title and claim belong to the same patent? <|sep|>"`
Further prompts and tasks can be tried in a zero-shot fashion.
The model and the tasks are also integrated and available via the [GT4SD python library](https://github.com/GT4SD/gt4sd-core/blob/main/notebooks/explore-pgt.ipynb).
### Example
A full example of part-of-patent generation
```
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("christofid/pgt")
model = AutoModelForCausalLM.from_pretrained("christofid/pgt")
text = "Automated patent generation <|sep|> Given the above title, suggest an abstract <|sep|>"
text_encoded = tokenizer.encode(text, return_tensors="pt")
generated = model.generate(text_encoded, do_sample=True, top_k=50, num_return_sequences = 3, max_length=512)
generated_text = [tokenizer.decode(case).split("<|endoftext|>")[0].strip() for case in generated]
```
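The coherence check can be run the same way, reusing the `tokenizer` and `model` loaded above; the prompt follows the format documented earlier, and the title and claim below are illustrative placeholders rather than real patent text.
```
# Coherence check, reusing the `tokenizer` and `model` from the example above.
# The title and claim here are illustrative placeholders.
coherence_prompt = "Automated patent generation <|sep|> A method for generating patent text with a prompt-based language model. <|sep|> Do the above title and claim belong to the same patent? <|sep|>"
encoded = tokenizer.encode(coherence_prompt, return_tensors="pt")
output = model.generate(encoded, do_sample=False, max_length=encoded.shape[1] + 5)
answer = tokenizer.decode(output[0][encoded.shape[1]:]).split("<|endoftext|>")[0].strip()
print(answer)  # expected to be "Yes" or "No"
```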
### BibTeX entry and citation info
```
@inproceedings{christofidellis2022pgt,
title={PGT: a prompt based generative transformer for the patent domain},
author={Christofidellis, Dimitrios and Torres, Antonio Berrios and Dave, Ashish and Roveri, Manuel and Schmidt, Kristin and Swaminathan, Sarath and Vandierendonck, Hans and Zubarev, Dmitry and Manica, Matteo},
booktitle={ICML 2022 Workshop on Knowledge Retrieval and Language Models},
year={2022}
}
```
|
rufimelo/Legal-BERTimbau-large | 6513127ccfa4c94213a86c86f79fbd68c82e7be6 | 2022-07-25T13:49:27.000Z | [
"pytorch",
"bert",
"fill-mask",
"pt",
"transformers",
"license:mit",
"autotrain_compatible"
] | fill-mask | false | rufimelo | null | rufimelo/Legal-BERTimbau-large | 32 | 1 | transformers | 7,038 | ---
language:
- pt
thumbnail: "Portugues BERT for the Legal Domain"
tags:
- bert
- pytorch
license: "mit"
widget:
- text: "O advogado apresentou [MASK] ao juíz."
---
# Legal_BERTimbau
## Introduction
Legal_BERTimbau Large is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) Large.
"BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/)."
The performance of language models can change drastically when there is a domain shift between training and test data. In order to create a Portuguese language model adapted to the legal domain, the original BERTimbau model was submitted to a fine-tuning stage in which one "pre-training" epoch was performed over 30,000 Portuguese legal documents available online.
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
| `rufimelo/Legal-BERTimbau-large` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large")
```
### Masked language modeling prediction example
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large")
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('O advogado apresentou [MASK] para o juíz')
# [{'score': 0.5034703612327576,
#'token': 8190,
#'token_str': 'recurso',
#'sequence': 'O advogado apresentou recurso para o juíz'},
#{'score': 0.07347951829433441,
#'token': 21973,
#'token_str': 'petição',
#'sequence': 'O advogado apresentou petição para o juíz'},
#{'score': 0.05165359005331993,
#'token': 4299,
#'token_str': 'resposta',
#'sequence': 'O advogado apresentou resposta para o juíz'},
#{'score': 0.04611917585134506,
#'token': 5265,
#'token_str': 'exposição',
#'sequence': 'O advogado apresentou exposição para o juíz'},
#{'score': 0.04068068787455559,
#'token': 19737, 'token_str':
#'alegações',
#'sequence': 'O advogado apresentou alegações para o juíz'}]
```
### For BERT embeddings
```python
import torch
from transformers import AutoModel, AutoTokenizer
# load the tokenizer as well, so this snippet is self-contained
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-large')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-large')
input_ids = tokenizer.encode('O advogado apresentou recurso para o juíz', return_tensors='pt')
with torch.no_grad():
outs = model(input_ids)
encoded = outs[0][0, 1:-1]
#tensor([[ 0.0328, -0.4292, -0.6230, ..., -0.3048, -0.5674, 0.0157],
#[-0.3569, 0.3326, 0.7013, ..., -0.7778, 0.2646, 1.1310],
#[ 0.3169, 0.4333, 0.2026, ..., 1.0517, -0.1951, 0.7050],
#...,
#[-0.3648, -0.8137, -0.4764, ..., -0.2725, -0.4879, 0.6264],
#[-0.2264, -0.1821, -0.3011, ..., -0.5428, 0.1429, 0.0509],
#[-1.4617, 0.6281, -0.0625, ..., -1.2774, -0.4491, 0.3131]])
```
## Citation
If you use this work, please cite BERTimbau's work:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```
|
anzorq/kbd_lat-835k_ru-3M_t5-small | 134aa6d7cf2e76d9f5c0bfeb81174785f6e400a7 | 2022-07-26T21:21:20.000Z | [
"pytorch",
"jax",
"t5",
"text2text-generation",
"kbd",
"ru",
"dataset:anzorq/kbd_lat-835k_ru-3M",
"transformers",
"circassian",
"kabardian",
"license:unknown",
"autotrain_compatible"
] | text2text-generation | false | anzorq | null | anzorq/kbd_lat-835k_ru-3M_t5-small | 32 | null | transformers | 7,039 | ---
language:
- kbd
- ru
tags:
- circassian
- kabardian
license: unknown
datasets:
- anzorq/kbd_lat-835k_ru-3M
---
t5-v1_1-small pretrained with mlm task on
• kbd (custom latin script) 835K lines: a pile of scraped text from news sites, books etc.
• ru 3M lines: wiki corpus from OPUS
tokenizer: sentencepiece unigram, 8K, shared vocabulary |
AnonymousSub/recipes-roberta-base-tokenwise-token-and-step-losses_with_ingr | 3740a1be4fa7952e0d7fa32d129fd4c2b0bdcd8c | 2022-07-28T02:01:28.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | false | AnonymousSub | null | AnonymousSub/recipes-roberta-base-tokenwise-token-and-step-losses_with_ingr | 32 | null | transformers | 7,040 | Entry not found |
aware-ai/wav2vec2-xls-r-300m | e3f05d57fee07844432244aaf29cb581d2ffa698 | 2022-07-30T09:39:20.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | aware-ai | null | aware-ai/wav2vec2-xls-r-300m | 32 | null | transformers | 7,041 | Entry not found |
xlm-roberta-large-finetuned-conll02-spanish | 9224ae48fe1128e0bd8c5b43738a144d6ce5e335 | 2022-07-22T08:07:22.000Z | [
"pytorch",
"rust",
"xlm-roberta",
"fill-mask",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:1911.02116",
"arxiv:1910.09700",
"transformers",
"autotrain_compatible"
] | fill-mask | false | null | null | xlm-roberta-large-finetuned-conll02-spanish | 31 | null | transformers | 7,042 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- no
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
---
# xlm-roberta-large-finetuned-conll02-spanish
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
The XLM-RoBERTa model was proposed in [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. It is based on Facebook's RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. This model is [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) fine-tuned with the [CoNLL-2002](https://huggingface.co/datasets/conll2002) dataset in Spanish.
- **Developed by:** See [associated paper](https://arxiv.org/abs/1911.02116)
- **Model type:** Multi-lingual language model
- **Language(s) (NLP):** XLM-RoBERTa is a multilingual model trained on 100 different languages; see [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr) for full list; model is fine-tuned on a dataset in Spanish.
- **License:** More information needed
- **Related Models:** [RoBERTa](https://huggingface.co/roberta-base), [XLM](https://huggingface.co/docs/transformers/model_doc/xlm)
- **Parent Model:** [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large)
- **Resources for more information:**
  - [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/xlmr)
  - [Associated Paper](https://arxiv.org/abs/1911.02116)
  - [CoNLL-2002 data card](https://huggingface.co/datasets/conll2002)
# Uses
## Direct Use
The model is a language model. The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.
## Downstream Use
Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
**CONTENT WARNING: Readers should be made aware that language generated by this model may be disturbing or offensive to some and may propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
See the following resources for training data and training procedure details:
- [XLM-RoBERTa-large model card](https://huggingface.co/xlm-roberta-large)
- [CoNLL-2002 data card](https://huggingface.co/datasets/conll2002)
- [Associated paper](https://arxiv.org/pdf/1911.02116.pdf)
# Evaluation
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for evaluation details.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 500 32GB Nvidia V100 GPUs (from the [associated paper](https://arxiv.org/pdf/1911.02116.pdf))
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
See the [associated paper](https://arxiv.org/pdf/1911.02116.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{conneau2019unsupervised,
title={Unsupervised Cross-lingual Representation Learning at Scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
```
**APA:**
- Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., ... & Stoyanov, V. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly within a pipeline for NER.
<details>
<summary> Click to expand </summary>
```python
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> from transformers import pipeline
>>> tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll02-spanish")
>>> model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll02-spanish")
>>> classifier = pipeline("ner", model=model, tokenizer=tokenizer)
>>> classifier("Efectuaba un vuelo entre bombay y nueva york.")
[{'end': 30,
'entity': 'B-LOC',
'index': 7,
'score': 0.95703226,
'start': 25,
'word': '▁bomba'},
{'end': 39,
'entity': 'B-LOC',
'index': 10,
'score': 0.9771854,
'start': 34,
'word': '▁nueva'},
{'end': 43,
'entity': 'I-LOC',
'index': 11,
'score': 0.9914097,
'start': 40,
'word': '▁yor'}]
```
</details>
|
BramVanroy/gpt-neo-125M_finetuned-tolkien | 3c91dbac23866d12ec91bf1c73a92087b1d272c9 | 2021-10-04T09:30:59.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"en",
"transformers"
] | text-generation | false | BramVanroy | null | BramVanroy/gpt-neo-125M_finetuned-tolkien | 31 | 1 | transformers | 7,043 | ---
language:
- en
---
*First attempt. Likely poor quality!*
Finetuned version of GPT-Neo 125M on some of Tolkien's works, namely Beren and Lúthien, The Lord of The Rings (+ appendices), and The Hobbit.
Trained with a strided sliding window. Paragraphs were separated by new lines.
- batch size: 32
- train epochs: 10
- context window size: 128
- input chunk size: 2048
- current revision: chkpt 6300
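A minimal generation sketch (assuming the standard `transformers` text-generation pipeline; the prompt and sampling settings below are illustrative, not taken from the original training setup):
```python
from transformers import pipeline
# Load the fine-tuned GPT-Neo 125M checkpoint from the Hub
generator = pipeline("text-generation", model="BramVanroy/gpt-neo-125M_finetuned-tolkien")
# Sample a short continuation; the prompt and parameters are placeholders
output = generator(
    "In the deep places of the Misty Mountains,",
    max_length=64,
    do_sample=True,
    top_p=0.95,
)
print(output[0]["generated_text"])
```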
|
Cameron/BERT-mdgender-wizard | 10d96a9c252b6f11aa594e693dea08e30db38a99 | 2021-05-18T17:33:48.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | Cameron | null | Cameron/BERT-mdgender-wizard | 31 | null | transformers | 7,044 | Entry not found |
DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | f0aca5ea2d7bef0378bcbd0a6a00ed1072d83b5e | 2021-05-18T18:06:20.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"multilingual",
"arxiv:2104.09947",
"transformers",
"Dutch",
"French",
"English",
"Tweets",
"Sentiment analysis"
] | text-classification | false | DTAI-KULeuven | null | DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support | 31 | null | transformers | 7,045 | ---
language: "multilingual"
tags:
- Dutch
- French
- English
- Tweets
- Sentiment analysis
widget:
- text: "I really wish I could leave my house after midnight, this makes no sense!"
---
# Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT
[Blog post »](https://people.cs.kuleuven.be/~pieter.delobelle/attitudes-towards-covid-19-measures/?utm_source=huggingface&utm_medium=social&utm_campaign=corona_tweets) · [paper »](http://arxiv.org/abs/2104.09947)
This model can be used to determine whether or not a tweet expresses support for a curfew. The model was trained on manually labeled tweets from Belgium in Dutch, French and English.
We categorized several months' worth of these Tweets by topic (government COVID measure) and opinion expressed. Below is a timeline of the relative number of Tweets on the curfew topic (middle) and the fraction of those Tweets that find the curfew too strict, too loose, or a suitable measure (bottom), with the number of daily cases in Belgium to give context on the pandemic situation (top).

Models used in this paper are on HuggingFace:
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support
- https://huggingface.co/DTAI-KULeuven/mbert-corona-tweets-belgium-topics
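As a rough usage sketch for the curfew-support model (the predicted label names come from the model's own config; the example tweet mirrors the widget text above):
```python
from transformers import pipeline
# Load the fine-tuned multilingual BERT classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="DTAI-KULeuven/mbert-corona-tweets-belgium-curfew-support",
)
# Returns the predicted label and score as defined in the model's config
print(classifier("I really wish I could leave my house after midnight, this makes no sense!"))
```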
|
DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 | 6e74d16e5521c8f0ae3773eaf299a2d6155ff208 | 2022-03-24T11:52:49.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hi",
"dataset:mozilla-foundation/common_voice_7_0",
"transformers",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | DrishtiSharma | null | DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 | 31 | null | transformers | 7,046 | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- hi
- robust-speech-event
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-large-xls-r-300m-hi-CV7
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: hi
metrics:
- name: Test WER
type: wer
value: 0.3531946325249292
- name: Test CER
type: cer
value: 0.11310803379493076
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: vot
metrics:
- name: Test WER
type: wer
value: NA
- name: Test CER
type: cer
value: NA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-CV7
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6588
- Wer: 0.2987
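For quick inference, a minimal sketch using the generic ASR pipeline (the audio path is a placeholder; input should be 16 kHz mono speech):
```python
from transformers import pipeline
# Load the fine-tuned checkpoint as a speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7",
)
# "sample.wav" is a placeholder for a 16 kHz mono Hindi recording
print(asr("sample.wav")["text"])
```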
### Evaluation Commands
1. To evaluate on mozilla-foundation/common_voice_7_0 with the test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-hi-CV7 --dataset mozilla-foundation/common_voice_7_0 --config hi --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
NA
### Training hyperparameters
The following hyperparameters were used during training:
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.809 | 1.36 | 200 | 6.2066 | 1.0 |
| 4.3402 | 2.72 | 400 | 3.5184 | 1.0 |
| 3.4365 | 4.08 | 600 | 3.2779 | 1.0 |
| 1.8643 | 5.44 | 800 | 0.9875 | 0.6270 |
| 0.7504 | 6.8 | 1000 | 0.6382 | 0.4666 |
| 0.5328 | 8.16 | 1200 | 0.6075 | 0.4505 |
| 0.4364 | 9.52 | 1400 | 0.5785 | 0.4215 |
| 0.3777 | 10.88 | 1600 | 0.6279 | 0.4227 |
| 0.3374 | 12.24 | 1800 | 0.6536 | 0.4192 |
| 0.3236 | 13.6 | 2000 | 0.5911 | 0.4047 |
| 0.2877 | 14.96 | 2200 | 0.5955 | 0.4097 |
| 0.2643 | 16.33 | 2400 | 0.5923 | 0.3744 |
| 0.2421 | 17.68 | 2600 | 0.6307 | 0.3814 |
| 0.2218 | 19.05 | 2800 | 0.6036 | 0.3764 |
| 0.2046 | 20.41 | 3000 | 0.6286 | 0.3797 |
| 0.191 | 21.77 | 3200 | 0.6517 | 0.3889 |
| 0.1856 | 23.13 | 3400 | 0.6193 | 0.3661 |
| 0.1721 | 24.49 | 3600 | 0.7034 | 0.3727 |
| 0.1656 | 25.85 | 3800 | 0.6293 | 0.3591 |
| 0.1532 | 27.21 | 4000 | 0.6075 | 0.3611 |
| 0.1507 | 28.57 | 4200 | 0.6313 | 0.3565 |
| 0.1381 | 29.93 | 4400 | 0.6564 | 0.3578 |
| 0.1359 | 31.29 | 4600 | 0.6724 | 0.3543 |
| 0.1248 | 32.65 | 4800 | 0.6789 | 0.3512 |
| 0.1198 | 34.01 | 5000 | 0.6442 | 0.3539 |
| 0.1125 | 35.37 | 5200 | 0.6676 | 0.3419 |
| 0.1036 | 36.73 | 5400 | 0.7017 | 0.3435 |
| 0.0982 | 38.09 | 5600 | 0.6828 | 0.3319 |
| 0.0971 | 39.45 | 5800 | 0.6112 | 0.3351 |
| 0.0968 | 40.81 | 6000 | 0.6424 | 0.3252 |
| 0.0893 | 42.18 | 6200 | 0.6707 | 0.3304 |
| 0.0878 | 43.54 | 6400 | 0.6432 | 0.3236 |
| 0.0827 | 44.89 | 6600 | 0.6696 | 0.3240 |
| 0.0788 | 46.26 | 6800 | 0.6564 | 0.3180 |
| 0.0753 | 47.62 | 7000 | 0.6574 | 0.3130 |
| 0.0674 | 48.98 | 7200 | 0.6698 | 0.3175 |
| 0.0676 | 50.34 | 7400 | 0.6441 | 0.3142 |
| 0.0626 | 51.7 | 7600 | 0.6642 | 0.3121 |
| 0.0617 | 53.06 | 7800 | 0.6615 | 0.3117 |
| 0.0599 | 54.42 | 8000 | 0.6634 | 0.3059 |
| 0.0538 | 55.78 | 8200 | 0.6464 | 0.3033 |
| 0.0571 | 57.14 | 8400 | 0.6503 | 0.3018 |
| 0.0491 | 58.5 | 8600 | 0.6625 | 0.3025 |
| 0.0511 | 59.86 | 8800 | 0.6588 | 0.2987 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
FremyCompany/xls-r-2b-nl-v2_lm-5gram-os | 94327a35a5dd57fef3d038fbe605923f297c9c1d | 2022-03-23T18:28:14.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"nl_BE",
"nl_NL",
"robust-speech-event",
"model-index"
] | automatic-speech-recognition | false | FremyCompany | null | FremyCompany/xls-r-2b-nl-v2_lm-5gram-os | 31 | 1 | transformers | 7,047 | ---
language:
- nl
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- model_for_talk
- mozilla-foundation/common_voice_8_0
- nl
- nl_BE
- nl_NL
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: xls-r-nl-v1-cv8-lm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: nl
metrics:
- name: Test WER
type: wer
value: 4.06
- name: Test CER
type: cer
value: 1.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: nl
metrics:
- name: Test WER
type: wer
value: 17.77
- name: Test CER
type: cer
value: 9.77
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: nl
metrics:
- name: Test WER
type: wer
value: 16.32
---
# XLS-R-based CTC model with 5-gram language model from Open Subtitles
This model is a version of [facebook/wav2vec2-xls-r-2b-22-to-16](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16) fine-tuned mainly on the [CGN dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/), as well as the [MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - NL](https://commonvoice.mozilla.org) dataset (see details below), on which a large 5-gram language model is added based on the Open Subtitles Dutch corpus. This model achieves the following results on the evaluation set (of Common Voice 8.0):
- Wer: 0.04057
- Cer: 0.01222
## Model description
The model takes 16kHz sound input, and uses a Wav2Vec2ForCTC decoder with 48 letters to output the letter-transcription probabilities per frame.
To improve accuracy, a beam-search decoder based on `pyctcdecode` is then used; it reranks the most promising alignments based on a 5-gram language model trained on the Open Subtitles Dutch corpus.
## Intended uses & limitations
This model can be used to transcribe spoken Dutch or Flemish to text (without punctuation).
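A minimal inference sketch, assuming the repository ships the processor together with the `pyctcdecode` language model (the audio array below is a placeholder and must be 16 kHz mono):
```python
import torch
from transformers import AutoModelForCTC, AutoProcessor
model_id = "FremyCompany/xls-r-2b-nl-v2_lm-5gram-os"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)  # loads the LM-enabled decoder if shipped
# `speech` is a placeholder: a 1-D array of 16 kHz mono audio samples
speech = [0.0] * 16_000
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
# With a Wav2Vec2ProcessorWithLM, batch_decode reranks hypotheses with the 5-gram LM
print(processor.batch_decode(logits.numpy()).text)
```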
## Training and evaluation data
The model was:
0. initialized with [the 2B parameter model from Facebook](https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16).
1. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
2. trained `1` epoch (36000 iterations of batch size 32) on [the `cgn` dataset](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
3. trained `5` epochs (6000 iterations of batch size 32) on [the `cv8/nl` dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0).
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0 |
GKLMIP/electra-khmer-small-uncased | 6e1ef7197c46c929ff90f78cbb0453deb60abb5d | 2021-07-31T05:39:36.000Z | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | GKLMIP | null | GKLMIP/electra-khmer-small-uncased | 31 | null | transformers | 7,048 | https://github.com/GKLMIP/Pretrained-Models-For-Khmer
If you use our model, please consider citing our paper:
```
@article{,
author="Jiang, Shengyi
and Fu, Sihui
and Lin, Nankai
and Fu, Yingwen",
title="Pre-trained Models and Evaluation Data for the Khmer Language",
year="2021",
publisher="Tsinghua Science and Technology",
}
``` |
Geotrend/bert-base-tr-cased | f3827b90bd2393dc83292addd95dc1598c66edc0 | 2021-05-18T20:12:30.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"tr",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-tr-cased | 31 | null | transformers | 7,049 | ---
language: tr
datasets: wikipedia
license: apache-2.0
---
# bert-base-tr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-tr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-tr-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Hellisotherpeople/T5_Reassuring_Parables | 40c8c1be8c3cf65c557644c0e3c222ca5ebfb283 | 2021-12-25T06:48:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | Hellisotherpeople | null | Hellisotherpeople/T5_Reassuring_Parables | 31 | null | transformers | 7,050 | https://imgs.xkcd.com/comics/reassuring.png
|
Helsinki-NLP/opus-mt-en-sq | b2365d93766da5d5bb7570a2289491bf9db40a44 | 2021-09-09T21:39:15.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"sq",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-sq | 31 | null | transformers | 7,051 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-sq
* source languages: en
* target languages: sq
* OPUS readme: [en-sq](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-sq/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-sq/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.sq | 46.5 | 0.669 |
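A minimal translation sketch using the generic pipeline (the input sentence is illustrative):
```python
from transformers import pipeline
# Load the Marian English-to-Albanian model as a translation pipeline
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-sq")
print(translator("How are you today?")[0]["translation_text"])
```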
|
Helsinki-NLP/opus-mt-gl-es | 1acbbb2ecac07dfcacd9e1cbcaa3a40e4db23a0c | 2021-01-18T08:52:42.000Z | [
"pytorch",
"marian",
"text2text-generation",
"gl",
"es",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-gl-es | 31 | null | transformers | 7,052 | ---
language:
- gl
- es
tags:
- translation
license: apache-2.0
---
### glg-spa
* source group: Galician
* target group: Spanish
* OPUS readme: [glg-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md)
* model: transformer-align
* source language(s): glg
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.glg.spa | 72.2 | 0.836 |
### System Info:
- hf_name: glg-spa
- source_languages: glg
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/glg-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gl', 'es']
- src_constituents: {'glg'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/glg-spa/opus-2020-06-16.test.txt
- src_alpha3: glg
- tgt_alpha3: spa
- short_pair: gl-es
- chrF2_score: 0.836
- bleu: 72.2
- brevity_penalty: 0.982
- ref_len: 17443.0
- src_name: Galician
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: gl
- tgt_alpha2: es
- prefer_old: False
- long_pair: glg-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-sv-th | f3466880baa26cf96f27dd60599b86b09fde36e4 | 2021-09-10T14:09:49.000Z | [
"pytorch",
"marian",
"text2text-generation",
"sv",
"th",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-sv-th | 31 | null | transformers | 7,053 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-th
* source languages: sv
* target languages: th
* OPUS readme: [sv-th](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-th/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-th/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-th/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-th/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.th | 21.2 | 0.373 |
|
Jorgeutd/bert-large-uncased-finetuned-ner | 9fac416c8e7631a311d169f7827502af9b521e52 | 2022-02-16T16:05:14.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | token-classification | false | Jorgeutd | null | Jorgeutd/bert-large-uncased-finetuned-ner | 31 | null | transformers | 7,054 | ---
license: apache-2.0
tags:
- generated_from_trainer
language: en
widget:
- text: "My name is Scott and I live in Columbus."
- text: "My name is Scott and I am calling from Buffalo, NY. I would like to file a complain with United Airlines."
- text: "Apple was founded in 1976 by Steve Jobs, Steve Wozniak and Ronald Wayne."
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-large-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9504719600222099
- name: Recall
type: recall
value: 0.9574896520863632
- name: F1
type: f1
value: 0.9539679001337494
- name: Accuracy
type: accuracy
value: 0.9885618059637473
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-ner
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
- Precision: 0.9505
- Recall: 0.9575
- F1: 0.9540
- Accuracy: 0.9886
## Model description
More information needed
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well to all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of results may be necessary to handle those cases.
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("Jorgeutd/bert-large-uncased-finetuned-ner")
model = AutoModelForTokenClassification.from_pretrained("Jorgeutd/bert-large-uncased-finetuned-ner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Scott and I live in Ohio"
ner_results = nlp(example)
print(ner_results)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1997 | 1.0 | 878 | 0.0576 | 0.9316 | 0.9257 | 0.9286 | 0.9837 |
| 0.04 | 2.0 | 1756 | 0.0490 | 0.9400 | 0.9513 | 0.9456 | 0.9870 |
| 0.0199 | 3.0 | 2634 | 0.0557 | 0.9436 | 0.9540 | 0.9488 | 0.9879 |
| 0.0112 | 4.0 | 3512 | 0.0602 | 0.9443 | 0.9569 | 0.9506 | 0.9881 |
| 0.0068 | 5.0 | 4390 | 0.0631 | 0.9451 | 0.9589 | 0.9520 | 0.9882 |
| 0.0044 | 6.0 | 5268 | 0.0638 | 0.9510 | 0.9567 | 0.9538 | 0.9885 |
| 0.003 | 7.0 | 6146 | 0.0722 | 0.9495 | 0.9560 | 0.9527 | 0.9885 |
| 0.0016 | 8.0 | 7024 | 0.0762 | 0.9491 | 0.9595 | 0.9543 | 0.9887 |
| 0.0018 | 9.0 | 7902 | 0.0769 | 0.9496 | 0.9542 | 0.9519 | 0.9883 |
| 0.0009 | 10.0 | 8780 | 0.0778 | 0.9505 | 0.9575 | 0.9540 | 0.9886 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
KBLab/bart-base-swedish-cased | 5e2e4b3d0b5a34f6c3e152f1b7d11fc944e27fa0 | 2022-04-14T10:55:32.000Z | [
"pytorch",
"bart",
"text2text-generation",
"sv",
"arxiv:1910.13461",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | KBLab | null | KBLab/bart-base-swedish-cased | 31 | 1 | transformers | 7,055 | ---
language: sv
widget:
- text: "Jag har ätit en <mask>"
---
## KB-BART
A [BART](https://arxiv.org/abs/1910.13461) model trained on a Swedish corpus consisting of 15 billion tokens (about 80GB of text). The model was trained with [Fairseq](https://github.com/pytorch/fairseq), and converted to be compatible with Huggingface.
Training code can be found [here](https://github.com/kb-labb/kb_bart).
## Usage
```python
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast, AutoTokenizer
model = BartForConditionalGeneration.from_pretrained("KBLab/bart-base-swedish-cased")
tok = AutoTokenizer.from_pretrained("KBLab/bart-base-swedish-cased")
model.eval()
input_ids = tok.encode(
"Jag har ätit en utsökt <mask> på restaurang vid <mask> .", return_tensors="pt"
)
# Simple greedy search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
num_beams=1,
do_sample=False,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang vid havet på restaurang vid havet.</s>'
# Sampling
output_ids = model.generate(
input_ids,
min_length=15,
max_length=20,
num_beams=1,
do_sample=True,
)
tok.decode(output_ids[0])
#'</s><s> Jag har ätit en utsökt god mat som de tagit in på restaurang vid avröjda</s>'
# Beam search
output_ids = model.generate(
input_ids,
min_length=15,
max_length=25,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=True,
num_return_sequences=6
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet. Jag har varit ute och gått en sväng.</s><pad><pad>'
# Diverse beam generation
output_ids = model.generate(
input_ids,
min_length=50,
max_length=100,
no_repeat_ngram_size=3,
num_beams=8,
early_stopping=True,
do_sample=False,
num_return_sequences=6,
num_beam_groups=8,
diversity_penalty=2.0,
)
tok.decode(output_ids[0])
# '</s><s> Jag har ätit en utsökt middag på restaurang vid havet på restaurang. Jag har varit på restaurang i två dagar... Jag..,..!!!.. Så.. Nu.. Hej.. Vi.. Här.</s>'
```
## Acknowledgements
We gratefully acknowledge the HPC RIVR consortium ([www.hpc-rivr.si](https://www.hpc-rivr.si/)) and EuroHPC JU ([eurohpc-ju.europa.eu/](https://eurohpc-ju.europa.eu/)) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science ([www.izum.si](https://www.izum.si/)). |
M47Labs/spanish_news_classification_headlines | 83d2420324598f7a4fe69b1122d00660992fb147 | 2021-09-07T11:56:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | M47Labs | null | M47Labs/spanish_news_classification_headlines | 31 | null | transformers | 7,056 | ---
widget:
- text: "El dólar se dispara tras la reunión de la Fed"
---
# Spanish News Classification Headlines
SNCH: this model was developed by [M47Labs](https://www.m47labs.com/es/) for text classification. The base model used was [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased), fine-tuned on a 1000-example dataset.
## Dataset Sample
Dataset size : 1000
Columns: idTask,task content 1,idTag,tag.
|idTask|task content 1|idTag|tag|
|------|------|------|------|
|3637d9ac-119c-4a8f-899c-339cf5b42ae0|Alcalá de Guadaíra celebra la IV Semana de la Diversidad Sexual con acciones de sensibilización|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|d56bab52-0029-45dd-ad90-5c17d4ed4c88|El Archipiélago Chinijo Graciplus se impone en el Trofeo Centro Comercial Rubicón|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|dec70bc5-4932-4fa2-aeac-31a52377be02|Un total de 39 personas padecen ELA actualmente en la provincia|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|fb396ba9-fbf1-4495-84d9-5314eb731405|Eurocopa 2021 : Italia vence a Gales y pasa a octavos con su candidatura reforzada|ed198b6d-a5b9-4557-91ff-c0be51707dec|deportes|
|bc5a36ca-4e0a-422e-9167-766b41008c01|Resolución de 10 de junio de 2021, del Ayuntamiento de Tarazona de La Mancha (Albacete), referente a la convocatoria para proveer una plaza.|81b36360-6cbf-4ffa-b558-9ef95c136714|sociedad|
|a87f8703-ce34-47a5-9c1b-e992c7fe60f6|El primer ministro sueco pierde una moción de censura|209ae89e-55b4-41fd-aac0-5400feab479e|politica|
|d80bdaad-0ad5-43a0-850e-c473fd612526|El dólar se dispara tras la reunión de la Fed|11925830-148e-4890-a2bc-da9dc059dc17|economia|
## Labels:
* ciencia_tecnologia
* clickbait
* cultura
* deportes
* economia
* educacion
* medio_ambiente
* opinion
* politica
* sociedad
## Example of Use
### Pipeline
```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification,TextClassificationPipeline
review_text = 'los vehiculos que esten esperando pasajaeros deberan estar apagados para reducir emisiones'
path = "M47Labs/spanish_news_classification_headlines"
tokenizer = AutoTokenizer.from_pretrained(path)
model = BertForSequenceClassification.from_pretrained(path)
nlp = TextClassificationPipeline(task = "text-classification",
model = model,
tokenizer = tokenizer)
print(nlp(review_text))
```
```[{'label': 'medio_ambiente', 'score': 0.5648820996284485}]```
### Pytorch
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = 'M47Labs/spanish_news_classification_headlines'
MAX_LEN = 32
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
texto = "las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno"
encoded_review = tokenizer.encode_plus(
texto,
max_length=MAX_LEN,
add_special_tokens=True,
#return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
)
input_ids = encoded_review['input_ids']
attention_mask = encoded_review['attention_mask']
output = model(input_ids, attention_mask)
_, prediction = torch.max(output['logits'], dim=1)
print(f'Review text: {texto}')
print(f'Sentiment : {model.config.id2label[prediction.detach().cpu().numpy()[0]]}')
```
```Review text: las emisiones estan bajando, debido a las medidas ambientales tomadas por el gobierno```
```Sentiment : medio_ambiente```
A more in depth example on how to use the model can be found in this colab notebook: https://colab.research.google.com/drive/1XsKea6oMyEckye2FePW_XN7Rf8v41Cw_?usp=sharing
## Finetune Hyperparameters
* MAX_LEN = 32
* TRAIN_BATCH_SIZE = 8
* VALID_BATCH_SIZE = 4
* EPOCHS = 5
* LEARNING_RATE = 1e-05
## Train Results
|n_example|epoch|loss|acc|
|------|------|------|------|
|100|0|2.286327266693115|12.5|
|100|1|2.018876111507416|40.0|
|100|2|1.8016730904579163|43.75|
|100|3|1.6121837735176086|46.25|
|100|4|1.41565443277359|68.75|
|n_example|epoch|loss|acc|
|------|------|------|------|
|500|0|2.0770938420295715|24.5|
|500|1|1.6953029704093934|50.25|
|500|2|1.258900796175003|64.25|
|500|3|0.8342628020048142|78.25|
|500|4|0.5135736921429634|90.25|
|n_example|epoch|loss|acc|
|------|------|------|------|
|1000|0|1.916002897115854|36.1997226074896|
|1000|1|1.2941598492664295|62.2746185852982|
|1000|2|0.8201534710415117|76.97642163661581|
|1000|3|0.524806430051615|86.9625520110957|
|1000|4|0.30662027455784463|92.64909847434119|
## Validation Results
|n_examples|100|
|------|------|
|Accuracy Score|0.35|
|Precision (Macro)|0.35|
|Recall (Macro)|0.16|
|n_examples|500|
|------|------|
|Accuracy Score|0.62|
|Precision (Macro)|0.60|
|Recall (Macro)|0.47|
|n_examples|1000|
|------|------|
|Accuracy Score|0.68|
|Precision(Macro)|0.68|
|Recall (Macro)|0.64|

|
MMG/mlm-spanish-roberta-base | 60432d13849407dfab272feca531865d57989279 | 2021-08-06T09:18:26.000Z | [
"pytorch",
"roberta",
"fill-mask",
"es",
"transformers",
"autotrain_compatible"
] | fill-mask | false | MMG | null | MMG/mlm-spanish-roberta-base | 31 | 1 | transformers | 7,057 | ---
language:
- es
widget:
- text: "MMG se dedica a la <mask> artificial."
---
# mlm-spanish-roberta-base
This model has a RoBERTa base architecture and was trained from scratch with 3.6 GB of raw text over 10 epochs. 4 Tesla V-100 GPUs were used for the training.
To test the quality of the resulting model, we evaluated it on the [GLUES](https://github.com/dccuchile/GLUES) benchmark for Spanish NLU. The results are the following:
| Task | Score (metric) |
|:-----------------------:|:---------------------:|
| XNLI | 71.99 (accuracy) |
| Paraphrasing | 74.85 (accuracy) |
| NER | 85.34 (F1) |
| POS | 97.49 (accuracy) |
| Dependency Parsing | 85.14/81.08 (UAS/LAS) |
| Document Classification | 93.00 (accuracy) |
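A minimal fill-mask sketch (the example sentence is the widget text above):
```python
from transformers import pipeline
# Load the Spanish RoBERTa model for masked-token prediction
fill_mask = pipeline("fill-mask", model="MMG/mlm-spanish-roberta-base")
# Print the top predicted tokens and their scores
for prediction in fill_mask("MMG se dedica a la <mask> artificial."):
    print(prediction["token_str"], round(prediction["score"], 3))
```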
|
NTUYG/SOTitle-csharp-BART | 03cd470aea8744f0af23ece2c24b2e30bb64db03 | 2021-06-13T17:33:05.000Z | [
"pytorch",
"jax",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | NTUYG | null | NTUYG/SOTitle-csharp-BART | 31 | null | transformers | 7,058 | Entry not found |
Narsil/tiny-distilbert-sequence-classification | 39367a0b6b79c45362261fb6dfc738a910d06ce0 | 2021-07-29T13:20:56.000Z | [
"pytorch",
"tf",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | Narsil | null | Narsil/tiny-distilbert-sequence-classification | 31 | 1 | transformers | 7,059 | Entry not found |
NbAiLab/roberta_NCC_des_128_decayfrom200 | c26e2875ad97beec22cb1984cf0a64a3a2ff08d6 | 2022-01-15T00:11:52.000Z | [
"pytorch",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | NbAiLab | null | NbAiLab/roberta_NCC_des_128_decayfrom200 | 31 | null | transformers | 7,060 | Just for performing some experiments. Do not use.
|
Noricum/wav2vec2-large-xlsr-53-german | 53f748d205e5eda1c055555a6a408e5902ee17b3 | 2022-03-08T13:44:49.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
] | automatic-speech-recognition | false | Noricum | null | Noricum/wav2vec2-large-xlsr-53-german | 31 | null | transformers | 7,061 | # Wav2vec2 German Model
This model has been fine-tuned from wav2vec2-large-xlsr-53 on the German CommonVoice dataset.
It achieves a WER of 11.26 on the full test dataset.
It was basically trained with the code provided by [Max Idahl](https://huggingface.co/maxidl/wav2vec2-large-xlsr-german) with small adjustments in data preprocessing and on training parameters.
You can use it to transcribe your own files with the following code. Please note that your input file must be a *.wav file, sampled at 16 kHz and single channel. To convert an audio file with ffmpeg, use: "ffmpeg -i input.wav -ar 16000 -ac 1 output.wav". The transcription process is very memory-consuming (around 10 GB per 10 seconds). If the script ends with "Killed", the Python interpreter ran out of memory. In this case, try a shorter audio file.
```python
# !pip3 install transformers torch soundfile
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer
# load pretrained model
tokenizer = Wav2Vec2Tokenizer.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")
#load audio
audio_input, _ = sf.read("/path/to/your/audio.wav")
# transcribe
input_values = tokenizer(audio_input, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)[0]
print(str(transcription))
```
To evaluate the model on the full CommonVoice test dataset, run this script:
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "de", split="test") # use "test[:1%]" for 1% sample
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=4) # batch_size=8 -> requires ~14.5GB GPU memory
# Chunked version, see https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/5:
import jiwer
def chunked_wer(targets, predictions, chunk_size=None):
if chunk_size is None: return jiwer.wer(targets, predictions)
start = 0
end = chunk_size
H, S, D, I = 0, 0, 0, 0
while start < len(targets):
chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
H = H + chunk_metrics["hits"]
S = S + chunk_metrics["substitutions"]
D = D + chunk_metrics["deletions"]
I = I + chunk_metrics["insertions"]
start += chunk_size
end += chunk_size
return float(S + D + I) / float(H + S + D)
print("Total (chunk_size=1000), WER: {:2f}".format(100 * chunked_wer(result["pred_strings"], result["sentence"], chunk_size=1000)))
```
Output: Total (chunk_size=1000), WER: 11.256522
|
Ulto/pythonCoPilot3 | adb0517ece979b3a5bf18414652a6184be54e935 | 2021-11-22T01:24:16.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"model-index"
] | text-generation | false | Ulto | null | Ulto/pythonCoPilot3 | 31 | null | transformers | 7,062 | ---
tags:
- generated_from_trainer
model-index:
- name: pythonCoPilot3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythonCoPilot3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm | e1926c3fa9f86395a3ded0adb6e576de9c199bb7 | 2022-03-23T18:28:35.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"sk",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anuragshas | null | anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm | 31 | null | transformers | 7,063 | ---
language:
- sk
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Slovak
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: sk
metrics:
- name: Test WER
type: wer
value: 18.609
- name: Test CER
type: cer
value: 5.488
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: sk
metrics:
- name: Test WER
type: wer
value: 40.548
- name: Test CER
type: cer
value: 17.733
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: sk
metrics:
- name: Test WER
type: wer
value: 44.1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Slovak
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - SK dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3067
- Wer: 0.2678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 60.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.175 | 2.41 | 400 | 4.6909 | 1.0 |
| 3.3785 | 4.82 | 800 | 3.3080 | 1.0 |
| 2.6964 | 7.23 | 1200 | 2.0651 | 1.1055 |
| 1.3008 | 9.64 | 1600 | 0.5845 | 0.6207 |
| 1.1185 | 12.05 | 2000 | 0.4195 | 0.4193 |
| 1.0252 | 14.46 | 2400 | 0.3824 | 0.3570 |
| 0.935 | 16.87 | 2800 | 0.3693 | 0.3462 |
| 0.8818 | 19.28 | 3200 | 0.3587 | 0.3318 |
| 0.8534 | 21.69 | 3600 | 0.3420 | 0.3180 |
| 0.8137 | 24.1 | 4000 | 0.3426 | 0.3130 |
| 0.7968 | 26.51 | 4400 | 0.3349 | 0.3102 |
| 0.7558 | 28.92 | 4800 | 0.3216 | 0.3019 |
| 0.7313 | 31.33 | 5200 | 0.3451 | 0.3060 |
| 0.7358 | 33.73 | 5600 | 0.3272 | 0.2967 |
| 0.718 | 36.14 | 6000 | 0.3315 | 0.2882 |
| 0.6991 | 38.55 | 6400 | 0.3299 | 0.2830 |
| 0.6529 | 40.96 | 6800 | 0.3140 | 0.2836 |
| 0.6225 | 43.37 | 7200 | 0.3128 | 0.2751 |
| 0.633 | 45.78 | 7600 | 0.3211 | 0.2774 |
| 0.5876 | 48.19 | 8000 | 0.3162 | 0.2764 |
| 0.588 | 50.6 | 8400 | 0.3082 | 0.2722 |
| 0.5915 | 53.01 | 8800 | 0.3120 | 0.2681 |
| 0.5798 | 55.42 | 9200 | 0.3133 | 0.2709 |
| 0.5736 | 57.83 | 9600 | 0.3086 | 0.2676 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4.dev0
- Tokenizers 0.11.0
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config sk --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config sk --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-xls-r-300m-sk-cv8-with-lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "sk", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => ""
```
### Eval results on Common Voice 8 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 26.707 | 18.609 | |
arianpasquali/distilbert-base-multilingual-cased-toxicity | b9f9177a7b8da0154817fe02cb7d3da511104838 | 2022-01-25T14:31:56.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
] | text-classification | false | arianpasquali | null | arianpasquali/distilbert-base-multilingual-cased-toxicity | 31 | 1 | transformers | 7,064 | Entry not found |
asahi417/tner-xlm-roberta-large-multiconer-multi | aedcade5597fd3e989bcca24831cb83f6c4e5b4c | 2022-01-25T22:56:45.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | asahi417 | null | asahi417/tner-xlm-roberta-large-multiconer-multi | 31 | null | transformers | 7,065 | Entry not found |
beomi/kykim-gpt3-kor-small_based_on_gpt2 | 92f2c3e2aeec328af28f87143ed8fef05a54dc1f | 2021-11-16T15:21:35.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"ko",
"transformers"
] | text-generation | false | beomi | null | beomi/kykim-gpt3-kor-small_based_on_gpt2 | 31 | 2 | transformers | 7,066 | ---
language: ko
---
# Bert base model for Korean
## Update
- Update at 2021.11.17 : Add Native Support for BERT Tokenizer (works with AutoTokenizer, pipeline)
---
* 70GB Korean text dataset and 42000 lower-cased subwords are used
* Check the model performance and other language models for Korean in [github](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import pipeline
pipe = pipeline('text-generation', model='beomi/kykim-gpt3-kor-small_based_on_gpt2')
print(pipe("안녕하세요! 오늘은"))
# [{'generated_text': '안녕하세요! 오늘은 제가 요즘 사용하고 있는 클렌징워터를 소개해드리려고 해요! 바로 이 제품!! 바로 이'}]
```
|
cardiffnlp/bertweet-base-stance-abortion | 0198c45ea89fe77f2acf1d5931635309b35ab04a | 2021-05-20T14:52:02.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/bertweet-base-stance-abortion | 31 | null | transformers | 7,067 | |
cardiffnlp/twitter-roberta-base-stance-feminist | 6e738d1ec8a26e17722e75fda280cfacb82340f7 | 2021-05-20T15:11:14.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"text-classification",
"transformers"
] | text-classification | false | cardiffnlp | null | cardiffnlp/twitter-roberta-base-stance-feminist | 31 | null | transformers | 7,068 | |
finiteautomata/bert-contextualized-hate-speech-es | 047fa04d69dd9461c733c34c4e1b59432c9e5c91 | 2021-05-19T16:51:14.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | finiteautomata | null | finiteautomata/bert-contextualized-hate-speech-es | 31 | null | transformers | 7,069 | Entry not found |
gooohjy/suicidal-electra | 41b4e633358dff27934ebce3aed500d2a940e8bf | 2022-03-30T12:18:23.000Z | [
"pytorch",
"electra",
"text-classification",
"transformers"
] | text-classification | false | gooohjy | null | gooohjy/suicidal-electra | 31 | 1 | transformers | 7,070 | # Suicidal-ELECTRA
This text classification model predicts whether a sequence of words is suicidal (1) or non-suicidal (0).
## Data
The model was trained on the [Suicide and Depression Dataset](https://www.kaggle.com/nikhileswarkomati/suicide-watch) obtained from Kaggle. The dataset was scraped from Reddit and consists of 232,074 rows equally distributed between 2 classes - suicide and non-suicide.
## Parameters
The model fine-tuning was conducted on 1 epoch, with batch size of 6, and learning rate of 0.00001. Due to limited computing resources and time, we were unable to scale up the number of epochs and batch size.
## Performance
The model has achieved the following results after fine-tuning on the aforementioned dataset:
- Accuracy: 0.9792
- Recall: 0.9788
- Precision: 0.9677
- F1 Score: 0.9732
## How to Use
Load the model via the transformers library:
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("gooohjy/suicidal-electra")
model = AutoModel.from_pretrained("gooohjy/suicidal-electra")
```
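For actual label prediction, a rough sketch along these lines (assuming the checkpoint also loads with a sequence-classification head; the 0/1 mapping follows the description above and the input text is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("gooohjy/suicidal-electra")
model = AutoModelForSequenceClassification.from_pretrained("gooohjy/suicidal-electra")
inputs = tokenizer("I feel fine today.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Per the description above: 1 = suicidal, 0 = non-suicidal
prediction = torch.argmax(logits, dim=-1).item()
print("suicidal" if prediction == 1 else "non-suicidal")
```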
## Resources
For more resources, including the source code, please refer to the GitHub repository [gohjiayi/suicidal-text-detection](https://github.com/gohjiayi/suicidal-text-detection/). |
huggingtweets/mattjope | e033072cd1c82b68d7cd0e4f0521f3fd9868922d | 2021-05-22T13:47:09.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/mattjope | 31 | null | transformers | 7,071 | ---
language: en
thumbnail: https://www.huggingtweets.com/mattjope/1616749400584/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1359792499104047106/Wur41M8Q_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Matt Jope 🤖 AI Bot </div>
<div style="font-size: 15px">@mattjope bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@mattjope's tweets](https://twitter.com/mattjope).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 827 |
| Retweets | 104 |
| Short tweets | 95 |
| Tweets kept | 628 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8z6lpq25/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mattjope's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2386axt1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2386axt1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mattjope')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/microsoft | 689c89d778efea163d0fc103428886c6d5660b50 | 2021-05-22T14:32:22.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/microsoft | 31 | null | transformers | 7,072 | ---
language: en
thumbnail: https://www.huggingtweets.com/microsoft/1609714866268/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1334505837147029504/dg_Twuy0_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Microsoft 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@microsoft bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@microsoft's tweets](https://twitter.com/microsoft).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3243</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>431</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>730</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2082</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3l9quqlq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @microsoft's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3nxetoau) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3nxetoau/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/microsoft'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
imvladikon/wav2vec2-xls-r-1b-hebrew | 027e6d57e40b4a5e9a29b67ad68f085a6a15c433 | 2022-03-24T11:51:10.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"he",
"transformers",
"robust-speech-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | imvladikon | null | imvladikon/wav2vec2-xls-r-1b-hebrew | 31 | null | transformers | 7,073 | ---
language:
- he
license: apache-2.0
tags:
- automatic-speech-recognition
- robust-speech-event
- he
- generated_from_trainer
- hf-asr-leaderboard
model-index:
- name: wav2vec2-xls-r-1b-hebrew
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-hebrew
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3533
- Wer: 0.2251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.3587 | 0.47 | 400 | 1.1883 | 0.8392 |
| 1.8377 | 0.95 | 800 | 0.8831 | 0.6852 |
| 1.7118 | 1.42 | 1200 | 0.8031 | 0.6566 |
| 1.6741 | 1.89 | 1600 | 0.7518 | 0.6104 |
| 1.6163 | 2.36 | 2000 | 0.6888 | 0.5591 |
| 1.5782 | 2.84 | 2400 | 0.6580 | 0.5165 |
| 1.5548 | 3.31 | 2800 | 0.6506 | 0.5184 |
| 1.5249 | 3.78 | 3200 | 0.6198 | 0.5028 |
| 1.5078 | 4.26 | 3600 | 0.5992 | 0.4932 |
| 1.4836 | 4.73 | 4000 | 0.5705 | 0.4651 |
| 1.4505 | 5.2 | 4400 | 0.5489 | 0.4508 |
| 1.4481 | 5.67 | 4800 | 0.5577 | 0.4562 |
| 1.4136 | 6.15 | 5200 | 0.5452 | 0.4371 |
| 1.3861 | 6.62 | 5600 | 0.5101 | 0.4087 |
| 1.3772 | 7.09 | 6000 | 0.4933 | 0.3951 |
| 1.3478 | 7.56 | 6400 | 0.4849 | 0.3922 |
| 1.3394 | 8.04 | 6800 | 0.4805 | 0.3892 |
| 1.3095 | 8.51 | 7200 | 0.4839 | 0.3834 |
| 1.306 | 8.98 | 7600 | 0.4611 | 0.3587 |
| 1.2707 | 9.46 | 8000 | 0.4545 | 0.3730 |
| 1.2626 | 9.93 | 8400 | 0.4516 | 0.3524 |
| 1.2412 | 10.4 | 8800 | 0.4314 | 0.3310 |
| 1.2456 | 10.87 | 9200 | 0.4401 | 0.3459 |
| 1.2081 | 11.35 | 9600 | 0.4399 | 0.3356 |
| 1.1998 | 11.82 | 10000 | 0.4195 | 0.3215 |
| 1.1826 | 12.29 | 10400 | 0.4221 | 0.3178 |
| 1.1573 | 12.77 | 10800 | 0.4098 | 0.3084 |
| 1.1416 | 13.24 | 11200 | 0.4086 | 0.3119 |
| 1.1174 | 13.71 | 11600 | 0.3854 | 0.2910 |
| 1.1048 | 14.18 | 12000 | 0.3859 | 0.2824 |
| 1.0748 | 14.66 | 12400 | 0.3854 | 0.2757 |
| 1.0697 | 15.13 | 12800 | 0.3740 | 0.2724 |
| 1.0477 | 15.6 | 13200 | 0.3693 | 0.2643 |
| 1.0356 | 16.08 | 13600 | 0.3727 | 0.2561 |
| 1.0083 | 16.55 | 14000 | 0.3652 | 0.2501 |
| 1.0 | 17.02 | 14400 | 0.3641 | 0.2457 |
| 0.9779 | 17.49 | 14800 | 0.3568 | 0.2409 |
| 0.9596 | 17.97 | 15200 | 0.3558 | 0.2376 |
| 0.946 | 18.44 | 15600 | 0.3591 | 0.2311 |
| 0.9389 | 18.91 | 16000 | 0.3540 | 0.2283 |
| 0.9173 | 19.39 | 16400 | 0.3552 | 0.2265 |
| 0.9122 | 19.86 | 16800 | 0.3535 | 0.2250 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese | fbb87b1aedfc9232dc50f7d5f230d3bd14943e52 | 2021-07-06T06:20:06.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"as",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | infinitejoy | null | infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese | 31 | null | transformers | 7,074 | ---
language: as
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Joydeep Bhattacharjee XLSR Wav2Vec2 Large 53 Assamese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice as
type: common_voice
args: as
metrics:
- name: Test WER
type: wer
value: 69.63
---
# Wav2Vec2-Large-XLSR-53-Assamese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "as", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Assamese test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "as", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model = Wav2Vec2ForCTC.from_pretrained("infinitejoy/Wav2Vec2-Large-XLSR-53-Assamese")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\।]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub('’ ',' ',batch["sentence"])
batch["sentence"] = re.sub(' ‘',' ',batch["sentence"])
batch["sentence"] = re.sub('’|‘','\'',batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.63 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
iocust/horos_gpt_neo | c667b5e6c71f696a75a8a4099a82fab52fd8427b | 2021-07-13T11:41:17.000Z | [
"pytorch",
"gpt_neo",
"text-generation",
"transformers"
] | text-generation | false | iocust | null | iocust/horos_gpt_neo | 31 | null | transformers | 7,075 | Entry not found |
jky594176/recipe_BART1 | a9f20d1134f5682e3a1b078476ccce35e9675eb7 | 2021-05-30T15:15:52.000Z | [
"pytorch",
"bart",
"text-generation",
"transformers"
] | text-generation | false | jky594176 | null | jky594176/recipe_BART1 | 31 | null | transformers | 7,076 | Entry not found |
jonatasgrosman/wav2vec2-large-xlsr-53-greek | fd831ad49d7bef2a461a6e46536989bca94e5489 | 2022-07-27T23:34:34.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"el",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/wav2vec2-large-xlsr-53-greek | 31 | null | transformers | 7,077 | ---
language: el
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Greek by Jonatas Grosman
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 11.62
- name: Test CER
type: cer
value: 3.36
---
# Fine-tuned XLSR-53 large model for speech recognition in Greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-greek")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "el"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-greek"
SAMPLES = 5
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| ΤΟ ΒΑΣΙΛΌΠΟΥΛΟ, ΠΟΥ ΜΟΙΆΖΕΙ ΛΕΟΝΤΑΡΆΚΙ ΚΑΙ ΑΕΤΟΥΔΆΚΙ | ΤΟ ΒΑΣΙΛΌΠΟΥΛΟ ΠΟΥ ΜΙΑΣΕ ΛΙΟΝΤΑΡΑΚΉ ΚΑΙ ΑΪΤΟΥΔΆΚΙ |
| ΣΥΝΆΜΑ ΞΕΠΡΌΒΑΛΑΝ ΑΠΌ ΜΈΣΑ ΑΠΌ ΤΑ ΔΈΝΤΡΑ, ΔΕΞΙΆ, ΑΡΜΑΤΩΜΈΝΟΙ ΚΑΒΑΛΑΡΈΟΙ. | ΣΥΝΆΜΑ ΚΑΙ ΤΡΌΒΑΛΑΝ ΑΠΌ ΜΈΣΑ ΑΠΌ ΤΑ ΔΈΝΤΡΑ ΔΕΞΙΆ ΑΡΜΑΤΩΜΈΝΟΙ ΚΑΒΑΛΑΡΈΟΙ |
| ΤΑ ΣΥΣΚΕΥΑΣΜΈΝΑ ΒΙΟΛΟΓΙΚΆ ΛΑΧΑΝΙΚΆ ΔΕΝ ΠΕΡΙΈΧΟΥΝ ΣΥΝΤΗΡΗΤΙΚΆ ΚΑΙ ΟΡΜΌΝΕΣ | ΤΑ ΣΥΣΚΕΦΑΣΜΈΝΑ ΒΙΟΛΟΓΙΚΆ ΛΑΧΑΝΙΚΆ ΔΕΝ ΠΕΡΙΈΧΟΥΝ ΣΙΔΗΡΗΤΙΚΆ ΚΑΙ ΟΡΜΌΝΕΣ |
| ΑΚΟΛΟΥΘΉΣΕΤΕ ΜΕ! | ΑΚΟΛΟΥΘΉΣΤΕ ΜΕ |
| ΚΑΙ ΠΟΎ ΜΠΟΡΏ ΝΑ ΤΟΝ ΒΡΩ; | Ε ΠΟΎ ΜΠΟΡΏ ΝΑ ΤΙ ΕΒΡΩ |
| ΝΑΙ! ΑΠΟΚΡΊΘΗΚΕ ΤΟ ΠΑΙΔΊ | ΝΑΙ ΑΠΟΚΡΊΘΗΚΕ ΤΟ ΠΑΙΔΊ |
| ΤΟ ΠΑΛΆΤΙ ΜΟΥ ΤΟ ΠΡΟΜΉΘΕΥΕ. | ΤΟ ΠΑΛΆΤΙ ΜΟΥ ΤΟ ΠΡΟΜΉΘΕΥΕ |
| ΉΛΘΕ ΜΉΝΥΜΑ ΑΠΌ ΤΟ ΘΕΊΟ ΒΑΣΙΛΙΆ; | ΉΛΘΑ ΜΕΊΝΕΙ ΜΕ ΑΠΌ ΤΟ ΘΕΊΟ ΒΑΣΊΛΙΑ |
| ΠΑΡΑΚΆΤΩ, ΈΝΑ ΡΥΆΚΙ ΜΟΥΡΜΟΎΡΙΖΕ ΓΛΥΚΆ, ΚΥΛΏΝΤΑΣ ΤΑ ΚΡΥΣΤΑΛΛΈΝΙΑ ΝΕΡΆ ΤΟΥ ΑΝΆΜΕΣΑ ΣΤΑ ΠΥΚΝΆ ΧΑΜΌΔΕΝΤΡΑ. | ΠΑΡΑΚΆΤΩ ΈΝΑ ΡΥΆΚΙ ΜΟΥΡΜΟΎΡΙΖΕ ΓΛΥΚΆ ΚΥΛΏΝΤΑΣ ΤΑ ΚΡΥΣΤΑΛΛΈΝΙΑ ΝΕΡΆ ΤΟΥ ΑΝΆΜΕΣΑ ΣΤΑ ΠΥΚΡΆ ΧΑΜΌΔΕΝΤΡΑ |
| ΠΡΆΓΜΑΤΙ, ΕΊΝΑΙ ΑΣΤΕΊΟ ΝΑ ΠΆΡΕΙ Ο ΔΙΆΒΟΛΟΣ | ΠΡΆΓΜΑΤΗ ΕΊΝΑΙ ΑΣΤΕΊΟ ΝΑ ΠΆΡΕΙ Ο ΔΙΆΒΟΛΟΣ |
## Evaluation
The model can be evaluated as follows on the Greek test data of Common Voice.
```python
import torch
import re
import warnings
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "el"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-greek"
DEVICE = "cuda"
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\\\", "º", "−", "^", "ʻ", "ˆ"]
test_dataset = load_dataset("common_voice", LANG_ID, split="test")
wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py
cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.to(DEVICE)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
predictions = [x.upper() for x in result["pred_strings"]]
references = [x.upper() for x in result["sentence"]]
print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
```
**Test Result**:
In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2021-04-22). Note that the table below may show different results from those already reported, this may have been caused due to some specificity of the other evaluation scripts used.
| Model | WER | CER |
| ------------- | ------------- | ------------- |
| lighteternal/wav2vec2-large-xlsr-53-greek | **10.13%** | **2.66%** |
| jonatasgrosman/wav2vec2-large-xlsr-53-greek | 11.62% | 3.36% |
| vasilis/wav2vec2-large-xlsr-53-greek | 19.09% | 5.88% |
| PereLluis13/wav2vec2-large-xlsr-53-greek | 20.16% | 5.71% |
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-greek,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {G}reek},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-greek}},
year={2021}
}
```
|
kmfoda/staging-pegasus-gmeetsamsum | 754f6d7c1bc561a173b6671fece11822e2082803 | 2022-02-02T14:34:58.000Z | [
"pytorch",
"pegasus",
"feature-extraction",
"en",
"arxiv:1912.08777",
"transformers",
"summarization"
] | summarization | false | kmfoda | null | kmfoda/staging-pegasus-gmeetsamsum | 31 | null | transformers | 7,078 | ---
language: en
tags:
- summarization
---
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
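A minimal usage sketch (added for convenience; the pipeline call and the meeting-style input below are assumptions, not part of the original card):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="kmfoda/staging-pegasus-gmeetsamsum")

# hypothetical meeting-style transcript
transcript = (
    "Anna: we need to finalize the budget by Friday. "
    "Ben: I'll send the updated numbers tomorrow morning. "
    "Anna: great, then we can review them in Thursday's call."
)
print(summarizer(transcript, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```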
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode newline character.
(*) the numbers of wikihow and big_patent datasets are not comparable because of change in tokenization and data:
- wikihow dataset contains newline characters which are useful for paragraph segmentation, while the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loses this information.
- we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- importance sentences are sampled using a 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode newline character.
### Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lighteternal/gpt2-finetuned-greek-small | 44ce2064df77b9cf528232a386182df8f980ca04 | 2021-05-23T08:32:03.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"el",
"transformers",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | lighteternal | null | lighteternal/gpt2-finetuned-greek-small | 31 | null | transformers | 7,079 |
---
language:
- el
tags:
- pytorch
- causal-lm
widget:
- text: "Το αγαπημένο μου μέρος είναι"
license: apache-2.0
---
# Greek (el) GPT2 model - small
<img src="https://huggingface.co/lighteternal/gpt2-finetuned-greek-small/raw/main/GPT2el.png" width="600"/>
#### A new version (recommended) trained on 5x more data is available at: https://huggingface.co/lighteternal/gpt2-finetuned-greek
### By the Hellenic Army Academy (SSE) and the Technical University of Crete (TUC)
* language: el
* licence: apache-2.0
* dataset: ~5GB of Greek corpora
* model: GPT2 (12-layer, 768-hidden, 12-heads, 117M parameters. OpenAI GPT-2 English model, finetuned for the Greek language)
* pre-processing: tokenization + BPE segmentation
### Model description
A text generation (autoregressive) model, using Hugging Face transformers and fastai, based on the English GPT-2 (small).

Finetuned with gradual layer unfreezing. This is a more efficient and sustainable alternative compared to training from scratch, especially for low-resource languages. 

Based on the work of Thomas Dehaene (ML6) for the creation of a Dutch GPT2: https://colab.research.google.com/drive/1Y31tjMkB8TqKKFlZ5OJ9fcMp3p8suvs4?usp=sharing
### How to use
```
from transformers import pipeline
model = "lighteternal/gpt2-finetuned-greek-small"
generator = pipeline(
'text-generation',
device=0,
model=f'{model}',
tokenizer=f'{model}')
text = "Μια φορά κι έναν καιρό"
print("\n".join([x.get("generated_text") for x in generator(
text,
max_length=len(text.split(" "))+15,
do_sample=True,
top_k=50,
repetition_penalty = 1.2,
add_special_tokens=False,
num_return_sequences=5,
temperature=0.95,
top_p=0.95)]))
```
## Training data
We used a small (~5GB) sample from a consolidated Greek corpus based on CC100, Wikimatrix, Tatoeba, Books, SETIMES and GlobalVoices. A bigger corpus is expected to provide better results (TODO).
### Acknowledgement
The research work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number:50, 2nd call)
Based on the work of Thomas Dehaene (ML6): https://blog.ml6.eu/dutch-gpt2-autoregressive-language-modelling-on-a-budget-cff3942dd020
|
mrm8488/bert-tiny2bert-tiny_shared-finetuned-wikisql | 88b44ad03e9239967134c48e307a64fc0df6cf4e | 2020-11-12T20:30:55.000Z | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | mrm8488 | null | mrm8488/bert-tiny2bert-tiny_shared-finetuned-wikisql | 31 | null | transformers | 7,080 | Entry not found |
mrm8488/bioclinicalBERT-finetuned-covid-papers | 9d5e3686566f2bc68799b26093e1bd0d35643bea | 2021-08-25T22:05:46.000Z | [
"pytorch",
"jax",
"bert",
"fill-mask",
"en",
"transformers",
"autotrain_compatible"
] | fill-mask | false | mrm8488 | null | mrm8488/bioclinicalBERT-finetuned-covid-papers | 31 | 1 | transformers | 7,081 | ---
language:
- en
widget:
- text: "Masks are [MASK] for preventing"
---
# BioclinicalBERT fine-tuned for MLM on COVID Papers
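Since the card stops at the title, here is a minimal fill-mask sketch; the pipeline usage and the example sentence (taken from the widget above) are assumptions rather than part of the original card.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mrm8488/bioclinicalBERT-finetuned-covid-papers")

# mirrors the widget example in the YAML header
for pred in fill_mask("Masks are [MASK] for preventing"):
    print(pred["token_str"], round(pred["score"], 3))
```
|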
mrm8488/gpt2-finetuned-reddit-tifu | 7b57f6cce4ebcbc31ae3dd778593ba245e18b695 | 2021-05-23T10:26:20.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | mrm8488 | null | mrm8488/gpt2-finetuned-reddit-tifu | 31 | 1 | transformers | 7,082 | Entry not found |
mrm8488/legalectra-base-spanish | a5a47b3ecb625bcde4e94394ac97cc5655dafccd | 2021-11-25T20:42:48.000Z | [
"pytorch",
"electra",
"pretraining",
"es",
"dataset:Spanish-legal-corpora",
"transformers",
"Spanish",
"Electra",
"Legal"
] | null | false | mrm8488 | null | mrm8488/legalectra-base-spanish | 31 | 3 | transformers | 7,083 | ---
language: es
tags:
- Spanish
- Electra
- Legal
datasets:
- Spanish-legal-corpora
---
## LEGALECTRA ⚖️
**LEGALECTRA** (base) is an ELECTRA-like model (discriminator in this case) trained on [a collection of Spanish legal-domain corpora](https://zenodo.org/record/5495529#.YZItp3vMLJw).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Training details
TBA
## Model details ⚙
|Name| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 768 |
|Params| 110M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.941|
|AUC | 0.794|
|Precision| |
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
TBA
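Until the official snippet is added, here is a minimal sketch of how an ELECTRA discriminator is typically queried with `transformers`. It assumes the checkpoint loads as a standard `ElectraForPreTraining` model; the example sentence is hypothetical.

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

model_name = "mrm8488/legalectra-base-spanish"
discriminator = ElectraForPreTraining.from_pretrained(model_name)
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)

sentence = "El juez dicta sentencia en el tribunal"  # hypothetical example

inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = discriminator(**inputs).logits

# scores close to 1 mean the discriminator flags the token as "fake" (replaced)
scores = torch.sigmoid(logits)[0].tolist()
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), scores):
    print(f"{token}\t{score:.3f}")
```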
## Acknowledgments
TBA
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain |
neuralspace-reverie/indic-transformers-hi-distilbert | d1aa35663a0dcfffc090b32c31e6907d4ffc82ca | 2020-12-11T21:57:21.000Z | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"hi",
"transformers",
"MaskedLM",
"Hindi",
"DistilBERT",
"Question-Answering",
"Token Classification",
"Text Classification",
"autotrain_compatible"
] | fill-mask | false | neuralspace-reverie | null | neuralspace-reverie/indic-transformers-hi-distilbert | 31 | 1 | transformers | 7,084 | ---
language:
- hi
tags:
- MaskedLM
- Hindi
- DistilBERT
- Question-Answering
- Token Classification
- Text Classification
---
# Indic-Transformers Hindi DistilBERT
## Model description
This is a DistilBERT language model pre-trained on ~10 GB of monolingual training corpus. The pre-training data was majorly taken from [OSCAR](https://oscar-corpus.com/).
This model can be fine-tuned on various downstream tasks like text-classification, POS-tagging, question-answering, etc. Embeddings from this model can also be used for feature-based training.
## Intended uses & limitations
#### How to use
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('neuralspace-reverie/indic-transformers-hi-distilbert')
model = AutoModel.from_pretrained('neuralspace-reverie/indic-transformers-hi-distilbert')
text = "आपका स्वागत हैं"
input_ids = tokenizer(text, return_tensors='pt')['input_ids']
out = model(input_ids)[0]
print(out.shape)
# out = [1, 5, 768]
```
#### Limitations and bias
The original language model has been trained using `PyTorch` and hence the use of `pytorch_model.bin` weights file is recommended. The h5 file for `Tensorflow` has been generated manually by commands suggested [here](https://huggingface.co/transformers/model_sharing.html).
|
oguzhanolm/loodos-bert-base-uncased-QA-fine-tuned | ec65d64835701d73081616a9ccea9b46e2a1c2d0 | 2022-02-22T18:22:01.000Z | [
"pytorch",
"bert",
"question-answering",
"tr",
"dataset:TQuAD",
"transformers",
"loodos-bert-base",
"TQuAD",
"model-index",
"autotrain_compatible"
] | question-answering | false | oguzhanolm | null | oguzhanolm/loodos-bert-base-uncased-QA-fine-tuned | 31 | null | transformers | 7,085 | ---
language: tr
tags:
- question-answering
- loodos-bert-base
- TQuAD
- tr
datasets:
- TQuAD
model-index:
- name: loodos-bert-base-uncased-QA-fine-tuned
results:
- task:
name: Question Answering
type: question-answering
dataset:
name: TQuAD
type: question-answering
args: tr
metrics:
- name: Accuracy
type: acc
value: 0.91
---
# Turkish SQuAD Model : Question Answering
I fine-tuned the Loodos Turkish BERT model on the TQuAD dataset for question answering. I chose "loodos/bert-base-turkish-uncased" as the base model because it gave the best results for Turkish classification in the "Auto-tagging of Short Conversational Sentences using Transformer Methods" research I conducted with my teammates, so I expected it to also perform well on question answering.
* Loodos-BERT-base-uncased: https://huggingface.co/loodos/bert-base-turkish-uncased
* TQuAD dataset: https://github.com/TQuad/turkish-nlp-qa-dataset
# Training Code
```
!python3 Turkish-QA.py \
--model_type bert \
--model_name_or_path loodos/bert-base-turkish-uncased \
--do_train \
--do_eval \
--train_file trainQ.json \
--predict_file dev1.json \
--per_gpu_train_batch_size 8 \
--learning_rate 5e-5 \
--num_train_epochs 6 \
--max_seq_length 384 \
--output_dir "./model"
```
# Example Usage
> Load Model
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
tokenizer = AutoTokenizer.from_pretrained("oguzhanolm/loodos-bert-base-uncased-QA-fine-tuned")
model = AutoModelForQuestionAnswering.from_pretrained("oguzhanolm/loodos-bert-base-uncased-QA-fine-tuned")
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
```
> Apply the model
```
def ask(question,context):
temp = nlp(question=question, context=context)
start_idx = temp["start"]
end_idx = temp["end"]
return context[start_idx:end_idx]
istanbul="İstanbul, Türkiye'de Marmara Bölgesi'nde yer alan şehir ve Türkiye Cumhuriyeti Devletinin 81 ilinden biridir. Ülkenin nüfus bakımından en çok göç alan ve en kalabalık ilidir. Ekonomik, tarihî ve sosyo-kültürel açıdan önde gelen şehirlerden biridir. Şehir, iktisadi büyüklük açısından dünyada 34. sırada yer alır. Nüfuslarına göre şehirler listesinde belediye sınırları göz önüne alınarak yapılan sıralamaya göre Avrupa'da birinci, dünyada ise altıncı sırada yer almaktadır."
soru1 = "İstanbul büyüklük açısından kaçıncı sıradadır?"
print(ask(soru1,istanbul))
soru2 = "İstanbul nerede bulunur?"
print(ask(soru2,istanbul))
``` |
progg/shopping-list-ner | e466d8e7834b27008d8a3bc801d7d06766f5a1cc | 2021-03-01T09:52:13.000Z | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | progg | null | progg/shopping-list-ner | 31 | null | transformers | 7,086 | Entry not found |
qanastek/pos-french-camembert-flair | ceef7e8e36f3d92621dfefe3c77f94a26c50d3bf | 2022-07-06T23:49:12.000Z | [
"pytorch",
"fr",
"dataset:qanastek/ANTILLES",
"arxiv:1911.03894",
"flair",
"token-classification",
"sequence-tagger-model"
] | token-classification | false | qanastek | null | qanastek/pos-french-camembert-flair | 31 | 1 | flair | 7,087 | ---
tags:
- flair
- token-classification
- sequence-tagger-model
language: fr
datasets:
- qanastek/ANTILLES
widget:
- text: "George Washington est allé à Washington"
---
# POET: A French Extended Part-of-Speech Tagger
- Corpora: [ANTILLES](https://github.com/qanastek/ANTILLES)
- Embeddings: [Flair](https://aclanthology.org/C18-1139.pdf) & [CamemBERT](https://arxiv.org/abs/1911.03894)
- Sequence Labelling: [Bi-LSTM-CRF](https://arxiv.org/abs/1011.4088)
- Number of Epochs: 50
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
* [DUFOUR Richard](https://cv.archives-ouvertes.fr/richard-dufour) (2)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
2. [LS2N, TALN team](https://www.ls2n.fr/equipe/taln/), Nantes University, Nantes, France.
## Demo: How to use in Flair
Requires [Flair](https://pypi.org/project/flair/): ```pip install flair```
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# Load the model
model = SequenceTagger.load("qanastek/pos-french")
sentence = Sentence("George Washington est allé à Washington")
# predict tags
model.predict(sentence)
# print predicted pos tags
print(sentence.to_tagged_string())
```
Output:

## Training data
`ANTILLES` is a part-of-speech tagging corpora based on [UD_French-GSD](https://universaldependencies.org/treebanks/fr_gsd/index.html) which was originally created in 2015 and is based on the [universal dependency treebank v2.0](https://github.com/ryanmcd/uni-dep-tb).
Originally, the corpora consists of 400,399 words (16,341 sentences) and had 17 different classes. Now, after applying our tags augmentation we obtain 60 different classes which add linguistic and semantic information such as the gender, number, mood, person, tense or verb form given in the different CoNLL-03 fields from the original corpora.
We based our tags on the level of details given by the [LIA_TAGG](http://pageperso.lif.univ-mrs.fr/frederic.bechet/download.html) statistical POS tagger written by [Frédéric Béchet](http://pageperso.lif.univ-mrs.fr/frederic.bechet/index-english.html) in 2001.
The corpora used for this model is available on [Github](https://github.com/qanastek/ANTILLES) at the [CoNLL-U format](https://universaldependencies.org/format.html).
Training data are fed to the model as free language and doesn't pass a normalization phase. Thus, it's made the model case and punctuation sensitive.
## Original Tags
```plain
PRON VERB SCONJ ADP CCONJ DET NOUN ADJ AUX ADV PUNCT PROPN NUM SYM PART X INTJ
```
## New additional POS tags
| Abbreviation | Description | Examples |
|:--------:|:--------:|:--------:|
| PREP | Preposition | de |
| AUX | Auxiliary Verb | est |
| ADV | Adverb | toujours |
| COSUB | Subordinating conjunction | que |
| COCO | Coordinating Conjunction | et |
| PART | Demonstrative particle | -t |
| PRON | Pronoun | qui ce quoi |
| PDEMMS | Demonstrative Pronoun - Singular Masculine | ce |
| PDEMMP | Demonstrative Pronoun - Plural Masculine | ceux |
| PDEMFS | Demonstrative Pronoun - Singular Feminine | cette |
| PDEMFP | Demonstrative Pronoun - Plural Feminine | celles |
| PINDMS | Indefinite Pronoun - Singular Masculine | tout |
| PINDMP | Indefinite Pronoun - Plural Masculine | autres |
| PINDFS | Indefinite Pronoun - Singular Feminine | chacune |
| PINDFP | Indefinite Pronoun - Plural Feminine | certaines |
| PROPN | Proper noun | Houston |
| XFAMIL | Last name | Levy |
| NUM | Numerical Adjective | trentaine vingtaine |
| DINTMS | Masculine Numerical Adjective | un |
| DINTFS | Feminine Numerical Adjective | une |
| PPOBJMS | Pronoun complements of objects - Singular Masculine | le lui |
| PPOBJMP | Pronoun complements of objects - Plural Masculine | eux y |
| PPOBJFS | Pronoun complements of objects - Singular Feminine | moi la |
| PPOBJFP | Pronoun complements of objects - Plural Feminine | en y |
| PPER1S | Personal Pronoun First-Person - Singular | je |
| PPER2S | Personal Pronoun Second-Person - Singular | tu |
| PPER3MS | Personal Pronoun Third-Person - Singular Masculine | il |
| PPER3MP | Personal Pronoun Third-Person - Plural Masculine | ils |
| PPER3FS | Personal Pronoun Third-Person - Singular Feminine | elle |
| PPER3FP | Personal Pronoun Third-Person - Plural Feminine | elles |
| PREFS | Reflexive Pronoun First-Person - Singular | me m' |
| PREF | Reflexive Pronoun Third-Person - Singular | se s' |
| PREFP | Reflexive Pronoun First / Second-Person - Plural | nous vous |
| VERB | Verb | obtient |
| VPPMS | Past Participle - Singular Masculine | formulé |
| VPPMP | Past Participle - Plural Masculine | classés |
| VPPFS | Past Participle - Singular Feminine | appelée |
| VPPFP | Past Participle - Plural Feminine | sanctionnées |
| DET | Determinant | les l' |
| DETMS | Determinant - Singular Masculine | les |
| DETFS | Determinant - Singular Feminine | la |
| ADJ | Adjective | capable sérieux |
| ADJMS | Adjective - Singular Masculine | grand important |
| ADJMP | Adjective - Plural Masculine | grands petits |
| ADJFS | Adjective - Singular Feminine | française petite |
| ADJFP | Adjective - Plural Feminine | légères petites |
| NOUN | Noun | temps |
| NMS | Noun - Singular Masculine | drapeau |
| NMP | Noun - Plural Masculine | journalistes |
| NFS | Noun - Singular Feminine | tête |
| NFP | Noun - Plural Feminine | ondes |
| PREL | Relative Pronoun | qui dont |
| PRELMS | Relative Pronoun - Singular Masculine | lequel |
| PRELMP | Relative Pronoun - Plural Masculine | lesquels |
| PRELFS | Relative Pronoun - Singular Feminine | laquelle |
| PRELFP | Relative Pronoun - Plural Feminine | lesquelles |
| INTJ | Interjection | merci bref |
| CHIF | Numbers | 1979 10 |
| SYM | Symbol | € % |
| YPFOR | Endpoint | . |
| PUNCT | Ponctuation | : , |
| MOTINC | Unknown words | Technology Lady |
| X | Typos & others | sfeir 3D statu |
## Evaluation results
The test corpora used for this evaluation is available on [Github](https://github.com/qanastek/ANTILLES/blob/main/ANTILLES/test.conllu).
```plain
Results:
- F-score (micro) 0.9797
- F-score (macro) 0.9178
- Accuracy 0.9797
By class:
precision recall f1-score support
PREP 0.9966 0.9987 0.9976 1483
PUNCT 1.0000 1.0000 1.0000 833
NMS 0.9634 0.9801 0.9717 753
DET 0.9923 0.9984 0.9954 645
VERB 0.9913 0.9811 0.9862 583
NFS 0.9667 0.9839 0.9752 560
ADV 0.9940 0.9821 0.9880 504
PROPN 0.9541 0.8937 0.9229 395
DETMS 1.0000 1.0000 1.0000 362
AUX 0.9860 0.9915 0.9888 355
YPFOR 1.0000 1.0000 1.0000 353
NMP 0.9666 0.9475 0.9570 305
COCO 0.9959 1.0000 0.9980 245
ADJMS 0.9463 0.9385 0.9424 244
DETFS 1.0000 1.0000 1.0000 240
CHIF 0.9648 0.9865 0.9755 222
NFP 0.9515 0.9849 0.9679 199
ADJFS 0.9657 0.9286 0.9468 182
VPPMS 0.9387 0.9745 0.9563 157
COSUB 1.0000 0.9844 0.9921 128
DINTMS 0.9918 0.9918 0.9918 122
XFAMIL 0.9298 0.9217 0.9258 115
PPER3MS 1.0000 1.0000 1.0000 87
ADJMP 0.9294 0.9634 0.9461 82
PDEMMS 1.0000 1.0000 1.0000 75
ADJFP 0.9861 0.9342 0.9595 76
PREL 0.9859 1.0000 0.9929 70
DINTFS 0.9839 1.0000 0.9919 61
PREF 1.0000 1.0000 1.0000 52
PPOBJMS 0.9565 0.9362 0.9462 47
PREFP 0.9778 1.0000 0.9888 44
PINDMS 1.0000 0.9773 0.9885 44
VPPFS 0.8298 0.9750 0.8966 40
PPER1S 1.0000 1.0000 1.0000 42
SYM 1.0000 0.9474 0.9730 38
NOUN 0.8824 0.7692 0.8219 39
PRON 1.0000 0.9677 0.9836 31
PDEMFS 1.0000 1.0000 1.0000 29
VPPMP 0.9286 1.0000 0.9630 26
ADJ 0.9524 0.9091 0.9302 22
PPER3MP 1.0000 1.0000 1.0000 20
VPPFP 1.0000 1.0000 1.0000 19
PPER3FS 1.0000 1.0000 1.0000 18
MOTINC 0.3333 0.4000 0.3636 15
PREFS 1.0000 1.0000 1.0000 10
PPOBJMP 1.0000 0.8000 0.8889 10
PPOBJFS 0.6250 0.8333 0.7143 6
INTJ 0.5000 0.6667 0.5714 6
PART 1.0000 1.0000 1.0000 4
PDEMMP 1.0000 1.0000 1.0000 3
PDEMFP 1.0000 1.0000 1.0000 3
PPER3FP 1.0000 1.0000 1.0000 2
NUM 1.0000 0.3333 0.5000 3
PPER2S 1.0000 1.0000 1.0000 2
PPOBJFP 0.5000 0.5000 0.5000 2
PRELMS 1.0000 1.0000 1.0000 2
PINDFS 0.5000 1.0000 0.6667 1
PINDMP 1.0000 1.0000 1.0000 1
X 0.0000 0.0000 0.0000 1
PINDFP 1.0000 1.0000 1.0000 1
micro avg 0.9797 0.9797 0.9797 10019
macro avg 0.9228 0.9230 0.9178 10019
weighted avg 0.9802 0.9797 0.9798 10019
samples avg 0.9797 0.9797 0.9797 10019
```
## BibTeX Citations
Please cite the following paper when using this model.
ANTILLES corpus and POET taggers:
```latex
@inproceedings{labrak:hal-03696042,
TITLE = {{ANTILLES: An Open French Linguistically Enriched Part-of-Speech Corpus}},
AUTHOR = {Labrak, Yanis and Dufour, Richard},
URL = {https://hal.archives-ouvertes.fr/hal-03696042},
BOOKTITLE = {{25th International Conference on Text, Speech and Dialogue (TSD)}},
ADDRESS = {Brno, Czech Republic},
PUBLISHER = {{Springer}},
YEAR = {2022},
MONTH = Sep,
KEYWORDS = {Part-of-speech corpus ; POS tagging ; Open tools ; Word embeddings ; Bi-LSTM ; CRF ; Transformers},
PDF = {https://hal.archives-ouvertes.fr/hal-03696042/file/ANTILLES_A_freNch_linguisTIcaLLy_Enriched_part_of_Speech_corpus.pdf},
HAL_ID = {hal-03696042},
HAL_VERSION = {v1},
}
```
UD_French-GSD corpora:
```latex
@misc{
universaldependencies,
title={UniversalDependencies/UD_French-GSD},
url={https://github.com/UniversalDependencies/UD_French-GSD}, journal={GitHub},
author={UniversalDependencies}
}
```
LIA TAGG:
```latex
@techreport{LIA_TAGG,
author = {Frédéric Béchet},
title = {LIA_TAGG: a statistical POS tagger + syntactic bracketer},
institution = {Aix-Marseille University & CNRS},
year = {2001}
}
```
Flair Embeddings:
```latex
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
## Acknowledgment
This work was financially supported by [Zenidoc](https://zenidoc.fr/)
|
soikit/chinese-bert-wwm-chinese_bert_wwm3 | 187d2eb60011e10d706a762f105378146aa298d2 | 2021-10-22T05:09:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | text-generation | false | soikit | null | soikit/chinese-bert-wwm-chinese_bert_wwm3 | 31 | null | transformers | 7,088 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: chinese-bert-wwm-chinese_bert_wwm3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-bert-wwm-chinese_bert_wwm3
This model is a fine-tuned version of [hfl/chinese-bert-wwm](https://huggingface.co/hfl/chinese-bert-wwm) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 72 | 0.4251 |
| No log | 2.0 | 144 | 0.0282 |
| No log | 3.0 | 216 | 0.0048 |
| No log | 4.0 | 288 | 0.0018 |
| No log | 5.0 | 360 | 0.0011 |
| No log | 6.0 | 432 | 0.0006 |
| 0.483 | 7.0 | 504 | 0.0004 |
| 0.483 | 8.0 | 576 | 0.0004 |
| 0.483 | 9.0 | 648 | 0.0002 |
| 0.483 | 10.0 | 720 | 0.0002 |
| 0.483 | 11.0 | 792 | 0.0002 |
| 0.483 | 12.0 | 864 | 0.0001 |
| 0.483 | 13.0 | 936 | 0.0001 |
| 0.0031 | 14.0 | 1008 | 0.0001 |
| 0.0031 | 15.0 | 1080 | 0.0001 |
| 0.0031 | 16.0 | 1152 | 0.0001 |
| 0.0031 | 17.0 | 1224 | 0.0001 |
| 0.0031 | 18.0 | 1296 | 0.0001 |
| 0.0031 | 19.0 | 1368 | 0.0001 |
| 0.0031 | 20.0 | 1440 | 0.0001 |
| 0.0015 | 21.0 | 1512 | 0.0001 |
| 0.0015 | 22.0 | 1584 | 0.0001 |
| 0.0015 | 23.0 | 1656 | 0.0001 |
| 0.0015 | 24.0 | 1728 | 0.0001 |
| 0.0015 | 25.0 | 1800 | 0.0000 |
| 0.0015 | 26.0 | 1872 | 0.0001 |
| 0.0015 | 27.0 | 1944 | 0.0000 |
| 0.001 | 28.0 | 2016 | 0.0000 |
| 0.001 | 29.0 | 2088 | 0.0000 |
| 0.001 | 30.0 | 2160 | 0.0000 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.13.3
- Tokenizers 0.10.3
|
stanford-crfm/beren-gpt2-medium-x49 | fcf9ff1254e0d5c87de2fbc88f606e7a56201f22 | 2022-06-20T11:13:40.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | stanford-crfm | null | stanford-crfm/beren-gpt2-medium-x49 | 31 | null | transformers | 7,089 | Entry not found |
thunlp/neuba-roberta | 55a55fffdd35a65833cd06a6f6062866f1fcb24a | 2021-09-16T06:06:29.000Z | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | thunlp | null | thunlp/neuba-roberta | 31 | null | transformers | 7,090 | Entry not found |
vuiseng9/bert-base-squadv1 | eab1115060d076a1a54703c7105813dfd17c6300 | 2022-01-19T19:03:57.000Z | [
"pytorch",
"onnx",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | vuiseng9 | null | vuiseng9/bert-base-squadv1 | 31 | null | transformers | 7,091 | This model is a fork of [```csarron/bert-base-uncased-squad-v1```](https://huggingface.co/csarron/bert-base-uncased-squad-v1).
```
eval_exact_match = 80.9082
eval_f1 = 88.2275
eval_samples = 10784
```
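For completeness, a minimal inference sketch (not part of the original card), assuming the checkpoint works with the standard `question-answering` pipeline; the question/context pair is hypothetical:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-base-squadv1")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This checkpoint is a BERT base model fine-tuned on SQuAD v1.1 for extractive question answering.",
)
print(result["answer"], round(result["score"], 3))
```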
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1 \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
chitanda/merit-albert-v2-xxlarge-v1 | 4622991a5822a369bb982958c5581680de2dfc68 | 2022-02-26T13:12:08.000Z | [
"pytorch",
"albert",
"transformers",
"license:mit"
] | null | false | chitanda | null | chitanda/merit-albert-v2-xxlarge-v1 | 31 | null | transformers | 7,092 | ---
license: mit
---
|
nlpaueb/sec-bert-num | eefb9538dfcca0f889d6c2fedb6549c1060a9e01 | 2022-04-28T14:46:16.000Z | [
"pytorch",
"tf",
"bert",
"pretraining",
"en",
"arxiv:2203.06482",
"transformers",
"finance",
"financial",
"license:cc-by-sa-4.0",
"fill-mask"
] | fill-mask | false | nlpaueb | null | nlpaueb/sec-bert-num | 31 | 4 | transformers | 7,093 | ---
language: en
pipeline_tag: fill-mask
license: cc-by-sa-4.0
thumbnail: https://i.ibb.co/0yz81K9/sec-bert-logo.png
tags:
- finance
- financial
widget:
- text: "Total net sales decreased [MASK]% or $[NUM] billion during [NUM] compared to [NUM]."
- text: "Total net sales decreased [NUM]% or $[MASK] billion during [NUM] compared to [NUM]."
- text: "Total net sales decreased [NUM]% or $[NUM] billion during [MASK] compared to [NUM]."
- text: "During [MASK], the Company repurchased $[NUM] billion of its common stock and paid dividend equivalents of $[NUM] billion."
- text: "During 2019, the Company repurchased $[MASK] billion of its common stock and paid dividend equivalents of $[NUM] billion."
---
# SEC-BERT
<img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="sec-bert-logo" width="400"/>
<div style="text-align: justify">
SEC-BERT is a family of BERT models for the financial domain, intended to assist financial NLP research and FinTech applications.
SEC-BERT consists of the following models:
* [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents.
* **SEC-BERT-NUM** (this model): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token, handling all numeric expressions in a uniform manner (disallowing their fragmentation).
* [**SEC-BERT-SHAPE**](https://huggingface.co/nlpaueb/sec-bert-shape): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.
</div>
## Pre-training corpus
The model was pre-trained on 260,773 10-K filings from 1993-2019, publicly available at <a href="https://www.sec.gov/">U.S. Securities and Exchange Commission (SEC)</a>
## Pre-training details
<div style="text-align: justify">
* We created a new vocabulary of 30k subwords by training a [BertWordPieceTokenizer](https://github.com/huggingface/tokenizers) from scratch on the pre-training corpus.
* We trained BERT using the official code provided in [Google BERT's GitHub repository](https://github.com/google-research/bert).
* We then used [Hugging Face](https://huggingface.co)'s [Transformers](https://github.com/huggingface/transformers) conversion script to convert the TF checkpoint in the desired format in order to be able to load the model in two lines of code for both PyTorch and TF2 users.
* We release a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TRC)](https://sites.research.google/trc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
</div>
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-num")
model = AutoModel.from_pretrained("nlpaueb/sec-bert-num")
```
## Pre-process Text
<div style="text-align: justify">
To use SEC-BERT-NUM, you have to pre-process texts replacing every numerical token with [NUM] pseudo-token.
Below there is an example of how you can pre-process a simple sentence. This approach is quite simple; feel free to modify it as you see fit.
</div>
```python
import re
import spacy
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/sec-bert-num")
spacy_tokenizer = spacy.load("en_core_web_sm")
sentence = "Total net sales decreased 2% or $5.4 billion during 2019 compared to 2018."
def sec_bert_num_preprocess(text):
tokens = [t.text for t in spacy_tokenizer(text)]
processed_text = []
for token in tokens:
if re.fullmatch(r"(\d+[\d,.]*)|([,.]\d+)", token):
processed_text.append('[NUM]')
else:
processed_text.append(token)
return ' '.join(processed_text)
tokenized_sentence = tokenizer.tokenize(sec_bert_num_preprocess(sentence))
print(tokenized_sentence)
"""
['total', 'net', 'sales', 'decreased', '[NUM]', '%', 'or', '$', '[NUM]', 'billion', 'during', '[NUM]', 'compared', 'to', '[NUM]', '.']
"""
```
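As a hedged follow-up (not from the original card), the masked LM can also be queried directly with the `fill-mask` pipeline on an already pre-processed sentence, mirroring the widget examples above:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nlpaueb/sec-bert-num")

# numbers are already replaced by [NUM]; one position is masked
masked_sentence = "Total net sales [MASK] [NUM] % or $ [NUM] billion during [NUM] compared to [NUM] ."
for pred in fill_mask(masked_sentence):
    print(pred["token_str"], round(pred["score"], 3))
```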
## Using SEC-BERT variants as Language Models
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales [MASK] 2% or $5.4 billion during 2019 compared to 2018. | decreased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | increased (0.221), were (0.131), are (0.103), rose (0.075), of (0.058)
| **SEC-BERT-BASE** | increased (0.678), decreased (0.282), declined (0.017), grew (0.016), rose (0.004)
| **SEC-BERT-NUM** | increased (0.753), decreased (0.211), grew (0.019), declined (0.010), rose (0.006)
| **SEC-BERT-SHAPE** | increased (0.747), decreased (0.214), grew (0.021), declined (0.013), rose (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 [MASK] during 2019 compared to 2018. | billion
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | billion (0.841), million (0.097), trillion (0.028), ##m (0.015), ##bn (0.006)
| **SEC-BERT-BASE** | million (0.972), billion (0.028), millions (0.000), ##million (0.000), m (0.000)
| **SEC-BERT-NUM** | million (0.974), billion (0.012), , (0.010), thousand (0.003), m (0.000)
| **SEC-BERT-SHAPE** | million (0.978), billion (0.021), % (0.000), , (0.000), millions (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased [MASK]% or $5.4 billion during 2019 compared to 2018. | 2
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 20 (0.031), 10 (0.030), 6 (0.029), 4 (0.027), 30 (0.027)
| **SEC-BERT-BASE** | 13 (0.045), 12 (0.040), 11 (0.040), 14 (0.035), 10 (0.035)
| **SEC-BERT-NUM** | [NUM] (1.000), one (0.000), five (0.000), three (0.000), seven (0.000)
| **SEC-BERT-SHAPE** | [XX] (0.316), [XX.X] (0.253), [X.X] (0.237), [X] (0.188), [X.XX] (0.002)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2[MASK] or $5.4 billion during 2019 compared to 2018. | %
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | % (0.795), percent (0.174), ##fold (0.009), billion (0.004), times (0.004)
| **SEC-BERT-BASE** | % (0.924), percent (0.076), points (0.000), , (0.000), times (0.000)
| **SEC-BERT-NUM** | % (0.882), percent (0.118), million (0.000), units (0.000), bps (0.000)
| **SEC-BERT-SHAPE** | % (0.961), percent (0.039), bps (0.000), , (0.000), bcf (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $[MASK] billion during 2019 compared to 2018. | 5.4
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 1 (0.074), 4 (0.045), 3 (0.044), 2 (0.037), 5 (0.034)
| **SEC-BERT-BASE** | 1 (0.218), 2 (0.136), 3 (0.078), 4 (0.066), 5 (0.048)
| **SEC-BERT-NUM** | [NUM] (1.000), l (0.000), 1 (0.000), - (0.000), 30 (0.000)
| **SEC-BERT-SHAPE** | [X.X] (0.787), [X.XX] (0.095), [XX.X] (0.049), [X.XXX] (0.046), [X] (0.013)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during [MASK] compared to 2018. | 2019
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.485), 2018 (0.169), 2016 (0.164), 2015 (0.070), 2014 (0.022)
| **SEC-BERT-BASE** | 2019 (0.990), 2017 (0.007), 2018 (0.003), 2020 (0.000), 2015 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), as (0.000), fiscal (0.000), year (0.000), when (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), as (0.000), year (0.000), periods (0.000), , (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| Total net sales decreased 2% or $5.4 billion during 2019 compared to [MASK]. | 2018
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | 2017 (0.100), 2016 (0.097), above (0.054), inflation (0.050), previously (0.037)
| **SEC-BERT-BASE** | 2018 (0.999), 2019 (0.000), 2017 (0.000), 2016 (0.000), 2014 (0.000)
| **SEC-BERT-NUM** | [NUM] (1.000), year (0.000), last (0.000), sales (0.000), fiscal (0.000)
| **SEC-BERT-SHAPE** | [XXXX] (1.000), year (0.000), sales (0.000), prior (0.000), years (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company [MASK] $67.1 billion of its common stock and paid dividend equivalents of $14.1 billion. | repurchased
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | held (0.229), sold (0.192), acquired (0.172), owned (0.052), traded (0.033)
| **SEC-BERT-BASE** | repurchased (0.913), issued (0.036), purchased (0.029), redeemed (0.010), sold (0.003)
| **SEC-BERT-NUM** | repurchased (0.917), purchased (0.054), reacquired (0.013), issued (0.005), acquired (0.003)
| **SEC-BERT-SHAPE** | repurchased (0.902), purchased (0.068), issued (0.010), reacquired (0.008), redeemed (0.006)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common [MASK] and paid dividend equivalents of $14.1 billion. | stock
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | stock (0.835), assets (0.039), equity (0.025), debt (0.021), bonds (0.017)
| **SEC-BERT-BASE** | stock (0.857), shares (0.135), equity (0.004), units (0.002), securities (0.000)
| **SEC-BERT-NUM** | stock (0.842), shares (0.157), equity (0.000), securities (0.000), units (0.000)
| **SEC-BERT-SHAPE** | stock (0.888), shares (0.109), equity (0.001), securities (0.001), stocks (0.000)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid [MASK] equivalents of $14.1 billion. | dividend
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | cash (0.276), net (0.128), annual (0.083), the (0.040), debt (0.027)
| **SEC-BERT-BASE** | dividend (0.890), cash (0.018), dividends (0.016), share (0.013), tax (0.010)
| **SEC-BERT-NUM** | dividend (0.735), cash (0.115), share (0.087), tax (0.025), stock (0.013)
| **SEC-BERT-SHAPE** | dividend (0.655), cash (0.248), dividends (0.042), share (0.019), out (0.003)
| Sample | Masked Token |
| --------------------------------------------------- | ------------ |
| During 2019, the Company repurchased $67.1 billion of its common stock and paid dividend [MASK] of $14.1 billion. | equivalents
| Model | Predictions (Probability) |
| --------------------------------------------------- | ------------ |
| **BERT-BASE-UNCASED** | revenue (0.085), earnings (0.078), rates (0.065), amounts (0.064), proceeds (0.062)
| **SEC-BERT-BASE** | payments (0.790), distributions (0.087), equivalents (0.068), cash (0.013), amounts (0.004)
| **SEC-BERT-NUM** | payments (0.845), equivalents (0.097), distributions (0.024), increases (0.005), dividends (0.004)
| **SEC-BERT-SHAPE** | payments (0.784), equivalents (0.093), distributions (0.043), dividends (0.015), requirements (0.009)
## Publication
<div style="text-align: justify">
If you use this model cite the following article:<br>
[**FiNER: Financial Numeric Entity Recognition for XBRL Tagging**](https://arxiv.org/abs/2203.06482)<br>
Lefteris Loukas, Manos Fergadiotis, Ilias Chalkidis, Eirini Spyropoulou, Prodromos Malakasiotis, Ion Androutsopoulos and George Paliouras<br>
In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022) (Long Papers), Dublin, Republic of Ireland, May 22 - 27, 2022
</div>
```
@inproceedings{loukas-etal-2022-finer,
title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging},
author = {Loukas, Lefteris and
Fergadiotis, Manos and
Chalkidis, Ilias and
Spyropoulou, Eirini and
Malakasiotis, Prodromos and
Androutsopoulos, Ion and
Paliouras George},
booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
publisher = {Association for Computational Linguistics},
location = {Dublin, Republic of Ireland},
year = {2022},
url = {https://arxiv.org/abs/2203.06482}
}
```
## About Us
<div style="text-align: justify">
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
</div>
[Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) |
nickmuchi/sec-bert-finetuned-finance-classification | e6f300a57b40c2944ae8da5f1159d0a7d55a2be6 | 2022-04-05T04:57:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:financial_phrasebank",
"dataset:Kaggle Self label",
"dataset:nickmuchi/financial-classification",
"transformers",
"financial-sentiment-analysis",
"sentiment-analysis",
"sentence_50agree",
"generated_from_trainer",
"financial",
"stocks",
"sentiment",
"license:cc-by-sa-4.0",
"model-index"
] | text-classification | false | nickmuchi | null | nickmuchi/sec-bert-finetuned-finance-classification | 31 | null | transformers | 7,094 | ---
license: cc-by-sa-4.0
tags:
- financial-sentiment-analysis
- sentiment-analysis
- sentence_50agree
- generated_from_trainer
- financial
- stocks
- sentiment
datasets:
- financial_phrasebank
- Kaggle Self label
- nickmuchi/financial-classification
metrics:
- accuracy
- f1
- precision
- recall
widget:
- text: "The USD rallied by 10% last night"
example_title: "Bullish Sentiment"
- text: "Covid-19 cases have been increasing over the past few months impacting earnings for global firms"
example_title: "Bearish Sentiment"
- text: "the USD has been trending lower"
example_title: "Mildly Bearish Sentiment"
model-index:
- name: sec-bert-finetuned-finance-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sec-bert-finetuned-finance-classification
This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the sentence_50Agree configuration of the [financial-phrasebank + Kaggle dataset](https://huggingface.co/datasets/nickmuchi/financial-classification), a dataset of 4840 financial news sentences categorised by sentiment (negative, neutral, positive). The Kaggle dataset adds Covid-19 sentiment data and can be found here: [sentiment-classification-selflabel-dataset](https://www.kaggle.com/percyzheng/sentiment-classification-selflabel-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.5277
- Accuracy: 0.8755
- F1: 0.8744
- Precision: 0.8754
- Recall: 0.8755
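For quick experimentation, a minimal inference sketch is shown below. It is not part of the original card, and the exact label strings returned depend on the checkpoint's `id2label` mapping, so inspect the output rather than assuming names.

```python
from transformers import pipeline

# Hedged usage sketch; label names come from the model's config.
classifier = pipeline(
    "text-classification",
    model="nickmuchi/sec-bert-finetuned-finance-classification",
)
print(classifier("The USD rallied by 10% last night"))
print(classifier("Covid-19 cases have been increasing over the past few months impacting earnings for global firms"))
```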
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
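As a rough illustration only (not the author's original training script), the listed settings map onto `TrainingArguments` approximately as follows; `output_dir` and the evaluation strategy are assumptions, and all data loading and `Trainer` wiring is omitted.

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="sec-bert-finetuned-finance-classification",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumption, matching the per-epoch results table
)
```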
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6005 | 0.99 | 71 | 0.3702 | 0.8478 | 0.8465 | 0.8491 | 0.8478 |
| 0.3226 | 1.97 | 142 | 0.3172 | 0.8834 | 0.8822 | 0.8861 | 0.8834 |
| 0.2299 | 2.96 | 213 | 0.3313 | 0.8814 | 0.8805 | 0.8821 | 0.8814 |
| 0.1277 | 3.94 | 284 | 0.3925 | 0.8775 | 0.8771 | 0.8770 | 0.8775 |
| 0.0764 | 4.93 | 355 | 0.4517 | 0.8715 | 0.8704 | 0.8717 | 0.8715 |
| 0.0533 | 5.92 | 426 | 0.4851 | 0.8735 | 0.8728 | 0.8731 | 0.8735 |
| 0.0363 | 6.9 | 497 | 0.5107 | 0.8755 | 0.8743 | 0.8757 | 0.8755 |
| 0.0248 | 7.89 | 568 | 0.5277 | 0.8755 | 0.8744 | 0.8754 | 0.8755 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
IIC/xprophetnet-spanish-mlsum | 3b166bb62f7ecf8b39433afe5d0d97cfb8a99e38 | 2022-04-02T15:09:07.000Z | [
"pytorch",
"xlm-prophetnet",
"text2text-generation",
"es",
"dataset:mlsum",
"transformers",
"summarization",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | summarization | false | IIC | null | IIC/xprophetnet-spanish-mlsum | 31 | 2 | transformers | 7,095 | ---
language:
- es
tags:
- summarization
license: apache-2.0
datasets:
- mlsum
metrics:
- rouge1
- rouge2
- rougeL
- rougeLsum
model-index:
- name: xprophetnet-spanish-mlsum
results:
- task:
type: summarization
name: abstractive summarization
dataset:
type: mlsum
name: mlsum-es
args: es
metrics:
- type: rouge1
value: 25.1158
name: rouge1
- type: rouge2
value: 8.4847
name: rouge2
- type: rougeL
value: 20.6184
name: rougeL
- type: rougeLsum
value: 20.8948
name: rougeLsum
---
This is a model for text summarization in Spanish. It was trained on the Spanish portion of [mlsum](https://huggingface.co/datasets/mlsum), starting from [XLM-ProphetNet (a multilingual version of ProphetNet)](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased).
For tuning the model's hyperparameters we used [Optuna](https://optuna.org/) with only 10 trials (7 of them initial random trials), since the dataset chosen for training is large. The hyperparameter search space was the following:
```python
def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float(
            "learning_rate", 1e-5, 7e-5, log=True
        ),
        "num_train_epochs": trial.suggest_categorical(
            "num_train_epochs", [3, 5, 7, 10]
        ),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [16]),
        "per_device_eval_batch_size": trial.suggest_categorical(
            "per_device_eval_batch_size", [32]),
        "gradient_accumulation_steps": trial.suggest_categorical(
            "gradient_accumulation_steps", [2, 4, 8]),
        "warmup_steps": trial.suggest_categorical(
            "warmup_steps", [50, 100, 500, 1000]
        ),
        "weight_decay": trial.suggest_float(
            "weight_decay", 0.0, 0.1
        ),
    }
```
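As an illustration of how such a search space is typically wired in (this is a sketch under assumptions, not the authors' actual training script; `train_dataset` and `eval_dataset` are placeholders for the tokenized Spanish MLSUM splits), the function can be passed to `Trainer.hyperparameter_search` with the Optuna backend:

```python
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    XLMProphetNetForConditionalGeneration,
)

def model_init():
    # A fresh copy of the base model is created for every trial.
    return XLMProphetNetForConditionalGeneration.from_pretrained(
        "microsoft/xprophetnet-large-wiki100-cased"
    )

trainer = Seq2SeqTrainer(
    args=Seq2SeqTrainingArguments(output_dir="xprophetnet-mlsum-search"),
    model_init=model_init,
    train_dataset=train_dataset,  # placeholder: tokenized MLSUM-es train split
    eval_dataset=eval_dataset,    # placeholder: tokenized MLSUM-es validation split
)
best_run = trainer.hyperparameter_search(
    hp_space=hp_space,
    backend="optuna",
    n_trials=10,
    direction="minimize",
)
print(best_run.hyperparameters)
```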
The reported results are on the test split of mlsum. Complete metrics are:
```json
{"rouge1": 25.1158, "rouge2": 8.4847, "rougeL": 20.6184, "rougeLsum": 20.8948, "gen_len": 19.6496}
```
This model is really easy to use, and with the following lines of code you can just start summarizing your documents in Spanish:
```python
from transformers import ProphetNetForConditionalGeneration, AutoTokenizer
text = "Hola esto es un ejemplo de texto a resumir. Poco hay que resumir aquí, pero es sólo de muestra."
model_str = "avacaondata/xprophetnet-spanish-mlsum"
tokenizer = AutoTokenizer.from_pretrained(model_str)
model = ProphetNetForConditionalGeneration.from_pretrained(model_str)
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```
### Contributions
Thanks to [@avacaondata](https://huggingface.co/avacaondata), [@alborotis](https://huggingface.co/alborotis), [@albarji](https://huggingface.co/albarji), [@Dabs](https://huggingface.co/Dabs), [@GuillemGSubies](https://huggingface.co/GuillemGSubies) for adding this model. |
bipin/malayalam-gpt2 | 2780084e16c6814992511af8146c6db8c1d6f776 | 2022-03-20T11:05:57.000Z | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-generation | false | bipin | null | bipin/malayalam-gpt2 | 31 | null | transformers | 7,096 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: malayalam-gpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# malayalam-gpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8095
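The card does not include a usage snippet; a minimal generation sketch is given below. The prompt is an arbitrary placeholder and the sampling settings are assumptions, not values taken from the original training.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bipin/malayalam-gpt2")
model = AutoModelForCausalLM.from_pretrained("bipin/malayalam-gpt2")

prompt = "കേരളം"  # placeholder Malayalam prompt ("Kerala")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```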
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9042 | 1.0 | 641 | 1.8638 |
| 1.8516 | 2.0 | 1282 | 1.8250 |
| 1.8034 | 3.0 | 1923 | 1.8095 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
junnyu/roformer_v2_chinese_char_small | da2234d6aef756525bb9622d2e9be18c1f4b2130 | 2022-05-11T03:32:58.000Z | [
"pytorch",
"roformer",
"fill-mask",
"zh",
"arxiv:2104.09864",
"transformers",
"roformer-v2",
"tf2.0",
"autotrain_compatible"
] | fill-mask | false | junnyu | null | junnyu/roformer_v2_chinese_char_small | 31 | null | transformers | 7,097 | ---
language: zh
tags:
- roformer-v2
- pytorch
- tf2.0
inference: False
---
## Introduction
### TF version
https://github.com/ZhuiyiTechnology/roformer-v2
### PyTorch + TF 2.0 version
https://github.com/JunnYu/RoFormer_pytorch
## Evaluation comparison
### CLUE-dev classification results, base + large versions
| | iflytek | tnews | afqmc | cmnli | ocnli | wsc | csl |
| :-----: | :-----: | :---: | :---: | :---: | :---: | :---: | :---: |
| BERT | 60.06 | 56.80 | 72.41 | 79.56 | 73.93 | 78.62 | 83.93 |
| RoBERTa | 60.64 | 58.06 | 74.05 | 81.24 | 76.00 | 87.50 | 84.50 |
| RoFormer | 60.91 | 57.54 | 73.52 | 80.92 | 76.07 | 86.84 | 84.63 |
| RoFormerV2<sup>*</sup> | 60.87 | 56.54 | 72.75 | 80.34 | 75.36 | 80.92 | 84.67 |
| GAU-α | 61.41 | 57.76 | 74.17 | 81.82 | 75.86 | 79.93 | 85.67 |
| RoFormer-pytorch (this repo) | 60.60 | 57.51 | 74.44 | 80.79 | 75.67 | 86.84 | 84.77 |
| RoFormerV2-pytorch (this repo) | **62.87** | 59.03 | **76.20** | 80.85 | 79.73 | 87.82 | **91.87** |
| GAU-α-pytorch(Adafactor) | 61.18 | 57.52 | 73.42 | 80.91 | 75.69 | 80.59 | 85.5 |
| GAU-α-pytorch(AdamW wd0.01 warmup0.1) | 60.68 | 57.95 | 73.08 | 81.02 | 75.36 | 81.25 | 83.93 |
| RoFormerV2-large-pytorch (this repo) | 61.75 | **59.21** | 76.14 | 82.35 | **81.73** | **91.45** | 91.5 |
| Chinesebert-large-pytorch | 61.25 | 58.67 | 74.70 | **82.65** | 79.63 | 87.83 | 84.97 |
### CLUE-1.0-test classification results, base + large versions
| | iflytek | tnews | afqmc | cmnli | ocnli | wsc | csl |
| :-----: | :-----: | :---: | :---: | :---: | :---: | :---: | :---: |
| RoFormer-pytorch (this repo) | 59.54 | 57.34 | 74.46 | 80.23 | 73.67 | 80.69 | 84.57 |
| RoFormerV2-pytorch (this repo) | **63.15** | 58.24 | 75.42 | 80.59 | 74.17 | 83.79 | 83.73 |
| GAU-α-pytorch(Adafactor) | 61.38 | 57.08 | 74.05 | 80.37 | 73.53 | 74.83 | **85.6** |
| GAU-α-pytorch(AdamW wd0.01 warmup0.1) | 60.54 | 57.67 | 72.44 | 80.32 | 72.97 | 76.55 | 84.13 |
| RoFormerV2-large-pytorch (this repo) | 61.85 | **59.13** | **76.38** | 80.97 | 76.23 | **85.86** | 84.33 |
| Chinesebert-large-pytorch | 61.54 | 58.57 | 74.8 | **81.94** | **76.93** | 79.66 | 85.1 |
### Notes:
- RoFormerV2<sup>*</sup> denotes the RoFormerV2 model trained without multi-task learning; that model has not been open-sourced by Su Jianlin (苏神), and we thank him for pointing this out.
- Results without the pytorch suffix are copied from the [GAU-alpha](https://github.com/ZhuiyiTechnology/GAU-alpha) repository.
- Results with the pytorch suffix were obtained from our own training runs.
- Su Jianlin's code classifies directly on the [CLS] representation, whereas this repo uses the classification head below, which adds 2 dropout layers, 1 dense layer, and 1 ReLU activation.
```python
class RoFormerClassificationHead(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)
        self.config = config

    def forward(self, features, **kwargs):
        x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
        x = self.dropout(x)
        x = self.dense(x)
        x = ACT2FN[self.config.hidden_act](x)  # ReLU here
        x = self.dropout(x)
        x = self.out_proj(x)
        return x
```
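For reference, a hedged fine-tuning entry point is sketched below. It assumes the `roformer` package mirrors the `transformers` API and exposes `RoFormerForSequenceClassification`; `num_labels=15` is only an example (e.g. the TNEWS task has 15 classes).

```python
from transformers import BertTokenizer
from roformer import RoFormerForSequenceClassification  # assumed export

tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_v2_chinese_char_small")
model = RoFormerForSequenceClassification.from_pretrained(
    "junnyu/roformer_v2_chinese_char_small", num_labels=15  # example label count
)
inputs = tokenizer("今天天气很好", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 15])
```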
### Installation
- pip install roformer==0.4.3
## PyTorch & TF 2.0 usage
```python
import torch
import tensorflow as tf
from transformers import BertTokenizer
from roformer import RoFormerForMaskedLM, TFRoFormerForMaskedLM

text = "今天[MASK]很好,我[MASK]去公园玩。"
tokenizer = BertTokenizer.from_pretrained("junnyu/roformer_v2_chinese_char_small")
pt_model = RoFormerForMaskedLM.from_pretrained("junnyu/roformer_v2_chinese_char_small")
tf_model = TFRoFormerForMaskedLM.from_pretrained(
    "junnyu/roformer_v2_chinese_char_base", from_pt=True
)
pt_inputs = tokenizer(text, return_tensors="pt")
tf_inputs = tokenizer(text, return_tensors="tf")

# pytorch
with torch.no_grad():
    pt_outputs = pt_model(**pt_inputs).logits[0]
pt_outputs_sentence = "pytorch: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(pt_outputs[i].topk(k=5)[1])
        pt_outputs_sentence += "[" + "||".join(tokens) + "]"
    else:
        pt_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)
        )
print(pt_outputs_sentence)

# tf
tf_outputs = tf_model(**tf_inputs, training=False).logits[0]
tf_outputs_sentence = "tf: "
for i, id in enumerate(tokenizer.encode(text)):
    if id == tokenizer.mask_token_id:
        tokens = tokenizer.convert_ids_to_tokens(tf.math.top_k(tf_outputs[i], k=5)[1])
        tf_outputs_sentence += "[" + "||".join(tokens) + "]"
    else:
        tf_outputs_sentence += "".join(
            tokenizer.convert_ids_to_tokens([id], skip_special_tokens=True)
        )
print(tf_outputs_sentence)

# small
# pytorch: 今天[的||,||是||很||也]很好,我[要||会||是||想||在]去公园玩。
# tf: 今天[的||,||是||很||也]很好,我[要||会||是||想||在]去公园玩。
# base
# pytorch: 今天[我||天||晴||园||玩]很好,我[想||要||会||就||带]去公园玩。
# tf: 今天[我||天||晴||园||玩]很好,我[想||要||会||就||带]去公园玩。
# large
# pytorch: 今天[天||气||我||空||阳]很好,我[又||想||会||就||爱]去公园玩。
# tf: 今天[天||气||我||空||阳]很好,我[又||想||会||就||爱]去公园玩。
```
## Citation
BibTeX:
```tex
@misc{su2021roformer,
title={RoFormer: Enhanced Transformer with Rotary Position Embedding},
author={Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2021},
eprint={2104.09864},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```tex
@techreport{roformerv2,
title={RoFormerV2: A Faster and Better RoFormer - ZhuiyiAI},
author={Jianlin Su and Shengfeng Pan and Bo Wen and Yunfeng Liu},
year={2022},
url="https://github.com/ZhuiyiTechnology/roformer-v2",
}
``` |
spartan97/distilbert-base-uncased-finetuned-objectivity-rotten | 5291bfca9b1dbbf5af1ffa0b9b17630669f847c1 | 2022-04-08T11:10:02.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"license:gpl-3.0"
] | text-classification | false | spartan97 | null | spartan97/distilbert-base-uncased-finetuned-objectivity-rotten | 31 | null | transformers | 7,098 | ---
license: gpl-3.0
---
Objectivity sentence classification model based on **distilbert-base-uncased-finetuned-sst-2-english**. It was fine-tuned with Rotten-IMDB movie review [data](http://www.cs.cornell.edu/people/pabo/movie-review-data/) using extracted sentences from film plots as objective examples and review comments as subjective language examples.
With a 5% held-out test set, we obtained an accuracy of 96% and an F1 score of the same value.
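For local experimentation, a minimal sketch is given below. It is not part of the original card, and the label strings returned depend on this checkpoint's `id2label` configuration, so inspect the output rather than assuming particular names.

```python
from transformers import pipeline

# Hedged usage sketch; label names come from the checkpoint's config.
classifier = pipeline(
    "text-classification",
    model="spartan97/distilbert-base-uncased-finetuned-objectivity-rotten",
)
print(classifier("I think this is the best movie of the year."))          # subjective style
print(classifier("The film was released in 1994 and runs 142 minutes."))  # objective style
```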
Please feel free to try the demo online with subjective-language examples such as "I think..." or "I believe...", as well as with more objective claims.
For any further comments, contact me at [email protected].
|
bespin-global/klue-roberta-base-korquad2 | 44ba601528e7e449c08aba3588582ce031da4ab0 | 2022-04-14T01:07:13.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | bespin-global | null | bespin-global/klue-roberta-base-korquad2 | 31 | null | transformers | 7,099 | Entry not found |