modelId | sha | lastModified | tags | pipeline_tag | private | author | config | id | downloads | likes | library_name | __index_level_0__ | readme |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
flax-community/gpt2-base-thai | f8388ca939b247158f87ee25cc20ed12a2cb0e21 | 2021-07-17T10:11:12.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"th",
"dataset:oscar",
"transformers",
"gpt2-base-thai",
"license:mit"
] | text-generation | false | flax-community | null | flax-community/gpt2-base-thai | 161 | 2 | transformers | 3,900 | ---
language: th
tags:
- gpt2-base-thai
license: mit
datasets:
- oscar
widget:
- text: "สวัสดีตอนเช้า"
---
## GPT-2 Base Thai
GPT-2 Base Thai is a causal language model based on the [OpenAI GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) model. It was trained on the [OSCAR](https://huggingface.co/datasets/oscar) dataset, specifically the `unshuffled_deduplicated_th` subset. The model was trained from scratch and achieved an evaluation loss of 1.708 and an evaluation perplexity of 5.516.
This model was trained using the Flax framework and is part of the [JAX/Flax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) organized by Hugging Face. All training was done on a TPUv3-8 VM, sponsored by the Google Cloud team.
All necessary scripts used for training can be found in the [Files and versions](https://hf.co/flax-community/gpt2-base-thai/tree/main) tab, as well as the [Training metrics](https://hf.co/flax-community/gpt2-base-thai/tensorboard) logged via TensorBoard.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------- | ------- | ----- | ------------------------------------ |
| `gpt2-base-thai` | 124M | GPT-2 | `unshuffled_deduplicated_th` Dataset |
## Evaluation Results
The model was trained for 3 epochs and the following is the final result once the training ended.
| train loss | valid loss | valid PPL | total time |
| ---------- | ---------- | --------- | ---------- |
| 1.638 | 1.708 | 5.516 | 6:12:34 |
## How to Use
### As Causal Language Model
```python
from transformers import pipeline
pretrained_name = "flax-community/gpt2-base-thai"
nlp = pipeline(
"text-generation",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("สวัสดีตอนเช้า")
```
### Feature Extraction in PyTorch
```python
from transformers import GPT2Model, GPT2TokenizerFast
pretrained_name = "flax-community/gpt2-base-thai"
model = GPT2Model.from_pretrained(pretrained_name)
tokenizer = GPT2TokenizerFast.from_pretrained(pretrained_name)
prompt = "สวัสดีตอนเช้า"
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
## Team Members
- Sakares Saengkaew ([@sakares](https://hf.co/sakares))
- Wilson Wongso ([@w11wo](https://hf.co/w11wo))
|
microsoft/beit-large-patch16-224-pt22k | d1b5d428118fa9b220cc1585289fb5a6bc5ceb92 | 2022-01-28T10:20:43.000Z | [
"pytorch",
"jax",
"beit",
"dataset:imagenet",
"dataset:imagenet-21k",
"arxiv:2106.08254",
"transformers",
"image-classification",
"vision",
"license:apache-2.0"
] | image-classification | false | microsoft | null | microsoft/beit-large-patch16-224-pt22k | 161 | null | transformers | 3,901 | ---
license: apache-2.0
tags:
- image-classification
- vision
datasets:
- imagenet
- imagenet-21k
---
# BEiT (large-sized model, pre-trained only)
BEiT model pre-trained in a self-supervised fashion on ImageNet-22k - also called ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
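As a rough sketch of the second option (illustrative only, not part of the original card; the pooled embedding would still need a classifier trained on top of it):
```python
import torch
import requests
from PIL import Image
from transformers import BeitFeatureExtractor, BeitModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-224-pt22k')
model = BeitModel.from_pretrained('microsoft/beit-large-patch16-224-pt22k')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, 1 + num_patches, hidden_size)

# Mean-pool the patch tokens (position 0 is the [CLS] token) to obtain one embedding per image
image_embedding = hidden_states[:, 1:, :].mean(dim=1)
```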
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import BeitFeatureExtractor, BeitForMaskedImageModeling
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-224-pt22k')
model = BeitForMaskedImageModeling.from_pretrained('microsoft/beit-large-patch16-224-pt22k')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
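A minimal torchvision sketch of that preprocessing (an approximation for illustration; the exact pipeline is in the linked `datasets.py`):
```python
from torchvision import transforms

beit_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),          # resize to the training resolution
    transforms.ToTensor(),                  # rescale pixel values to [0, 1]
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
```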
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution, and that, in general, increasing the model size results in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={Ieee}
}
``` |
mrm8488/deberta-v3-large-finetuned-mnli | 141a9d86bd10505c60fec17e2b53836d2d7f0cc0 | 2021-12-20T16:57:12.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"dataset:glue",
"arxiv:2006.03654",
"arxiv:2111.09543",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] | text-classification | false | mrm8488 | null | mrm8488/deberta-v3-large-finetuned-mnli | 161 | 2 | transformers | 3,902 | ---
language:
- en
license: mit
widget:
- text: "She was badly wounded already. Another spear would take her down."
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-large-mnli-2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8949349064279902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeBERTa-v3-large fine-tuned on MNLI
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6763
- Accuracy: 0.8949
## Model description
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543).
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.
The DeBERTa V3 large model comes with 24 layers and a hidden size of 1024. It has 304M backbone parameters and a vocabulary of 128K tokens, which introduces 131M parameters in the embedding layer. This model was trained on the same 160GB of data as DeBERTa V2.
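As a minimal inference sketch (the premise/hypothesis pair is illustrative; the label names are read from the model's `id2label` config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mrm8488/deberta-v3-large-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "She was badly wounded already."
hypothesis = "She was not hurt at all."

# Encode the (premise, hypothesis) pair and pick the highest-scoring NLI label
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```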
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.3676 | 1.0 | 24544 | 0.3761 | 0.8681 |
| 0.2782 | 2.0 | 49088 | 0.3605 | 0.8881 |
| 0.1986 | 3.0 | 73632 | 0.4672 | 0.8894 |
| 0.1299 | 4.0 | 98176 | 0.5248 | 0.8967 |
| 0.0643 | 5.0 | 122720 | 0.6489 | 0.8999 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
speechbrain/google_speech_command_xvector | 0e93d6f0644e9014851c622ea8d75a38f2aebc1e | 2021-11-30T00:41:50.000Z | [
"en",
"dataset:google speech commands",
"arxiv:1804.03209",
"arxiv:2106.04624",
"speechbrain",
"embeddings",
"Commands",
"Keywords",
"Keyword Spotting",
"pytorch",
"xvectors",
"TDNN",
"Command Recognition",
"audio-classification",
"license:apache-2.0"
] | audio-classification | false | speechbrain | null | speechbrain/google_speech_command_xvector | 161 | 2 | speechbrain | 3,903 | ---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Commands
- Keywords
- Keyword Spotting
- pytorch
- xvectors
- TDNN
- Command Recognition
- audio-classification
license: "apache-2.0"
datasets:
- google speech commands
metrics:
- Accuracy
widget:
- example_title: Speech Commands "down"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_down.wav
- example_title: Speech Commands "go"
src: https://cdn-media.huggingface.co/speech_samples/keyword_spotting_go.wav
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Command Recognition with xvector embeddings on Google Speech Commands
This repository provides all the necessary tools to perform command recognition with SpeechBrain using a model pretrained on Google Speech Commands.
You can download the dataset [here](https://www.tensorflow.org/datasets/catalog/speech_commands).
The dataset provides small training, validation, and test sets useful for detecting single keywords in short audio clips. The provided system can recognize the following 12 keywords:
```
'yes', 'no', 'up', 'down', 'left', 'right', 'on', 'off', 'stop', 'go', 'unknown', 'silence'
```
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The given model performance on the test set is:
| Release | Accuracy (%) |
|:-------------:|:--------------:|
| 06-02-21 | 98.14 |
## Pipeline description
This system is composed of a TDNN model coupled with statistical pooling. A classifier, trained with Categorical Cross-Entropy Loss, is applied on top of that.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Command Recognition
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="speechbrain/google_speech_command_xvector", savedir="pretrained_models/google_speech_command_xvector")
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/google_speech_command_xvector/yes.wav')
print(text_lab)
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/google_speech_command_xvector/stop.wav')
print(text_lab)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
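For example (assuming a CUDA device is available):
```python
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/google_speech_command_xvector",
    savedir="pretrained_models/google_speech_command_xvector",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```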
### Training
The model was trained with SpeechBrain (b7ff9dc4).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/Google-speech-commands
python train.py hparams/xvect.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1BKwtr1mBRICRe56PcQk2sCFq63Lsvdpc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing xvectors
```bibtex
@inproceedings{DBLP:conf/odyssey/SnyderGMSPK18,
author = {David Snyder and
Daniel Garcia{-}Romero and
Alan McCree and
Gregory Sell and
Daniel Povey and
Sanjeev Khudanpur},
title = {Spoken Language Recognition using X-vectors},
booktitle = {Odyssey 2018},
pages = {105--111},
year = {2018},
}
```
#### Referencing Google Speech Commands
```bibtex
@article{speechcommands,
author = { {Warden}, P.},
title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1804.03209},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
year = 2018,
month = apr,
url = {https://arxiv.org/abs/1804.03209},
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
tdopierre/ProtAugment-LM-BANKING77 | 67583e52c69e82c5df4929c966cee780fcc0308a | 2021-07-01T13:48:46.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | tdopierre | null | tdopierre/ProtAugment-LM-BANKING77 | 161 | null | transformers | 3,904 | Entry not found |
vasilis/wav2vec2-large-xlsr-53-greek | 424132d02ed2597a0288e683572ee325d38c9264 | 2021-03-26T23:51:48.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"el",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | vasilis | null | vasilis/wav2vec2-large-xlsr-53-greek | 161 | null | transformers | 3,905 | ---
language: el
datasets:
- common_voice
- CSS10 Greek: Single Speaker Speech Dataset
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: V XLSR Wav2Vec2 Large 53 - greek
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice el
type: common_voice
args: el
metrics:
- name: Test WER
type: wer
value: 18.996669
- name: Test CER
type: cer
value: 5.781874
---
# Wav2Vec2-Large-XLSR-53-greek
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10 Greek: Single Speaker Speech Dataset](https://www.kaggle.com/bryanpark/greek-single-speaker-speech-dataset).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "el", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the greek test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("vasilis/wav2vec2-large-xlsr-53-greek")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
normalize_greek_letters = {"ς": "σ"}
# normalize_greek_letters = {"ά": "α", "έ": "ε", "ί": "ι", 'ϊ': "ι", "ύ": "υ", "ς": "σ", "ΐ": "ι", 'ϋ': "υ", "ή": "η", "ώ": "ω", 'ό': "ο"}
remove_chars_greek = {"a": "", "h": "", "n": "", "g": "", "o": "", "v": "", "e": "", "r": "", "t": "", "«": "", "»": "", "m": "", '́': '', "·": "", "’": "", '´': ""}
replacements = {**normalize_greek_letters, **remove_chars_greek}
resampler = {
48_000: torchaudio.transforms.Resample(48_000, 16_000),
44100: torchaudio.transforms.Resample(44100, 16_000),
32000: torchaudio.transforms.Resample(32000, 16_000)
}
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
for key, value in replacements.items():
batch["sentence"] = batch["sentence"].replace(key, value)
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler[sampling_rate](speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * wer.compute(predictions=[" ".join(list(entry)) for entry in result["pred_strings"]], references=[" ".join(list(entry)) for entry in result["sentence"]])))
```
**Test Result**: 18.996669 %
## Training
The Common Voice train dataset was used for training. All of `CSS10 Greek` was also used, with the normalized transcripts.
During text preprocessing, the letter `ς` is normalized to `σ`. The reason is that both letters sound the same, and `ς` is only used as the final character of a word, so the change can easily be mapped back to proper spelling. I also tried removing all accents from letters, which improved `WER` significantly: the model was easily reaching `17%` WER without having converged. However, the post-processing needed afterwards to fix the transcriptions would be more complicated. A language model should fix things easily, though. Another thing that could be tried is mapping all of `ι`, `η`, etc. to a single character, since they all sound the same, and similarly for `ο` and `ω`; this should help the acoustic model significantly, since all these characters map to the same sound. But further text normalization would then be needed.
|
Felix92/doctr-dummy-torch-magc-resnet31 | 3292da2d65e1acd65856524f4bcf30953759997d | 2022-04-14T08:18:52.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-magc-resnet31 | 161 | null | transformers | 3,906 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-mobilenet-v3-small | dcf2221cfd71298abba03ab536c77272d740764a | 2022-04-14T08:25:21.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-mobilenet-v3-small | 161 | null | transformers | 3,907 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-mobilenet-v3-large | 1d16bd16e709a267cb44a61b1ba38c9c301214ca | 2022-04-14T08:49:23.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-mobilenet-v3-large | 161 | null | transformers | 3,908 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: classification
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-db-resnet50 | 8dbfbdce98b877e5cfd2aba81dfa242faed23a89 | 2022-04-14T08:54:22.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-db-resnet50 | 161 | null | transformers | 3,909 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
Felix92/doctr-dummy-torch-db-mobilenet-v3-large | 1fda1f7e55b1a6d4c72946aa4d067384d5d661e9 | 2022-04-14T08:57:32.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-db-mobilenet-v3-large | 161 | null | transformers | 3,910 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
crodri/roberta-ca-v2-qa-squac-ca-catalanqa | d3455bc6020e81dfb68d92e4bcae39a2773ec501 | 2022-07-01T10:36:17.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"license:cc-by-sa-4.0",
"autotrain_compatible"
] | question-answering | false | crodri | null | crodri/roberta-ca-v2-qa-squac-ca-catalanqa | 161 | null | transformers | 3,911 | ---
license: cc-by-sa-4.0
---
|
PlanTL-GOB-ES/roberta-base-bne-capitel-ner | 998bcf449b882707aec171a0c0ab7c4eaa361d87 | 2022-04-06T14:43:10.000Z | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
] | token-classification | false | PlanTL-GOB-ES | null | PlanTL-GOB-ES/roberta-base-bne-capitel-ner | 160 | null | transformers | 3,912 | ---
language:
- es
license: apache-2.0
tags:
- "national library of spain"
- "spanish"
- "bne"
- "capitel"
- "ner"
datasets:
- "bne"
- "capitel"
metrics:
- "f1"
inference:
parameters:
aggregation_strategy: "first"
---
# Spanish RoBERTa-base trained on BNE finetuned for CAPITEL Named Entity Recognition (NER) dataset.
RoBERTa-base-bne is a transformer-based masked language model for the Spanish language. It is based on the [RoBERTa](https://arxiv.org/abs/1907.11692) base model and has been pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
Original pre-trained model can be found here: https://huggingface.co/BSC-TeMU/roberta-base-bne
## Dataset
The dataset used is the one from the [CAPITEL competition at IberLEF 2020](https://sites.google.com/view/capitel2020) (sub-task 1).
## Evaluation and results
F1 Score: 0.8960
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
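## How to use
A minimal usage sketch with the token-classification pipeline (the example sentence and its predicted entities are illustrative; `aggregation_strategy="first"` mirrors the inference parameter declared in the card metadata):
```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="PlanTL-GOB-ES/roberta-base-bne-capitel-ner",
    aggregation_strategy="first",  # group sub-word tokens into whole entities
)
print(ner("Marta trabaja en el Barcelona Supercomputing Center."))
```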
## Citing
Check out our paper for all the details: https://arxiv.org/abs/2107.07253
```
@article{gutierrezfandino2022,
author = {Asier Gutiérrez-Fandiño and Jordi Armengol-Estapé and Marc Pàmies and Joan Llop-Palao and Joaquin Silveira-Ocampo and Casimiro Pio Carrino and Carme Armentano-Oller and Carlos Rodriguez-Penagos and Aitor Gonzalez-Agirre and Marta Villegas},
title = {MarIA: Spanish Language Models},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6405},
pages = {39--60}
}
```
## Funding
This work was partially funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL, and the Future of Computing Center, a Barcelona Supercomputing Center and IBM initiative (2020).
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos. |
TajMahaladeen/pokemon_gptj | 2e7ab98f5b9dc0d661e471393d4ff60015b52c8f | 2022-01-31T06:12:31.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers",
"license:apache-2.0"
] | text-generation | false | TajMahaladeen | null | TajMahaladeen/pokemon_gptj | 160 | null | transformers | 3,913 | ---
license: apache-2.0
---
|
asafaya/albert-base-arabic | ee582a9951cda838764c4f612b23d0cfd07e41ef | 2022-02-11T13:45:48.000Z | [
"pytorch",
"tf",
"albert",
"fill-mask",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"masked-lm",
"autotrain_compatible"
] | fill-mask | false | asafaya | null | asafaya/albert-base-arabic | 160 | null | transformers | 3,914 | ---
language: ar
datasets:
- oscar
- wikipedia
tags:
- ar
- masked-lm
---
# Arabic-ALBERT Base
Arabic edition of ALBERT Base pretrained language model
_If you use any of these models in your work, please cite this work as:_
```
@software{ali_safaya_2020_4718724,
author = {Ali Safaya},
title = {Arabic-ALBERT},
month = aug,
year = 2020,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.4718724},
url = {https://doi.org/10.5281/zenodo.4718724}
}
```
## Pretraining data
The models were pretrained on ~4.4 Billion words:
- Arabic version of [OSCAR](https://oscar-corpus.com/) (unshuffled version of the corpus) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Non-Arabic characters were lowercased as a preprocessing step; since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic, they contain some dialectical Arabic too.
## Pretraining details
- These models were trained using Google ALBERT's github [repository](https://github.com/google-research/albert) on a single TPU v3-8 provided for free from [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 7M training steps with a batch size of 64, instead of 125K steps with a batch size of 4096.
## Models
| | albert-base | albert-large | albert-xlarge |
|:---:|:---:|:---:|:---:|
| Hidden Layers | 12 | 24 | 24 |
| Attention heads | 12 | 16 | 32 |
| Hidden size | 768 | 1024 | 2048 |
## Results
For further details on the models performance or any other queries, please refer to [Arabic-ALBERT](https://github.com/KUIS-AI-Lab/Arabic-ALBERT/)
## How to use
You can use these models by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, and then loading the model directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
# loading the tokenizer
base_tokenizer = AutoTokenizer.from_pretrained("kuisailab/albert-base-arabic")
# loading the model
base_model = AutoModelForMaskedLM.from_pretrained("kuisailab/albert-base-arabic")
```
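For example, the loaded model and tokenizer can then be plugged into the fill-mask pipeline (the masked sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model=base_model, tokenizer=base_tokenizer)
print(fill_mask("الهدف من الحياة هو [MASK] ."))
```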
## Acknowledgement
Thanks to Google for providing free TPU for the training process and for Huggingface for hosting these models on their servers 😊
|
jonatasgrosman/wav2vec2-xls-r-1b-polish | 2130508d8112999fe5ad30be7ddca1b49aea7cc1 | 2022-07-27T23:38:59.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | jonatasgrosman | null | jonatasgrosman/wav2vec2-xls-r-1b-polish | 160 | 1 | transformers | 3,915 | ---
language:
- pl
license: apache-2.0
tags:
- automatic-speech-recognition
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- pl
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R Wav2Vec2 Polish by Jonatas Grosman
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: pl
metrics:
- name: Test WER
type: wer
value: 11.01
- name: Test CER
type: cer
value: 2.55
- name: Test WER (+LM)
type: wer
value: 7.32
- name: Test CER (+LM)
type: cer
value: 1.95
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: pl
metrics:
- name: Dev WER
type: wer
value: 26.31
- name: Dev CER
type: cer
value: 13.85
- name: Dev WER (+LM)
type: wer
value: 20.33
- name: Dev CER (+LM)
type: cer
value: 13.0
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: pl
metrics:
- name: Test WER
type: wer
value: 22.77
---
# Fine-tuned XLS-R 1B model for speech recognition in Polish
Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Polish using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
## Usage
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-polish")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "pl"
MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-polish"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
```
## Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-polish --dataset mozilla-foundation/common_voice_8_0 --config pl --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-polish --dataset speech-recognition-community-v2/dev_data --config pl --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr-1b-polish,
title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {P}olish},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-polish}},
year={2022}
}
```
|
keshan/SinhalaBERTo | 07773c644cf82601e37ee414f7062b7b909e78fd | 2021-07-11T13:14:51.000Z | [
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"si",
"dataset:oscar",
"arxiv:1907.11692",
"transformers",
"SinhalaBERTo",
"Sinhala",
"autotrain_compatible"
] | fill-mask | false | keshan | null | keshan/SinhalaBERTo | 160 | null | transformers | 3,916 | ---
language: si
tags:
- SinhalaBERTo
- Sinhala
- roberta
datasets:
- oscar
---
### Overview
This is a slightly smaller model trained on the [OSCAR](https://oscar-corpus.com/) Sinhala dedup dataset. As Sinhala is a low-resource language, only a handful of models have been trained for it, so this would be a great place to start when training for more downstream tasks.
## Model Specification
The model chosen for training is [Roberta](https://arxiv.org/abs/1907.11692) with the following specifications:
1. vocab_size=52000
2. max_position_embeddings=514
3. num_attention_heads=12
4. num_hidden_layers=6
5. type_vocab_size=1
## How to Use
You can use this model directly with a pipeline for masked language modeling:
```py
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline
model = AutoModelWithLMHead.from_pretrained("keshan/SinhalaBERTo")
tokenizer = AutoTokenizer.from_pretrained("keshan/SinhalaBERTo")
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
fill_mask("මම ගෙදර <mask>.")
```
|
asahi417/lmqg-mt5-base-jaquad | a29a11aad1d290c3623f226cef5fa76465bffa7d | 2022-06-09T10:54:10.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"ja",
"dataset:asahi417/qg_jaquad",
"transformers",
"question generation",
"license:cc-by-4.0",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-mt5-base-jaquad | 160 | null | transformers | 3,917 | ---
language: ja
tags:
- question generation
license: cc-by-4.0
datasets:
- asahi417/qg_jaquad
metrics:
- bleu
- meteor
- rouge
- bertscore
widget:
- text: "ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。"
example_title: "Question Generation Example 1"
- text: "『クマのプーさん』の物語はまず1925年12月24日、『イヴニング・ニュース』紙のクリスマス特集号に短編作品として掲載された。これは『クマのプーさん』の第一章にあたる作品で、このときだけは挿絵をJ.H.ダウドがつけている。その後作品10話と挿絵が整い、刊行に先駆けて「イーヨーの誕生日」のエピソードが1926年8月に『ロイヤルマガジン』に、同年10月9日に『ニューヨーク・イヴニング・ポスト』紙に掲載されたあと、同年10月14日にロンドンで(メシュエン社)、21日にニューヨークで(ダットン社)『クマのプーさん』が刊行された。前著『ぼくたちがとてもちいさかったころ』がすでに大きな成功を収めていたこともあり、イギリスでは初版は前著の7倍に当たる<hl>3万5000部<hl>が刷られた。他方のアメリカでもその年の終わりまでに15万部を売り上げている。ただし依然として人気のあった前著を売り上げで追い越すには数年の時間を要した。"
example_title: "Question Generation Example 2"
- text: "フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め<hl>30数点<hl>しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。以下には若干の疑問作も含め、37点の基本情報を記載し、各作品について略説する。収録順序、推定制作年代は『「フェルメールとその時代展」図録』による。日本語の作品タイトルについては、上掲図録のほか、『「フェルメール展」図録』、『フェルメール生涯と作品』による。便宜上「1650年代の作品」「1660年代の作品」「1670年代の作品」の3つの節を設けたが、フェルメールの作品には制作年代不明のものが多く、推定制作年代については研究者や文献によって若干の差がある。"
example_title: "Question Generation Example 3"
- text: "東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて<hl>中国<hl>から起こり、伝来したものであった。当時の宗とは、教団というよりは仏教教理の学派に近い。それゆえ、兼学の場ができたとも言える。この様な兼学の形態は、南都の寺院では広く見られたものである。この六宗兼学の場(後、真言、天台加わって八宗兼学の場)の性格は、現在の東大寺でも見られるが、中でも重んじられたのが、本尊の大仏の性格が華厳経の教えに則ったものであることからも分かるように、華厳宗である。"
example_title: "Question Generation Example 4"
pipeline_tag: text2text-generation
---
# MT5 BASE fine-tuned for Japanese Question Generation
MT5 BASE Model fine-tuned on Japanese question generation dataset (JaQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** mt5-base
**Language:** Japanese (ja)
**Downstream-task:** Question Generation
**Training data:** JaQuAD
**Eval data:** JaQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-mt5-base-jaquad'
pipe = pipeline("text2text-generation", model_path)
# Question Generation
paragraph = '東大寺は、六宗兼学の場として世に広く知られるようになった。六宗とはすなわち、法相宗(法性宗)、三論宗、倶舎宗(薩婆多宗)、成実宗、華厳宗(花厳宗)、律宗のことであり、すべて中国から起こり、伝来したものであった。'
# highlight an answer in the paragraph to generate question
answer = '中国'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': '六宗はどの国から起こったものでありますか。'}]
```
## Evaluations
Evaluation on the test set of [JaQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_jaquad).
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore |
| ------ | -------- | ------ | --------- |
| 32.53 | 52.66 | 30.57 | 81.77 |
- [metric file](https://huggingface.co/asahi417/lmqg-mt5-base-jaquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_jaquad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-mt5-base-jaquad/raw/main/trainer_config.json), and fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
TencentGameMate/chinese-hubert-large | 90cb660492214f687e60f5ca509b20edae6e75bd | 2022-06-24T01:57:26.000Z | [
"pytorch",
"hubert",
"feature-extraction",
"transformers",
"license:mit"
] | feature-extraction | false | TencentGameMate | null | TencentGameMate/chinese-hubert-large | 160 | 4 | transformers | 3,918 | ---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
python package:
transformers==4.16.2
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from transformers import (
Wav2Vec2FeatureExtractor,
HubertModel,
)
model_path=""
wav_path=""
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = HubertModel.from_pretrained(model_path)
# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"  # select the device to run inference on
model = model.to(device)
model = model.half()
model.eval()
wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)
with torch.no_grad():
outputs = model(input_values)
last_hidden_state = outputs.last_hidden_state
``` |
helliun/primary_or_secondary_v3 | cbfda5ecfab8abe05aaeb52df8fe96f5023b040e | 2022-06-10T15:37:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | helliun | null | helliun/primary_or_secondary_v3 | 160 | null | transformers | 3,919 | Entry not found |
Helsinki-NLP/opus-tatoeba-es-zh | 66c9fde497d230664c53c4c91c21d2e30f8cab47 | 2021-01-04T16:53:57.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"zh",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-tatoeba-es-zh | 159 | 1 | transformers | 3,920 | ---
language:
- es
- zh
tags:
- translation
license: apache-2.0
---
### es-zh
* source group: Spanish
* target group: Chinese
* OPUS readme: [spa-zho](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zho/README.md)
* model: transformer
* source language(s): spa
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant hsn hsn_Hani lzh nan wuu yue_Hans yue_Hant
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch after this list
* download original weights: [opus-2021-01-04.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.zip)
* test set translations: [opus-2021-01-04.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.test.txt)
* test set scores: [opus-2021-01-04.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.eval.txt)
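A minimal translation sketch (the Spanish example sentence is illustrative; `>>cmn_Hans<<` is one of the target language IDs listed above):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-tatoeba-es-zh"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Prepend the target language token to each source sentence
src_text = [">>cmn_Hans<< El clima es muy agradable hoy."]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```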
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.zho | 38.8 | 0.324 |
### System Info:
- hf_name: es-zh
- source_languages: spa
- target_languages: zho
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zho/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'zh']
- src_constituents: ('Spanish', {'spa'})
- tgt_constituents: ('Chinese', {'wuu_Bopo', 'wuu', 'cmn_Hang', 'lzh_Kana', 'lzh', 'wuu_Hani', 'lzh_Yiii', 'yue_Hans', 'cmn_Hani', 'cjy_Hans', 'cmn_Hans', 'cmn_Kana', 'zho_Hans', 'zho_Hant', 'yue', 'cmn_Bopo', 'yue_Hang', 'lzh_Hans', 'wuu_Latn', 'yue_Hant', 'hak_Hani', 'lzh_Bopo', 'cmn_Hant', 'lzh_Hani', 'lzh_Hang', 'cmn', 'lzh_Hira', 'yue_Bopo', 'yue_Hani', 'gan', 'zho', 'cmn_Yiii', 'yue_Hira', 'cmn_Latn', 'yue_Kana', 'cjy_Hant', 'cmn_Hira', 'nan_Hani', 'nan'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: spa-zho
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zho/opus-2021-01-04.test.txt
- src_alpha3: spa
- tgt_alpha3: zho
- chrF2_score: 0.324
- bleu: 38.8
- brevity_penalty: 0.878
- ref_len: 22762.0
- src_name: Spanish
- tgt_name: Chinese
- train_date: 2021-01-04 00:00:00
- src_alpha2: es
- tgt_alpha2: zh
- prefer_old: False
- short_pair: es-zh
- helsinki_git_sha: dfdcef114ffb8a8dbb7a3fcf84bde5af50309500
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2021-01-04-18:53 |
asahi417/lmqg-bart-base-squad | 7ba631ad84fc0a10025b15f9a7427dfc6ac517b2 | 2022-06-09T18:37:45.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"dataset:asahi417/qg_squad",
"transformers",
"question generation",
"license:mit",
"autotrain_compatible"
] | text2text-generation | false | asahi417 | null | asahi417/lmqg-bart-base-squad | 159 | null | transformers | 3,921 | ---
language:
- en
tags:
- question generation
license: mit
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
example_title: "Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
example_title: "Example 3"
---
# BART BASE fine-tuned for English Question Generation
BART BASE Model fine-tuned on English question generation dataset (SQuAD) with an extensive hyper-parameter search.
- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)
## Overview
**Language model:** facebook/bart-base
**Language:** English (en)
**Downstream-task:** Question Generation
**Training data:** SQuAD
**Eval data:** SQuAD
**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
## Usage
### In Transformers
```python
from transformers import pipeline
model_path = 'asahi417/lmqg-bart-base-squad'
pipe = pipeline("text2text-generation", model_path)
paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate question
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
input_text = 'generate question: {}'.format(input_text) # add task specific prefix
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
```
## Evaluations
Evaluation on the test set of [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | -------- | ------ | --------- | ---------- |
| 24.68 | 52.65 | 26.05 | 90.87 | 64.47 |
- [metric file](https://huggingface.co/asahi417/lmqg-bart-base-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)
## Fine-tuning Parameters
We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric decreased.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-bart-base-squad/raw/main/trainer_config.json), and fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
## Citation
TBA
|
facebook/wav2vec2-large-xlsr-53-polish | 5a552eaee7b7cb205ceb15d0a6e8ff4b724b382a | 2021-07-06T02:58:29.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"nl",
"dataset:common_voice",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | automatic-speech-recognition | false | facebook | null | facebook/wav2vec2-large-xlsr-53-polish | 159 | null | transformers | 3,922 | ---
language: pl
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
license: apache-2.0
---
## Evaluation on Common Voice PL Test
```python
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
import torch
import re
import sys
model_name = "facebook/wav2vec2-large-xlsr-53-polish"
device = "cuda"
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]' # noqa: W605
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)
ds = load_dataset("common_voice", "pl", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
input_values = features.input_values.to(device)
attention_mask = features.attention_mask.to(device)
with torch.no_grad():
logits = model(input_values, attention_mask=attention_mask).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = processor.batch_decode(pred_ids)
batch["target"] = batch["sentence"]
return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Result**: 24.6 % |
ghosh-r/bangla-gpt2 | ecd06078bc3b0d0970af08591944674b4e724240 | 2021-07-20T15:22:47.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"bn",
"transformers"
] | text-generation | false | ghosh-r | null | ghosh-r/bangla-gpt2 | 159 | 2 | transformers | 3,923 | ---
language: bn
tags:
- text-generation
widget:
- text: আজ একটি সুন্দর দিন এবং আমি
---
# Bangla-GPT2
### A GPT-2 Model for the Bengali Language
* Dataset- mc4 Bengali
* Training time- ~40 hours
* Written in- JAX
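### Usage

A minimal generation sketch with the 🤗 Transformers pipeline (the Bengali prompt below is the widget example from this card):

```python
from transformers import pipeline

# Load the model as a standard causal-LM text generator
generator = pipeline("text-generation", model="ghosh-r/bangla-gpt2")

# Continue a Bengali prompt
print(generator("আজ একটি সুন্দর দিন এবং আমি", max_length=50, num_return_sequences=1))
```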
If you use this model, please cite:
```
@misc{bangla-gpt2,
author = {Ritobrata Ghosh},
  year = {2021},
title = {Bangla GPT-2},
publisher = {Hugging Face}
}
``` |
Felix92/doctr-dummy-torch-db-resnet34 | 8c720afa96b1dd31a3899bc2094de75c23126a35 | 2022-04-14T08:51:36.000Z | [
"pytorch",
"en",
"transformers"
] | null | false | Felix92 | null | Felix92/doctr-dummy-torch-db-resnet34 | 159 | null | transformers | 3,924 |
---
language: en
---
<p align="center">
<img src="https://github.com/mindee/doctr/releases/download/v0.3.1/Logo_doctr.gif" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: detection
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
BigSalmon/InformalToFormalLincoln46 | 2be33d452be9aa10f4aff7d7546d80093db698be | 2022-05-23T20:10:14.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | BigSalmon | null | BigSalmon/InformalToFormalLincoln46 | 159 | null | transformers | 3,925 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln46")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln46")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
``` |
KM4STfulltext/SSCI-SciBERT-e2 | 94c8bf53e16f6a586c5fa7d105b628898bb2aeab | 2022-06-01T09:25:14.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | KM4STfulltext | null | KM4STfulltext/SSCI-SciBERT-e2 | 159 | 1 | transformers | 3,926 | ---
license: apache-2.0
---
# SSCI-BERT: A pretrained language model for social scientific text
## Introduction
Research on social science texts needs the support of natural language processing tools.
Pre-trained language models have greatly improved the accuracy of text mining in general-domain texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of scientific texts in the social sciences.
We used the abstracts of social science research articles as the training set. Based on the deep language model framework of BERT, we constructed the [SSCI-BERT and SSCI-SciBERT](https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) pre-trained language models with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py).
We designed four downstream text classification tasks on different social science article corpora to verify the performance of the models.
- SSCI-BERT and SSCI-SciBERT are trained on the abstracts of articles published in SSCI journals from 1986 to 2021. The training set used in the experiments included a total of `503910614 words`.
- Following the idea of domain-adaptive pretraining, `SSCI-BERT` and `SSCI-SciBERT` were obtained by continuing to train BERT and SciBERT, respectively, on a large collection of scientific-article abstracts, yielding pre-trained models for the automatic processing of social science research texts.
## News
- 2022-03-24: SSCI-BERT and SSCI-SciBERT have been put forward for the first time.
## How to use
### Huggingface Transformers
The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can be used to obtain the SSCI-BERT and SSCI-SciBERT models directly online.
- SSCI-BERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-BERT-e2")
```
- SSCI-SciBERT
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
model = AutoModel.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2")
```
### Download Models
- The version of the model we provide is `PyTorch`.
### From Huggingface
- Download directly through Huggingface's official website.
- [KM4STfulltext/SSCI-BERT-e2](https://huggingface.co/KM4STfulltext/SSCI-BERT-e2)
- [KM4STfulltext/SSCI-SciBERT-e2](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e2)
- [KM4STfulltext/SSCI-BERT-e4 ](https://huggingface.co/KM4STfulltext/SSCI-BERT-e4)
- [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4)
### From Google Drive
We have put the model on Google Drive for users.
| Model | DATASET(year) | Base Model |
| ------------------------------------------------------------ | ------------- | ---------------------- |
| [SSCI-BERT-e2](https://drive.google.com/drive/folders/1xEDnovlwGO2JxqCaf3rdjS2cB6DOxhj4?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e2](https://drive.google.com/drive/folders/16DtIvnHvbrR_92MwgthRRsULW6An9te1?usp=sharing) (recommended) | 1986-2021 | Scibert-scivocab-cased |
| [SSCI-BERT-e4](https://drive.google.com/drive/folders/1sr6Av8p904Jrjps37g7E8aj4HnAHXSxW?usp=sharing) | 1986-2021 | Bert-base-cased |
| [SSCI-SciBERT-e4](https://drive.google.com/drive/folders/1ty-b4TIFu8FbilgC4VcI7Bgn_O5MDMVe?usp=sharing) | 1986-2021 | Scibert-scivocab-cased |
## Evaluation & Results
- We use SSCI-BERT and SSCI-SciBERT to perform text classification on different social science research corpora. The experimental results are as follows. The relevant datasets are available for download in the **Verification task datasets** folder of this project.
#### JCR Title Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 28.43 | 22.06 | 21.86 |
| Scibert-scivocab-cased | 38.48 | 33.89 | 33.92 |
| SSCI-BERT-e2 | 40.43 | 35.37 | 35.33 |
| SSCI-SciBERT-e2 | 41.35 | 37.27 | 37.25 |
| SSCI-BERT-e4 | 40.65 | 35.49 | 35.40 |
| SSCI-SciBERT-e4 | 41.13 | 36.96 | 36.94 |
| Support | 2300 | 2300 | 2300 |
#### JCR Abstract Classify Dataset
| Model | accuracy | macro avg | weighted avg |
| ---------------------- | -------- | --------- | ------------ |
| Bert-base-cased | 48.59 | 42.8 | 42.82 |
| Scibert-scivocab-cased | 55.59 | 51.4 | 51.81 |
| SSCI-BERT-e2 | 58.05 | 53.31 | 53.73 |
| SSCI-SciBERT-e2 | 59.95 | 56.51 | 57.12 |
| SSCI-BERT-e4 | 59.00 | 54.97 | 55.59 |
| SSCI-SciBERT-e4 | 60.00 | 56.38 | 56.90 |
| Support | 2200 | 2200 | 2200 |
#### JCR Mixed Titles and Abstracts Dataset
| **Model** | **accuracy** | **macro avg** | **weighted avg** |
| ---------------------- | ------------ | -------------- | ----------------- |
| Bert-base-cased | 58.24 | 57.27 | 57.25 |
| Scibert-scivocab-cased | 59.58 | 58.65 | 58.68 |
| SSCI-BERT-e2 | 60.89 | 60.24 | 60.30 |
| SSCI-SciBERT-e2 | 60.96 | 60.54 | 60.51 |
| SSCI-BERT-e4 | 61.00 | 60.48 | 60.43 |
| SSCI-SciBERT-e4 | 61.24 | 60.71 | 60.75 |
| Support | 4500 | 4500 | 4500 |
#### SSCI Abstract Structural Function Recognition (Classify Dataset)
| | Bert-base-cased | SSCI-BERT-e2 | SSCI-BERT-e4 | support |
| ------------ | -------------------------- | ------------------- | ------------------- | ----------- |
| B | 63.77 | 64.29 | 64.63 | 224 |
| P | 53.66 | 57.14 | 57.99 | 95 |
| M | 87.63 | 88.43 | 89.06 | 323 |
| R | 86.81 | 88.28 | **88.47** | 419 |
| C | 78.32 | 79.82 | 78.95 | 316 |
| accuracy | 79.59 | 80.9 | 80.97 | 1377 |
| macro avg | 74.04 | 75.59 | 75.82 | 1377 |
| weighted avg | 79.02 | 80.32 | 80.44 | 1377 |
| | **Scibert-scivocab-cased** | **SSCI-SciBERT-e2** | **SSCI-SciBERT-e4** | **support** |
| B | 69.98 | **70.95** | **70.95** | 224 |
| P | 58.89 | **60.12** | 58.96 | 95 |
| M | 89.37 | **90.12** | 88.11 | 323 |
| R | 87.66 | 88.07 | 87.44 | 419 |
| C | 80.7 | 82.61 | **82.94** | 316 |
| accuracy | 81.63 | **82.72** | 82.06 | 1377 |
| macro avg | 77.32 | **78.37** | 77.68 | 1377 |
| weighted avg | 81.6 | **82.58** | 81.92 | 1377 |
## Cited
- If our content is helpful for your research work, please quote our research in your article.
- If you want to quote our research, you can use this url (https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) as an alternative before our paper is published.
## Disclaimer
- The experimental results presented in the report only show the performance under a specific data set and hyperparameter combination, and cannot represent the essence of each model. The experimental results may change due to random number seeds and computing equipment.
- **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.**
## Acknowledgment
- SSCI-BERT was trained based on [BERT-Base-Cased](https://github.com/google-research/bert).
- SSCI-SciBERT was trained based on [scibert-scivocab-cased](https://github.com/allenai/scibert).
|
Yehor/wav2vec2-xls-r-300m-uk-with-small-lm | bbd936400e7566ba44560440aa4abd05b5983c17 | 2022-07-30T08:51:01.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"transformers",
"license:apache-2.0"
] | automatic-speech-recognition | false | Yehor | null | Yehor/wav2vec2-xls-r-300m-uk-with-small-lm | 159 | 3 | transformers | 3,927 | ---
language:
- uk
license: "apache-2.0"
datasets:
- mozilla-foundation/common_voice_10_0
---
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model's output vocabulary includes apostrophes and hyphens.
The language model is trained on the texts of the same Common Voice dataset that is used to train the acoustic model.
Metrics:
| Dataset | CER | WER |
|-|-|-|
| CV7 (no LM) | 0.0432 | 0.2288 |
| CV7 (with LM) | 0.0169 | 0.0706 |
| CV10 (no LM) | 0.0412 | 0.2206 |
| CV10 (with LM) | 0.0118 | 0.0463 |
More:
- The same model, but trained on noisy data: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm-noisy
- Traced JIT version: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-traced-jit
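Usage (a minimal sketch; the audio path is a placeholder for a 16 kHz mono WAV file):

```python
from transformers import pipeline

# If the repository ships a Wav2Vec2ProcessorWithLM (as the with-LM metrics above suggest),
# the pipeline decodes with the bundled language model when pyctcdecode and kenlm are installed;
# otherwise it falls back to plain greedy CTC decoding.
asr = pipeline(
    "automatic-speech-recognition",
    model="Yehor/wav2vec2-xls-r-300m-uk-with-small-lm",
)

print(asr("sample_uk_16k.wav"))  # placeholder path
```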
|
pszemraj/distilgpt2-email-generation | 781faba8ab885a94e2a16efb027ce9b2d8ad8932 | 2022-07-21T09:16:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"dataset:aeslc",
"transformers",
"generated_from_trainer",
"email generation",
"email",
"license:apache-2.0"
] | text-generation | false | pszemraj | null | pszemraj/distilgpt2-email-generation | 159 | 1 | transformers | 3,928 | ---
license: apache-2.0
tags:
- generated_from_trainer
- email generation
- email
datasets:
- aeslc
widget:
- text: "Hey <NAME>,\n\nThank you for signing up for my weekly newsletter. Before we get started, you'll have to confirm your email address."
example_title: "newsletter"
- text: "Hi <NAME>,\n\nI hope this email finds you well. Let me start by saying that I am a big fan of your work."
example_title: "fan"
- text: "Greetings <NAME>,\n\nI hope you had a splendid evening at the Company sausage eating festival. I am reaching out because"
example_title: "festival"
- text: "Good Morning <NAME>,\n\nI was just thinking to myself about how much I love creating value"
example_title: "value"
- text: "URGENT - I need the TPS reports"
example_title: "URGENT"
- text: "Hi <NAME>,\n\nI hope this email finds you extremely well."
example_title: "emails find you"
parameters:
min_length: 4
max_length: 96
length_penalty: 0.7
no_repeat_ngram_size: 2
do_sample: False
num_beams: 8
early_stopping: True
repetition_penalty: 2.5
---
# distilgpt2-email-generation
Why write the rest of your email when you can generate it?
```
from transformers import pipeline
model_tag = "pszemraj/distilgpt2-email-generation"
generator = pipeline(
'text-generation',
model=model_tag,
use_fast=False,
do_sample=False,
early_stopping=True,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
generator(
prompt,
max_length=64,
) # generate
```
- A script to use this on CPU/command line can be found [here](https://gist.github.com/pszemraj/c1b0a76445418b6bbddd5f9633d1bb7f) :)
- A more performant (but slightly more compute intensive) version is also available: [gpt-medium](https://huggingface.co/pszemraj/gpt2-medium-email-generation)
> For this model, formatting matters. The results may be (significantly) different between the structure outlined above and `prompt = "Hey, just wanted to ..."` etc.
## Model description
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the `aeslc` dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8176
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1589 | 1.0 | 129 | 3.0496 |
| 2.9914 | 2.0 | 258 | 2.9224 |
| 2.8058 | 3.0 | 387 | 2.8449 |
| 2.6141 | 4.0 | 516 | 2.8214 |
| 2.6337 | 5.0 | 645 | 2.8109 |
| 2.5428 | 6.0 | 774 | 2.8176 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
Finnish-NLP/gpt2-finnish | 2150ab891ba64ee43caa054221e3a47cbcac95a8 | 2022-06-13T16:13:42.000Z | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"transformers",
"finnish",
"license:apache-2.0"
] | text-generation | false | Finnish-NLP | null | Finnish-NLP/gpt2-finnish | 158 | null | transformers | 3,929 | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- gpt2
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
widget:
- text: "Tekstiä tuottava tekoäly on"
---
# GPT-2 for Finnish
Pretrained GPT-2 model on Finnish language using a causal language modeling (CLM) objective. GPT-2 was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
**Note**: this model is the quite small 117M parameter variant, as in Huggingface's [GPT-2 config](https://huggingface.co/gpt2), not the famous big 1.5B parameter variant by OpenAI. We also have a bigger 345M parameter variant, [gpt2-medium-finnish](https://huggingface.co/Finnish-NLP/gpt2-medium-finnish), and a 774M parameter variant, [gpt2-large-finnish](https://huggingface.co/Finnish-NLP/gpt2-large-finnish), available, which perform better than this model.
## Model description
Finnish GPT-2 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the Finnish language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation:
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='Finnish-NLP/gpt2-finnish')
>>> generator("Tekstiä tuottava tekoäly on", max_length=30, num_return_sequences=5)
[{'generated_text': 'Tekstiä tuottava tekoäly on kuin onkin hyvin pieni. Sitä voi käyttää myös hyvin nopeasti ja myös täysin automatisoituna, eikä sitä tarvitse käydä läpi. Se'},
{'generated_text': 'Tekstiä tuottava tekoäly on saanut jalansijaa, mutta Suomessa se on jo ehtinyt hajota käsiin, koska sen avulla ei pystytä tuottamaan täysin ajantasaisia'},
{'generated_text': 'Tekstiä tuottava tekoäly on tehnyt työtä kymmenien vuosien ajan ja ottanut käyttöön jo yli kahden vuosikymmenen ajan tekoälyn ratkaisuja. Tekoäly on jo pitkään tehnyt työtä'},
{'generated_text': 'Tekstiä tuottava tekoäly on tekoälyn sovellus, jota käytetään esimerkiksi liiketoiminnan ja päätöksenteon tukena. Työhön liittyy data-analyysin ohella tekoälyn avulla esimerkiksi tekoäl'},
{'generated_text': 'Tekstiä tuottava tekoäly on juuri nyt erityisen hyödyllinen, koska se tunnistaa käyttäjän tietokoneen ruudulla olevat ilmoitukset, kuten näytön värin ja osoittimet ilman välkyn'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-finnish')
model = GPT2Model.from_pretrained('Finnish-NLP/gpt2-finnish')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('Finnish-NLP/gpt2-finnish')
model = TFGPT2Model.from_pretrained('Finnish-NLP/gpt2-finnish', from_pt=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
As with all language models, it is hard to predict in advance how the Finnish GPT-2 will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Training data
This Finnish GPT-2 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were cleaned to filter out bad quality and non-Finnish examples. Together these cleaned datasets were around 84GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 512 consecutive tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 300k steps (a bit over 2 epochs, 256 batch size). The optimizer used was a second-order optimization method called [Distributed Shampoo](https://github.com/google-research/google-research/tree/master/scalable_shampoo) with learning rate 1e-4, learning rate warmup for 4000 steps and cosine decay of the learning rate after.
At first, the commonly used Adam optimizer was tried, but there were significant issues getting the model to converge even with multiple different learning rates, so the Adam optimizer was replaced with Distributed Shampoo, which worked a lot better.
## Evaluation results
Evaluation was done using the *validation* split of the [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned) dataset with [Perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) (smaller score the better) as the evaluation metric. As seen from the table below, this model (the first row of the table) loses to our bigger model variants.
| | Perplexity |
|------------------------------------------|------------|
|Finnish-NLP/gpt2-finnish |44.19 |
|Finnish-NLP/gpt2-medium-finnish |34.08 |
|Finnish-NLP/gpt2-large-finnish |**30.74** |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
GermanT5/t5-efficient-oscar-german-small-el32 | 6958eca520ad7e815a550c274ba30cdee0ece68f | 2022-02-20T18:23:10.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | GermanT5 | null | GermanT5/t5-efficient-oscar-german-small-el32 | 158 | 1 | transformers | 3,930 | Entry not found |
cambridgeltl/trans-encoder-bi-simcse-roberta-base | 6771443bb2feaddc2b053721352d51f8f1ea7312 | 2021-10-18T13:29:56.000Z | [
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2109.13059",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/trans-encoder-bi-simcse-roberta-base | 158 | null | transformers | 3,931 | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
---
### cambridgeltl/trans-encoder-bi-simcse-roberta-base
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-roberta-base](https://huggingface.co/princeton-nlp/unsup-simcse-roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
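### Usage

A minimal sketch of extracting the `[CLS]` (pre-pooler) representation and scoring sentence similarity; the example sentences are illustrative only:

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "cambridgeltl/trans-encoder-bi-simcse-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["A cat sits on the mat.", "A feline is resting on a rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Take the first token (before the pooler) as the sentence representation
embeddings = outputs.last_hidden_state[:, 0]

# Cosine similarity between the two sentences
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```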
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
gayanin/bart-mlm-pubmed | 7fff9ce2b136cc41641bbe6696af37c44c359ffe | 2021-11-08T12:50:54.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | gayanin | null | gayanin/bart-mlm-pubmed | 158 | 1 | transformers | 3,932 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-mlm-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-mlm-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7223
- Rouge2 Precision: 0.6572
- Rouge2 Recall: 0.5164
- Rouge2 Fmeasure: 0.5662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.0322 | 1.0 | 663 | 0.7891 | 0.639 | 0.4989 | 0.5491 |
| 0.8545 | 2.0 | 1326 | 0.7433 | 0.6461 | 0.5057 | 0.5556 |
| 0.758 | 3.0 | 1989 | 0.7299 | 0.647 | 0.5033 | 0.5547 |
| 0.6431 | 4.0 | 2652 | 0.7185 | 0.6556 | 0.5101 | 0.5616 |
| 0.6058 | 5.0 | 3315 | 0.7126 | 0.6537 | 0.5144 | 0.5638 |
| 0.5726 | 6.0 | 3978 | 0.7117 | 0.6567 | 0.5169 | 0.5666 |
| 0.5168 | 7.0 | 4641 | 0.7150 | 0.6585 | 0.5154 | 0.566 |
| 0.5011 | 8.0 | 5304 | 0.7220 | 0.6568 | 0.5164 | 0.5664 |
| 0.4803 | 9.0 | 5967 | 0.7208 | 0.6573 | 0.5161 | 0.5662 |
| 0.4577 | 10.0 | 6630 | 0.7223 | 0.6572 | 0.5164 | 0.5662 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
sentence-transformers/nli-roberta-base | ba3a8b8db1132e2e02a6ae688552009b5b59825a | 2022-06-15T22:33:33.000Z | [
"pytorch",
"tf",
"roberta",
"feature-extraction",
"arxiv:1908.10084",
"sentence-transformers",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | sentence-similarity | false | sentence-transformers | null | sentence-transformers/nli-roberta-base | 158 | 1 | sentence-transformers | 3,933 | ---
pipeline_tag: sentence-similarity
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
**⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
# sentence-transformers/nli-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/nli-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/nli-roberta-base')
model = AutoModel.from_pretrained('sentence-transformers/nli-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/nli-roberta-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': True}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
2gud/DialogGPT-small-Koopsbot | ef66957b3a7baa70ba9652c76c7c49edb18975e9 | 2022-02-25T19:50:25.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | 2gud | null | 2gud/DialogGPT-small-Koopsbot | 158 | null | transformers | 3,934 | ---
tags:
- conversational
---
# Stinky doo doo |
microsoft/tapex-large | 6315ea450000e987c235f173a5c35a1b8d22208a | 2022-05-17T08:26:50.000Z | [
"pytorch",
"bart",
"text2text-generation",
"en",
"arxiv:2107.07653",
"transformers",
"tapex",
"table-question-answering",
"license:mit",
"autotrain_compatible"
] | table-question-answering | false | microsoft | null | microsoft/tapex-large | 158 | null | transformers | 3,935 | ---
language: en
tags:
- tapex
- table-question-answering
license: mit
---
# TAPEX (large-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
## Intended Uses
⚠️ This model checkpoint is **ONLY** intended for fine-tuning on downstream tasks, and you **CANNOT** use this model for simulating neural SQL execution, i.e., employing TAPEX to execute a SQL query on a given table. The checkpoint that can neurally execute SQL queries is available [here](https://huggingface.co/microsoft/tapex-large-sql-execution).
> This separation into two models for the two kinds of usage is due to a known issue in BART large, and we recommend readers see [this comment](https://github.com/huggingface/transformers/issues/15559#issuecomment-1062880564) for more details.
### How to Fine-tune
Please find the fine-tuning script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
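As a minimal loading sketch for starting downstream fine-tuning (assuming the repository ships the TAPEX tokenizer files; the table and question below are illustrative only):

```python
import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

# Load the pre-trained checkpoint as the starting point for fine-tuning
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large")

# Illustrative encoding of a (table, question) pair; the tokenizer linearizes the table
table = pd.DataFrame.from_dict({"year": [2008, 2012], "host city": ["Beijing", "London"]})
encoding = tokenizer(table=table, query="In which year did Beijing host the Olympic Games?", return_tensors="pt")
# Target answers would be tokenized separately as labels; see the fine-tuning script linked above.
```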
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` |
clearspandex/face-parsing | 1d02cdc745a2277e3738bef3bbd9328b4bee8b30 | 2022-07-06T04:17:33.000Z | [
"pytorch",
"segformer",
"en",
"dataset:celebamaskhq",
"transformers",
"vision",
"image-segmentation",
"nvidia/mit-b5",
"license:cc0-1.0"
] | image-segmentation | false | clearspandex | null | clearspandex/face-parsing | 158 | 1 | transformers | 3,936 | ---
language: en
license: cc0-1.0
library_name: transformers
tags:
- vision
- image-segmentation
- nvidia/mit-b5
datasets:
- celebamaskhq
---
## Face Parsing |
Akashpb13/Central_kurdish_xlsr | f56917ace1e434a0daaebd363259775ef29ce1d6 | 2022-03-24T11:52:44.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ckb",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | Akashpb13 | null | Akashpb13/Central_kurdish_xlsr | 157 | 2 | transformers | 3,937 | ---
language:
- ckb
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- ckb
- robust-speech-event
- model_for_talk
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: Akashpb13/Central_kurdish_xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: ckb
metrics:
- name: Test WER
type: wer
value: 0.36754389884276845
- name: Test CER
type: cer
value: 0.07827896768334217
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ckb
metrics:
- name: Test WER
type: wer
value: 0.36754389884276845
- name: Test CER
type: cer
value: 0.07827896768334217
---
# Akashpb13/Central_kurdish_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ckb dataset.
It achieves the following results on the evaluation set (a 10 percent split of the training data merged with the invalidated, reported, other, and dev datasets):
- Loss: 0.348580
- Wer: 0.401147
## Model description
"facebook/wav2vec2-xls-r-300m" was finetuned.
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data -
Common Voice Central Kurdish train.tsv, dev.tsv, invalidated.tsv, reported.tsv, and other.tsv.
Only those data points were kept where upvotes were greater than downvotes, and duplicates were removed after concatenating all the datasets provided in Common Voice 7.0.
## Training procedure
To create the training dataset, all the available datasets were concatenated and a 90-10 train-evaluation split was used.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000095637994662983496
- train_batch_size: 16
- eval_batch_size: 16
- seed: 13
- gradient_accumulation_steps: 2
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 200
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Step | Training Loss | Validation Loss | Wer |
|-------|---------------|-----------------|----------|
| 500 | 5.097800 | 2.190326 | 1.001207 |
| 1000 | 0.797500 | 0.331392 | 0.576819 |
| 1500 | 0.405100 | 0.262009 | 0.549049 |
| 2000 | 0.322100 | 0.248178 | 0.479626 |
| 2500 | 0.264600 | 0.258866 | 0.488983 |
| 3000 | 0.228300 | 0.261523 | 0.469665 |
| 3500 | 0.201000 | 0.270135 | 0.451856 |
| 4000 | 0.180900 | 0.279302 | 0.448536 |
| 4500 | 0.163800 | 0.280921 | 0.459704 |
| 5000 | 0.147300 | 0.319249 | 0.471778 |
| 5500 | 0.137600 | 0.289546 | 0.449140 |
| 6000 | 0.132000 | 0.311350 | 0.458195 |
| 6500 | 0.117100 | 0.316726 | 0.432840 |
| 7000 | 0.109200 | 0.302210 | 0.439481 |
| 7500 | 0.104900 | 0.325913 | 0.439481 |
| 8000 | 0.097500 | 0.329446 | 0.431935 |
| 8500 | 0.088600 | 0.345259 | 0.425898 |
| 9000 | 0.084900 | 0.342891 | 0.428313 |
| 9500 | 0.080900 | 0.353081 | 0.424389 |
| 10000 | 0.075600 | 0.347063 | 0.424992 |
| 10500 | 0.072800 | 0.330086 | 0.424691 |
| 11000 | 0.068100 | 0.350658 | 0.421974 |
| 11500 | 0.064700 | 0.342949 | 0.413522 |
| 12000 | 0.061500 | 0.341704 | 0.415334 |
| 12500 | 0.059500 | 0.346279 | 0.411410 |
| 13000 | 0.057400 | 0.349901 | 0.407184 |
| 13500 | 0.056400 | 0.347733 | 0.402656 |
| 14000 | 0.053300 | 0.344899 | 0.405976 |
| 14500 | 0.052900 | 0.346708 | 0.402656 |
| 15000 | 0.050600 | 0.344118 | 0.400845 |
| 15500 | 0.050200 | 0.348396 | 0.402958 |
| 16000 | 0.049800 | 0.348312 | 0.401751 |
| 16500 | 0.051900 | 0.348372 | 0.401147 |
| 17000 | 0.049800 | 0.348580 | 0.401147 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.18.1
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id Akashpb13/Central_kurdish_xlsr --dataset mozilla-foundation/common_voice_8_0 --config ckb --split test
```
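#### Inference Example

A minimal transcription sketch (the audio path is a placeholder for a 16 kHz mono file); without an external language model the pipeline uses greedy CTC decoding:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Akashpb13/Central_kurdish_xlsr",
)

print(asr("example_ckb_16k.wav"))  # placeholder path
```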
|
Geotrend/bert-base-es-cased | 3277130ede0919265f4975daa5b414de13b82faa | 2021-05-18T19:55:05.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"es",
"dataset:wikipedia",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | Geotrend | null | Geotrend/bert-base-es-cased | 157 | 1 | transformers | 3,938 | ---
language: es
datasets: wikipedia
license: apache-2.0
---
# bert-base-es-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-es-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-es-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Mutlilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
Ninja5000/DialoGPT-medium-TWEWYJoshua | 32f77a12b5d30cbb809e62c16a090eabcf2da2ec | 2022-02-23T10:47:52.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | false | Ninja5000 | null | Ninja5000/DialoGPT-medium-TWEWYJoshua | 157 | 1 | transformers | 3,939 | ---
tags:
- conversational
---
# DialoGPT-medium-TWEWYJoshua
Another not-so-good AI chatbot. Joshua from the game TWEWY(The World Ends With You).
* Credits to Lynn's Devlab who made the amazing tutorial. |
alon-albalak/bert-base-multilingual-xquad | 7fc22c05cdaf03aa12a38008017a61641e4b21de | 2021-11-05T20:25:43.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:xquad",
"transformers",
"multilingual",
"autotrain_compatible"
] | question-answering | false | alon-albalak | null | alon-albalak/bert-base-multilingual-xquad | 157 | null | transformers | 3,940 | ---
tags:
- multilingual
datasets:
- xquad
---
# bert-base-multilingual-uncased for multilingual QA
# Overview
**Language Model**: bert-base-multilingual-uncased \
**Downstream task**: Extractive QA \
**Training data**: [XQuAD](https://github.com/deepmind/xquad) \
**Testing Data**: [XQuAD](https://github.com/deepmind/xquad)
# Hyperparameters
```python
batch_size = 48
n_epochs = 6
max_seq_len = 384
doc_stride = 128
learning_rate = 3e-5
```
# Performance
Evaluated on held-out test set from XQuAD
```python
"exact_match": 64.6067415730337,
"f1": 79.52043478874286,
"test_samples": 2384
```
# Usage
## In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "alon-albalak/bert-base-multilingual-xquad"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import QAInferencer
model_name = "alon-albalak/bert-base-multilingual-xquad"
# a) Get predictions
nlp = QAInferencer.load(model_name)
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
## In Haystack
```python
reader = FARMReader(model_name_or_path="alon-albalak/bert-base-multilingual-xquad")
# or
reader = TransformersReader(model="alon-albalak/bert-base-multilingual-xquad",tokenizer="alon-albalak/bert-base-multilingual-xquad")
```
Usage instructions for FARM and Haystack were adopted from https://huggingface.co/deepset/xlm-roberta-large-squad2 |
qanastek/XLMRoberta-Alexa-Intents-Classification | 2996b73c03bafabf219d727d2967188cd9c38981 | 2022-05-05T00:52:15.000Z | [
"pytorch",
"dataset:qanastek/MASSIVE",
"Transformers",
"text-classification",
"intent-classification",
"multi-class-classification",
"natural-language-understanding",
"license:cc-by-4.0"
] | text-classification | false | qanastek | null | qanastek/XLMRoberta-Alexa-Intents-Classification | 157 | 1 | null | 3,941 | ---
tags:
- Transformers
- text-classification
- intent-classification
- multi-class-classification
- natural-language-understanding
languages:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
multilinguality:
- af-ZA
- am-ET
- ar-SA
- az-AZ
- bn-BD
- cy-GB
- da-DK
- de-DE
- el-GR
- en-US
- es-ES
- fa-IR
- fi-FI
- fr-FR
- he-IL
- hi-IN
- hu-HU
- hy-AM
- id-ID
- is-IS
- it-IT
- ja-JP
- jv-ID
- ka-GE
- km-KH
- kn-IN
- ko-KR
- lv-LV
- ml-IN
- mn-MN
- ms-MY
- my-MM
- nb-NO
- nl-NL
- pl-PL
- pt-PT
- ro-RO
- ru-RU
- sl-SL
- sq-AL
- sv-SE
- sw-KE
- ta-IN
- te-IN
- th-TH
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-TW
datasets:
- qanastek/MASSIVE
widget:
- text: "wake me up at five am this week"
- text: "je veux écouter la chanson de jacques brel encore une fois"
- text: "quiero escuchar la canción de arijit singh una vez más"
- text: "olly onde é que á um parque por perto onde eu possa correr"
- text: "פרק הבא בפודקאסט בבקשה"
- text: "亚马逊股价"
- text: "найди билет на поезд в санкт-петербург"
license: cc-by-4.0
---
**People Involved**
* [LABRAK Yanis](https://www.linkedin.com/in/yanis-labrak-8a7412145/) (1)
**Affiliations**
1. [LIA, NLP team](https://lia.univ-avignon.fr/), Avignon University, Avignon, France.
## Demo: How to use in HuggingFace Transformers Pipeline
Requires [transformers](https://pypi.org/project/transformers/): ```pip install transformers```
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline
model_name = 'qanastek/XLMRoberta-Alexa-Intents-Classification'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)
res = classifier("réveille-moi à neuf heures du matin le vendredi")
print(res)
```
Outputs:
```python
[{'label': 'alarm_set', 'score': 0.9998375177383423}]
```
## Training data
[MASSIVE](https://huggingface.co/datasets/qanastek/MASSIVE) is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions.
## Intents
* audio_volume_other
* play_music
* iot_hue_lighton
* general_greet
* calendar_set
* audio_volume_down
* social_query
* audio_volume_mute
* iot_wemo_on
* iot_hue_lightup
* audio_volume_up
* iot_coffee
* takeaway_query
* qa_maths
* play_game
* cooking_query
* iot_hue_lightdim
* iot_wemo_off
* music_settings
* weather_query
* news_query
* alarm_remove
* social_post
* recommendation_events
* transport_taxi
* takeaway_order
* music_query
* calendar_query
* lists_query
* qa_currency
* recommendation_movies
* general_joke
* recommendation_locations
* email_querycontact
* lists_remove
* play_audiobook
* email_addcontact
* lists_createoradd
* play_radio
* qa_stock
* alarm_query
* email_sendemail
* general_quirky
* music_likeness
* cooking_recipe
* email_query
* datetime_query
* transport_traffic
* play_podcasts
* iot_hue_lightchange
* calendar_remove
* transport_query
* transport_ticket
* qa_factoid
* iot_cleaning
* alarm_set
* datetime_convert
* iot_hue_lightoff
* qa_definition
* music_dislikeness
## Evaluation results
```plain
precision recall f1-score support
alarm_query 0.9661 0.9037 0.9338 1734
alarm_remove 0.9484 0.9608 0.9545 1071
alarm_set 0.8611 0.9254 0.8921 2091
audio_volume_down 0.8657 0.9537 0.9075 561
audio_volume_mute 0.8608 0.9130 0.8861 1632
audio_volume_other 0.8684 0.5392 0.6653 306
audio_volume_up 0.7198 0.8446 0.7772 663
calendar_query 0.7555 0.8229 0.7878 6426
calendar_remove 0.8688 0.9441 0.9049 3417
calendar_set 0.9092 0.9014 0.9053 10659
cooking_query 0.0000 0.0000 0.0000 0
cooking_recipe 0.9282 0.8592 0.8924 3672
datetime_convert 0.8144 0.7686 0.7909 765
datetime_query 0.9152 0.9305 0.9228 4488
email_addcontact 0.6482 0.8431 0.7330 612
email_query 0.9629 0.9319 0.9472 6069
email_querycontact 0.6853 0.8032 0.7396 1326
email_sendemail 0.9530 0.9381 0.9455 5814
general_greet 0.1026 0.3922 0.1626 51
general_joke 0.9305 0.9123 0.9213 969
general_quirky 0.6984 0.5417 0.6102 8619
iot_cleaning 0.9590 0.9359 0.9473 1326
iot_coffee 0.9304 0.9749 0.9521 1836
iot_hue_lightchange 0.8794 0.9374 0.9075 1836
iot_hue_lightdim 0.8695 0.8711 0.8703 1071
iot_hue_lightoff 0.9440 0.9229 0.9334 2193
iot_hue_lighton 0.4545 0.5882 0.5128 153
iot_hue_lightup 0.9271 0.8315 0.8767 1377
iot_wemo_off 0.9615 0.8715 0.9143 918
iot_wemo_on 0.8455 0.7941 0.8190 510
lists_createoradd 0.8437 0.8356 0.8396 1989
lists_query 0.8918 0.8335 0.8617 2601
lists_remove 0.9536 0.8601 0.9044 2652
music_dislikeness 0.7725 0.7157 0.7430 204
music_likeness 0.8570 0.8159 0.8359 1836
music_query 0.8667 0.8050 0.8347 1785
music_settings 0.4024 0.3301 0.3627 306
news_query 0.8343 0.8657 0.8498 6324
play_audiobook 0.8172 0.8125 0.8149 2091
play_game 0.8666 0.8403 0.8532 1785
play_music 0.8683 0.8845 0.8763 8976
play_podcasts 0.8925 0.9125 0.9024 3213
play_radio 0.8260 0.8935 0.8585 3672
qa_currency 0.9459 0.9578 0.9518 1989
qa_definition 0.8638 0.8552 0.8595 2907
qa_factoid 0.7959 0.8178 0.8067 7191
qa_maths 0.8937 0.9302 0.9116 1275
qa_stock 0.7995 0.9412 0.8646 1326
recommendation_events 0.7646 0.7702 0.7674 2193
recommendation_locations 0.7489 0.8830 0.8104 1581
recommendation_movies 0.6907 0.7706 0.7285 1020
social_post 0.9623 0.9080 0.9344 4131
social_query 0.8104 0.7914 0.8008 1275
takeaway_order 0.7697 0.8458 0.8059 1122
takeaway_query 0.9059 0.8571 0.8808 1785
transport_query 0.8141 0.7559 0.7839 2601
transport_taxi 0.9222 0.9403 0.9312 1173
transport_ticket 0.9259 0.9384 0.9321 1785
transport_traffic 0.6919 0.9660 0.8063 765
weather_query 0.9387 0.9492 0.9439 7956
accuracy 0.8617 151674
macro avg 0.8162 0.8273 0.8178 151674
weighted avg 0.8639 0.8617 0.8613 151674
```
|
anas-awadalla/opt-125m-squad | 19bbcf710cb5f0b4db6430d8a537321e3bf5cc9e | 2022-06-25T23:56:38.000Z | [
"pytorch",
"opt",
"text-generation",
"transformers"
] | text-generation | false | anas-awadalla | null | anas-awadalla/opt-125m-squad | 157 | null | transformers | 3,942 | A facebook/opt-125m model trained on SQUAD for extractive question answering.
To use the model, format the input in the following manner:
"(Context Text)\nQuestion:(Question Text)\nAnswer:"
|
BukaByaka/opus-mt-ru-en-finetuned-ru-to-en | c9f616769dd988a6951dd6bb1c8939bf82574a7c | 2022-06-27T14:05:53.000Z | [
"pytorch",
"marian",
"text2text-generation",
"dataset:wmt16",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | BukaByaka | null | BukaByaka/opus-mt-ru-en-finetuned-ru-to-en | 157 | null | transformers | 3,943 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-ru-en-finetuned-ru-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ru-en
metrics:
- name: Bleu
type: bleu
value: 30.4049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned-ru-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4092
- Bleu: 30.4049
- Gen Len: 26.3911
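A minimal usage sketch (not part of the auto-generated card): it assumes the fine-tuned Marian checkpoint loads with the standard `translation` pipeline, and the Russian input sentence is a made-up example.
```python
from transformers import pipeline

# Load the fine-tuned Marian checkpoint with the generic translation pipeline.
translator = pipeline("translation", model="BukaByaka/opus-mt-ru-en-finetuned-ru-to-en")

# Made-up Russian input sentence ("I love machine learning.").
result = translator("Я люблю машинное обучение.", max_length=64)
print(result[0]["translation_text"])
```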
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.2606 | 1.0 | 94761 | 1.4092 | 30.4049 | 26.3911 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0.post202
- Datasets 2.3.2
- Tokenizers 0.12.1
|
m-salman-a/wav2vec2-xlsr-53-common-voice-indonesian | 4ce6a399d7b599ea4623ba4c6ca21c1c86f8b89b | 2022-07-05T16:37:46.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"id",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers"
] | automatic-speech-recognition | false | m-salman-a | null | m-salman-a/wav2vec2-xlsr-53-common-voice-indonesian | 157 | 1 | transformers | 3,944 | ---
language: id
datasets:
- mozilla-foundation/common_voice_8_0
metrics:
- wer
---
# wav2vec 2.0 XLSR-53 Model
This is the [wav2vec 2.0 XLSR-53 model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) fine-tuned on the [Common Voice 8.0 datasets](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) for Bahasa Indonesia using the `train`, `validation`, and `other` splits (~32,000 sound samples). This model was used for research purposes to complete my Undergraduate Thesis.
## Preprocessing
1. Removal of symbols from transcript
2. Convert numbers (0, 1, ..., 9) to word forms (satu, dua, ..., sembilan)
3. Convert all characters to lowercase
4. Resample the audio data to 16 kHz.
5. Use the data collator from [this example](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)
## Hyperparameters used
- Learning rate = 1e-4
- Maximum Epochs = 30
- Batch size = 4 (limitations of compute resource)
- Early stopping = Stop when WER doesn't improve for 2 validations
- Other parameters use the defaults from [this config](https://huggingface.co/docs/transformers/v4.20.1/en/model_doc/wav2vec2#overview)
## Results
The results are an average of 5 runs using the `test` split from the Common Voice datasets for Bahasa Indonesia.
**Test Result: 15.6% WER**
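A minimal transcription sketch is shown below (not part of the original card). It assumes a 16 kHz mono waveform; `audio.wav` is a placeholder path.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "m-salman-a/wav2vec2-xlsr-53-common-voice-indonesian"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# "audio.wav" is a placeholder; the model expects 16 kHz mono input.
speech, sample_rate = torchaudio.load("audio.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```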
## References
- [Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)
- [Wav2Vec2-Large-XLSR-Indonesian by Indonesian NLP](https://huggingface.co/indonesian-nlp/wav2vec2-large-xlsr-indonesian-baseline)
|
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | 4d6fbcde0baa8982501145858cfb1b3ea1dbf86e | 2021-10-17T13:35:38.000Z | [
"pytorch",
"tf",
"bert",
"text-classification",
"ar",
"arxiv:2103.06678",
"transformers",
"license:apache-2.0"
] | text-classification | false | CAMeL-Lab | null | CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5 | 156 | null | transformers | 3,945 | ---
language:
- ar
license: apache-2.0
widget:
- text: "عامل ايه ؟"
---
# CAMeLBERT-MSA DID MADAR Twitter-5 Model
## Model description
**CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model.
For the fine-tuning, we used the [MADAR Twitter-5](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 21 labels.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5')
>>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟']
>>> did(sentences)
[{'label': 'Egypt', 'score': 0.5741344094276428},
{'label': 'Kuwait', 'score': 0.5225679278373718}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
``` |
Helsinki-NLP/opus-mt-zh-vi | 1f087ae316710c9fb781049f0bb16d91dc054b9d | 2020-08-21T14:42:52.000Z | [
"pytorch",
"marian",
"text2text-generation",
"zh",
"vi",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-zh-vi | 156 | null | transformers | 3,946 | ---
language:
- zh
- vi
tags:
- translation
license: apache-2.0
---
### zho-vie
* source group: Chinese
* target group: Vietnamese
* OPUS readme: [zho-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md)
* model: transformer-align
* source language(s): cmn_Hani cmn_Latn
* target language(s): vie
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.zho.vie | 20.0 | 0.385 |
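An illustrative usage sketch (not part of the original OPUS-MT card), loading the checkpoint with the standard Marian classes; the Chinese input sentence is a made-up example.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-vi"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Made-up Chinese input sentence ("I like learning foreign languages.").
batch = tokenizer(["我喜欢学习外语。"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```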
### System Info:
- hf_name: zho-vie
- source_languages: zho
- target_languages: vie
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-vie/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['zh', 'vi']
- src_constituents: {'cmn_Hans', 'nan', 'nan_Hani', 'gan', 'yue', 'cmn_Kana', 'yue_Hani', 'wuu_Bopo', 'cmn_Latn', 'yue_Hira', 'cmn_Hani', 'cjy_Hans', 'cmn', 'lzh_Hang', 'lzh_Hira', 'cmn_Hant', 'lzh_Bopo', 'zho', 'zho_Hans', 'zho_Hant', 'lzh_Hani', 'yue_Hang', 'wuu', 'yue_Kana', 'wuu_Latn', 'yue_Bopo', 'cjy_Hant', 'yue_Hans', 'lzh', 'cmn_Hira', 'lzh_Yiii', 'lzh_Hans', 'cmn_Bopo', 'cmn_Hang', 'hak_Hani', 'cmn_Yiii', 'yue_Hant', 'lzh_Kana', 'wuu_Hani'}
- tgt_constituents: {'vie', 'vie_Hani'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zho-vie/opus-2020-06-17.test.txt
- src_alpha3: zho
- tgt_alpha3: vie
- short_pair: zh-vi
- chrF2_score: 0.385
- bleu: 20.0
- brevity_penalty: 0.917
- ref_len: 4667.0
- src_name: Chinese
- tgt_name: Vietnamese
- train_date: 2020-06-17
- src_alpha2: zh
- tgt_alpha2: vi
- prefer_old: False
- long_pair: zho-vie
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Shahm/bart-german | d22d8abba8b2635c5c46ffe048e7bc166b0ea574 | 2021-12-27T09:19:35.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"dataset:mlsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | Shahm | null | Shahm/bart-german | 156 | null | transformers | 3,947 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
metrics:
- rouge
model-index:
- name: mode-bart-deutsch
results:
- task:
name: Summarization
type: summarization
dataset:
name: mlsum de
type: mlsum
args: de
metrics:
- name: Rouge1
type: rouge
value: 41.698
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mode-bart-deutsch
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the mlsum de dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Rouge1: 41.698
- Rouge2: 31.3548
- Rougel: 38.2817
- Rougelsum: 39.6349
- Gen Len: 63.1723
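A minimal usage sketch (not part of the auto-generated card): it assumes the checkpoint works with the standard `summarization` pipeline, and the German input sentence is a made-up placeholder.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Shahm/bart-german")

# Placeholder German input text to be summarized.
text = "Hier steht ein längerer deutscher Nachrichtentext, der zusammengefasst werden soll."
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```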
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tner/xlm-roberta-base-uncased-mit-restaurant | 1a0155a856b5d6ebfac771e757fe9c093ec2c1fb | 2021-02-12T23:47:38.000Z | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | false | tner | null | tner/xlm-roberta-base-uncased-mit-restaurant | 156 | null | transformers | 3,948 | # XLM-RoBERTa for NER
XLM-RoBERTa fine-tuned for NER. See more details at the [TNER repository](https://github.com/asahi417/tner).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-restaurant")
model = AutoModelForTokenClassification.from_pretrained("asahi417/tner-xlm-roberta-base-uncased-mit-restaurant")
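# Example inference with the token-classification pipeline (illustrative addition,
# not from the original card; the label set comes from the MIT Restaurant data).
from transformers import pipeline
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("a cheap italian restaurant near downtown open until midnight"))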
``` |
cambridgeltl/simctg_wikitext103 | 16525d5b754781f592a17b9be43bc44ae7eb0318 | 2022-06-25T19:22:06.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:1609.07843",
"arxiv:2202.06417",
"transformers"
] | text-generation | false | cambridgeltl | null | cambridgeltl/simctg_wikitext103 | 156 | 1 | transformers | 3,949 | This model provides a GPT-2 language model trained with SimCTG on the Wikitext-103 benchmark [(Merity et al., 2016)](https://arxiv.org/abs/1609.07843) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```yaml
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_wikitext103'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
```
## 3. Prepare the Text Prefix:
```python
prefix_text = r"Butt criticized Donald 's controls in certain situations in the game , as well as the difficulty of some levels and puzzles .
Buchanan also criticized the controls , calling"
print ('Prefix is: {}'.format(prefix_text))
tokens = tokenizer.tokenize(prefix_text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 8, 0.6, 128
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output))
'''
Prefix is: Butt criticized Donald 's controls in certain situations in the game , as well as the difficulty of some levels and puzzles . Buchanan also criticized the controls , calling
Output:
----------------------------------------------------------------------------------------------------
Butt criticized Donald's controls in certain situations in the game, as well as the difficulty of some levels and puzzles. Buchanan also
criticized the controls, calling them " unimpressive " and a " nightmare " of an experience to play with players unfamiliar with Tetris.
On the other hand, his opinion was shared by other reviewers, and some were critical of the game's technical design for the Wii version
of Tetris. In addition, Tintin's review included a quote from Roger Ebert, who said that Tetris was better than the original game due to
its simplicity and ease of play. Ebert's comments were included in the game's DVD commentary, released on March 22, 2010. It is unclear
if any of the video commentary was taken from the DVD
'''
```
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|
huggingtweets/gordonramsay | 37186569732e438f379c947461b759e5447b5f89 | 2021-05-22T05:57:14.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/gordonramsay | 156 | null | transformers | 3,950 | ---
language: en
thumbnail: https://www.huggingtweets.com/gordonramsay/1614174227495/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1349755150316040194/VpUCtbH8_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Gordon Ramsay 🤖 AI Bot </div>
<div style="font-size: 15px">@gordonramsay bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@gordonramsay's tweets](https://twitter.com/gordonramsay).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 269 |
| Short tweets | 206 |
| Tweets kept | 2771 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/27mcq63k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gordonramsay's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12n07etn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12n07etn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gordonramsay')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
persiannlp/mt5-large-parsinlu-qqp-query-paraphrasing | 65fc3eace893cbef0f81b11fdf8512633b4c7023 | 2021-09-23T16:20:19.000Z | [
"pytorch",
"t5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"dataset:qqp",
"transformers",
"query-paraphrasing",
"mt5",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-large-parsinlu-qqp-query-paraphrasing | 156 | null | transformers | 3,951 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- query-paraphrasing
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
- qqp
metrics:
- accuracy
---
# Detection of Paraphrased Queries (تشخصیص سوالات هممعنی)
This is a model for detection of paraphrased queries.
Here is an example of how you can run this model:
```python
from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer
model_name = "persiannlp/mt5-large-parsinlu-qqp-query-paraphrasing"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(q1, q2, **generator_args):
input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟")
run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟")
run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
pile-of-law/legalbert-large-1.7M-2 | af1320d25bbc8a578cb04f2e5434f3762e5fcb93 | 2022-07-04T07:28:01.000Z | [
"pytorch",
"bert",
"en",
"dataset:pile-of-law/pile-of-law",
"arxiv:1907.11692",
"arxiv:1810.04805",
"arxiv:2110.00976",
"arxiv:2207.00220",
"transformers",
"fill-mask"
] | fill-mask | false | pile-of-law | null | pile-of-law/legalbert-large-1.7M-2 | 156 | 2 | transformers | 3,952 | ---
language:
- en
datasets:
- pile-of-law/pile-of-law
pipeline_tag: fill-mask
---
# Pile of Law BERT large model 2 (uncased)
Pretrained model on English language legal and administrative text using the [RoBERTa](https://arxiv.org/abs/1907.11692) pretraining objective. This model was trained with the same setup as [pile-of-law/legalbert-large-1.7M-1](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1), but with a different seed.
## Model description
Pile of Law BERT large 2 is a transformers model with the [BERT large model (uncased)](https://huggingface.co/bert-large-uncased) architecture pretrained on the [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law), a dataset consisting of ~256GB of English language legal and administrative text for language model pretraining.
## Intended uses & limitations
You can use the raw model for masked language modeling or fine-tune it for a downstream task. Since this model was pretrained on a English language legal and administrative text corpus, legal downstream tasks will likely be more in-domain for this model.
## How to use
You can use the model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task='fill-mask', model='pile-of-law/legalbert-large-1.7M-2')
>>> pipe("An [MASK] is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.")
[{'sequence': 'an exception is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.5218929052352905,
'token': 4028,
'token_str': 'exception'},
{'sequence': 'an appeal is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.11434809118509293,
'token': 1151,
'token_str': 'appeal'},
{'sequence': 'an exclusion is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.06454459577798843,
'token': 5345,
'token_str': 'exclusion'},
{'sequence': 'an example is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.043593790382146835,
'token': 3677,
'token_str': 'example'},
{'sequence': 'an objection is a request made after a trial by a party that has lost on one or more issues that a higher court review the decision to determine if it was correct.',
'score': 0.03758585825562477,
'token': 3542,
'token_str': 'objection'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
model = BertModel.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
model = TFBertModel.from_pretrained('pile-of-law/legalbert-large-1.7M-2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Limitations and bias
Please see Appendix G of the Pile of Law paper for copyright limitations related to dataset and model use.
This model can have biased predictions. In the following example where the model is used with a pipeline for masked language modeling, for the race descriptor of the criminal, the model predicts a higher score for "black" than "white".
```python
>>> from transformers import pipeline
>>> pipe = pipeline(task='fill-mask', model='pile-of-law/legalbert-large-1.7M-2')
>>> pipe("The transcript of evidence reveals that at approximately 7:30 a. m. on January 22, 1973, the prosecutrix was awakened in her home in DeKalb County by the barking of the family dog, and as she opened her eyes she saw a [MASK] man standing beside her bed with a gun.", targets=["black", "white"])
[{'sequence': 'the transcript of evidence reveals that at approximately 7 : 30 a. m. on january 22, 1973, the prosecutrix was awakened in her home in dekalb county by the barking of the family dog, and as she opened her eyes she saw a black man standing beside her bed with a gun.',
'score': 0.02685137465596199,
'token': 4311,
'token_str': 'black'},
{'sequence': 'the transcript of evidence reveals that at approximately 7 : 30 a. m. on january 22, 1973, the prosecutrix was awakened in her home in dekalb county by the barking of the family dog, and as she opened her eyes she saw a white man standing beside her bed with a gun.',
'score': 0.013632853515446186,
'token': 4249,
'token_str': 'white'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Pile of Law BERT large model was pretrained on the Pile of Law, a dataset consisting of ~256GB of English language legal and administrative text for language model pretraining. The Pile of Law consists of 35 data sources, including legal analyses, court opinions and filings, government agency publications, contracts, statutes, regulations, casebooks, etc. We describe the data sources in detail in Appendix E of the Pile of Law paper. The Pile of Law dataset is placed under a CreativeCommons Attribution-NonCommercial-ShareAlike 4.0 International license.
## Training procedure
### Preprocessing
The model vocabulary consists of 29,000 tokens from a custom word-piece vocabulary fit to Pile of Law using the [HuggingFace WordPiece tokenizer](https://github.com/huggingface/tokenizers) and 3,000 randomly sampled legal terms from Black's Law Dictionary, for a vocabulary size of 32,000 tokens. The 80-10-10 masking, corruption, leave split, as in [BERT](https://arxiv.org/abs/1810.04805), is used, with a replication rate of 20 to create different masks for each context. To generate sequences, we use the [LexNLP sentence segmenter](https://github.com/LexPredict/lexpredict-lexnlp), which handles sentence segmentation for legal citations (which are often falsely mistaken as sentences). The input is formatted by filling sentences until they comprise 256 tokens, followed by a [SEP] token, and then filling sentences such that the entire span is under 512 tokens. If the next sentence in the series is too large, it is not added, and the remaining context length is filled with padding tokens.
### Pretraining
The model was trained on a SambaNova cluster, with 8 RDUs, for 1.7 million steps. We used a smaller learning rate of 5e-6 and batch size of 128, to mitigate training instability, potentially due to the diversity of sources in our training data. The masked language modeling (MLM) objective without NSP loss, as described in [RoBERTa](https://arxiv.org/abs/1907.11692), was used for pretraining. The model was pretrained with 512 length sequence lengths for all steps.
We trained two models with the same setup in parallel model training runs, with different random seeds. We selected the lowest log likelihood model, [pile-of-law/legalbert-large-1.7M-1](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1), which we refer to as PoL-BERT-Large, for experiments, but also release the second model, [pile-of-law/legalbert-large-1.7M-2](https://huggingface.co/pile-of-law/legalbert-large-1.7M-2).
## Evaluation results
See the model card for [pile-of-law/legalbert-large-1.7M-1](https://huggingface.co/pile-of-law/legalbert-large-1.7M-1) for finetuning results on the CaseHOLD variant provided by the [LexGLUE paper](https://arxiv.org/abs/2110.00976).
### BibTeX entry and citation info
```bibtex
@misc{hendersonkrass2022pileoflaw,
url = {https://arxiv.org/abs/2207.00220},
author = {Henderson, Peter and Krass, Mark S. and Zheng, Lucia and Guha, Neel and Manning, Christopher D. and Jurafsky, Dan and Ho, Daniel E.},
title = {Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset},
publisher = {arXiv},
year = {2022}
}
``` |
T-Systems-onsite/bert-german-dbmdz-uncased-sentence-stsb | 8870fda4e0c2ba2c2fe5fd1ea908e2cfa5857946 | 2021-05-18T22:43:16.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"de",
"transformers",
"license:mit"
] | feature-extraction | false | T-Systems-onsite | null | T-Systems-onsite/bert-german-dbmdz-uncased-sentence-stsb | 155 | null | transformers | 3,953 | ---
language: de
license: mit
---
# bert-german-dbmdz-uncased-sentence-stsb
**This model is outdated!**
The new [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer) model is better for the German language. It is also the current best model for the English language and works cross-lingually. Please consider using that model. |
allenai/unifiedqa-t5-11b | 01c8ac8c0922513fdcb0e0bd87f02b7dfcc77600 | 2020-11-13T12:10:29.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | false | allenai | null | allenai/unifiedqa-t5-11b | 155 | 1 | transformers | 3,954 | Entry not found |
cambridgeltl/trans-encoder-bi-simcse-bert-base | 1052e6c3873470db4eeddbe26dfba56a86fb1c09 | 2021-11-26T18:26:34.000Z | [
"pytorch",
"bert",
"feature-extraction",
"arxiv:2109.13059",
"transformers"
] | feature-extraction | false | cambridgeltl | null | cambridgeltl/trans-encoder-bi-simcse-bert-base | 155 | null | transformers | 3,955 | ---
language: en
tags:
- sentence-embeddings
- sentence-similarity
- dual-encoder
---
### cambridgeltl/trans-encoder-bi-simcse-bert-base
An unsupervised sentence encoder (bi-encoder) proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2109.13059.pdf). The model is trained with unlabelled sentence pairs sampled from STS2012-2016, STS-b, and SICK-R, using [princeton-nlp/unsup-simcse-bert-base-uncased](https://huggingface.co/princeton-nlp/unsup-simcse-bert-base-uncased) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
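A sketch of extracting the `[CLS]` (pre-pooler) sentence embedding, as suggested above; this is an illustrative snippet rather than an official example, and the input sentence is made up.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "cambridgeltl/trans-encoder-bi-simcse-bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer(["A man is playing a guitar."], return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Take the hidden state of the first token ([CLS]) before the pooler.
embeddings = outputs.last_hidden_state[:, 0, :]
print(embeddings.shape)
```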
### Citation
```bibtex
@article{liu2021trans,
title={Trans-Encoder: Unsupervised sentence-pair modelling through self-and mutual-distillations},
author={Liu, Fangyu and Jiao, Yunlong and Massiah, Jordan and Yilmaz, Emine and Havrylov, Serhii},
journal={arXiv preprint arXiv:2109.13059},
year={2021}
}
```
|
castorini/monobert-large-msmarco-finetune-only | 7012b9bf7aa900a5d74615ff0824f8ceed09e723 | 2021-05-19T14:00:06.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | castorini | null | castorini/monobert-large-msmarco-finetune-only | 155 | null | transformers | 3,956 | # Model Description
This checkpoint is a direct conversion of [BERT_Large_trained_on_MSMARCO.zip](https://drive.google.com/open?id=1crlASTMlsihALlkabAQP6JTYIZwC1Wm8) from the original [repo](https://github.com/nyu-dl/dl4marco-bert/).
The corresponding model class is BertForSequenceClassification, and its purpose is for MS MARCO passage ranking.
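Below is an illustrative relevance-scoring sketch (not from the original repo). The query/passage pairing and the use of index 1 as the "relevant" logit follow the common monoBERT convention and are assumptions here.
```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

model_name = "castorini/monobert-large-msmarco-finetune-only"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)

# Made-up query/passage pair for illustration.
query = "what causes tides"
passage = "Tides are caused by the gravitational pull of the moon and the sun."
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits
# Assumed convention: index 1 is the "relevant" class.
score = torch.softmax(logits, dim=-1)[0, 1].item()
print(score)
```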
Please refer to the original repo for more details on its training settings (hyperparameters/device/data). |
tals/albert-base-vitaminc-mnli | df116a3f65bf21483c75787bcf822560ad86a6e7 | 2022-06-24T01:34:37.000Z | [
"pytorch",
"albert",
"text-classification",
"python",
"dataset:fever",
"dataset:glue",
"dataset:multi_nli",
"dataset:tals/vitaminc",
"transformers"
] | text-classification | false | tals | null | tals/albert-base-vitaminc-mnli | 155 | null | transformers | 3,957 | ---
language: python
datasets:
- fever
- glue
- multi_nli
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 21`).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
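An illustrative usage sketch (not from the original repo) for scoring a claim against a piece of evidence: the claim/evidence ordering and the label names are assumptions here, so check the VitaminC repository for the official preprocessing.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "tals/albert-base-vitaminc-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Made-up claim/evidence pair; ordering is an assumption.
claim = "More than 100,000 Wikipedia revisions were collected."
evidence = "We collect over 100,000 Wikipedia revisions that modify an underlying fact."
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```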
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
toloka/t5-large-for-text-aggregation | ec2d49a52c11940f11347be79ebf45cde1de20f3 | 2021-09-23T16:40:58.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:toloka/CrowdSpeech",
"arxiv:1910.10683",
"arxiv:2107.01091",
"transformers",
"text aggregation",
"summarization",
"license:apache-2.0",
"autotrain_compatible"
] | summarization | false | toloka | null | toloka/t5-large-for-text-aggregation | 155 | 3 | transformers | 3,958 | ---
language:
- en
tags:
- text aggregation
- summarization
license: apache-2.0
datasets:
- toloka/CrowdSpeech
metrics:
- wer
---
# T5 Large for Text Aggregation
## Model description
This is a T5 Large fine-tuned for crowdsourced text aggregation tasks. The model takes multiple performers' responses and yields a single aggregated response. This approach was introduced for the first time during [VLDB 2021 Crowd Science Challenge](https://crowdscience.ai/challenges/vldb21) and originally implemented at the second-place competitor's [GitHub](https://github.com/A1exRey/VLDB2021_workshop_t5). The [paper](http://ceur-ws.org/Vol-2932/short2.pdf) describing this model was presented at the [2nd Crowd Science Workshop](https://crowdscience.ai/conference_events/vldb21).
## How to use
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, AutoConfig
mname = "toloka/t5-large-for-text-aggregation"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelForSeq2SeqLM.from_pretrained(mname)
input = "samplee text | sampl text | sample textt"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # sample text
```
## Training data
Pretrained weights were taken from the [original](https://huggingface.co/t5-large) T5 Large model by Google. For more details on the T5 architecture and training procedure see https://arxiv.org/abs/1910.10683
The model was fine-tuned on the `train-clean`, `dev-clean` and `dev-other` parts of the [CrowdSpeech](https://huggingface.co/datasets/toloka/CrowdSpeech) dataset that was introduced in [our paper](https://openreview.net/forum?id=3_hgF1NAXU7&referrer=%5BAuthor%20Console%5D(%2Fgroup%3Fid%3DNeurIPS.cc%2F2021%2FTrack%2FDatasets_and_Benchmarks%2FRound1%2FAuthors%23your-submissions).
## Training procedure
The model was fine-tuned for eight epochs directly following the HuggingFace summarization training [example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization).
## Eval results
Dataset | Split | WER
-----------|------------|----------
CrowdSpeech| test-clean | 4.99
CrowdSpeech| test-other | 10.61
### BibTeX entry and citation info
```bibtex
@inproceedings{Pletenev:21,
author = {Pletenev, Sergey},
title = {{Noisy Text Sequences Aggregation as a Summarization Subtask}},
year = {2021},
booktitle = {Proceedings of the 2nd Crowd Science Workshop: Trust, Ethics, and Excellence in Crowdsourced Data Management at Scale},
pages = {15--20},
address = {Copenhagen, Denmark},
issn = {1613-0073},
url = {http://ceur-ws.org/Vol-2932/short2.pdf},
language = {english},
}
```
```bibtex
@misc{pavlichenko2021vox,
title={Vox Populi, Vox DIY: Benchmark Dataset for Crowdsourced Audio Transcription},
author={Nikita Pavlichenko and Ivan Stelmakh and Dmitry Ustalov},
year={2021},
eprint={2107.01091},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
|
aihijo/transformers4ime-pinyingpt-concat | 76dd20dc92d8236a350fb732e99dde6fa15e2263 | 2022-03-28T03:57:51.000Z | [
"pytorch",
"gpt2",
"text-generation",
"arxiv:2203.00249",
"transformers",
"license:cc-by-nc-sa-4.0"
] | text-generation | false | aihijo | null | aihijo/transformers4ime-pinyingpt-concat | 155 | null | transformers | 3,959 | ---
license: cc-by-nc-sa-4.0
---

# Transformers4IME
Transformers4IME is repo for exploring and adapting transformer-based models to IME.
## PinyinGPT
PinyinGPT is a model from [Exploring and Adapting Chinese GPT to Pinyin Input Method](https://arxiv.org/abs/2203.00249)
which appears in ACL2022.
```bibtex
@article{tan2022exploring,
title={Exploring and Adapting Chinese GPT to Pinyin Input Method},
author={Tan, Minghuan and Dai, Yong and Tang, Duyu and Feng, Zhangyin and Huang, Guoping and Jiang, Jing and Li, Jiwei and Shi, Shuming},
journal={arXiv preprint arXiv:2203.00249},
year={2022}
}
```
The code can be found at
* [Gitee](https://gitee.com/visualjoyce/Transformers4IME)
* [Github](https://github.com/visualjoyce/Transformers4IME)
|
leonadase/bert-base-chinese-finetuned-ner-v1 | 55a98392aa20c4f6760345d3fb42df0c3f8c164f | 2022-04-08T17:49:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:fdner",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] | token-classification | false | leonadase | null | leonadase/bert-base-chinese-finetuned-ner-v1 | 155 | null | transformers | 3,960 | ---
tags:
- generated_from_trainer
datasets:
- fdner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-chinese-finetuned-ner-v1
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: fdner
type: fdner
args: fdner
metrics:
- name: Precision
type: precision
value: 0.981203007518797
- name: Recall
type: recall
value: 0.9886363636363636
- name: F1
type: f1
value: 0.9849056603773584
- name: Accuracy
type: accuracy
value: 0.9909536373916321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-ner-v1
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the fdner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0413
- Precision: 0.9812
- Recall: 0.9886
- F1: 0.9849
- Accuracy: 0.9910
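A minimal inference sketch (not part of the auto-generated card): the checkpoint is a token-classification model, so the standard NER pipeline should apply. The example sentence is made up and the entity label set comes from the fdner dataset, so predictions on generic text may not be meaningful.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="leonadase/bert-base-chinese-finetuned-ner-v1",
    aggregation_strategy="simple",
)
# Made-up Chinese example sentence.
print(ner("张三在北京大学计算机系工作。"))
```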
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 8 | 2.0640 | 0.0 | 0.0 | 0.0 | 0.4323 |
| No log | 2.0 | 16 | 1.7416 | 0.0204 | 0.0227 | 0.0215 | 0.5123 |
| No log | 3.0 | 24 | 1.5228 | 0.0306 | 0.0265 | 0.0284 | 0.5456 |
| No log | 4.0 | 32 | 1.2597 | 0.0961 | 0.1591 | 0.1198 | 0.6491 |
| No log | 5.0 | 40 | 1.0273 | 0.1588 | 0.2159 | 0.1830 | 0.7450 |
| No log | 6.0 | 48 | 0.8026 | 0.2713 | 0.3258 | 0.2960 | 0.8208 |
| No log | 7.0 | 56 | 0.6547 | 0.36 | 0.4091 | 0.3830 | 0.8513 |
| No log | 8.0 | 64 | 0.5180 | 0.4650 | 0.5038 | 0.4836 | 0.8873 |
| No log | 9.0 | 72 | 0.4318 | 0.5139 | 0.5606 | 0.5362 | 0.9067 |
| No log | 10.0 | 80 | 0.3511 | 0.6169 | 0.6894 | 0.6512 | 0.9291 |
| No log | 11.0 | 88 | 0.2887 | 0.6691 | 0.6894 | 0.6791 | 0.9414 |
| No log | 12.0 | 96 | 0.2396 | 0.7042 | 0.7576 | 0.7299 | 0.9516 |
| No log | 13.0 | 104 | 0.2052 | 0.7568 | 0.8371 | 0.7950 | 0.9587 |
| No log | 14.0 | 112 | 0.1751 | 0.8303 | 0.8712 | 0.8503 | 0.9610 |
| No log | 15.0 | 120 | 0.1512 | 0.8464 | 0.8977 | 0.8713 | 0.9668 |
| No log | 16.0 | 128 | 0.1338 | 0.8759 | 0.9091 | 0.8922 | 0.9710 |
| No log | 17.0 | 136 | 0.1147 | 0.8959 | 0.9129 | 0.9043 | 0.9746 |
| No log | 18.0 | 144 | 0.1011 | 0.9326 | 0.9432 | 0.9379 | 0.9761 |
| No log | 19.0 | 152 | 0.0902 | 0.9251 | 0.9356 | 0.9303 | 0.9795 |
| No log | 20.0 | 160 | 0.0806 | 0.9440 | 0.9583 | 0.9511 | 0.9804 |
| No log | 21.0 | 168 | 0.0743 | 0.9586 | 0.9659 | 0.9623 | 0.9812 |
| No log | 22.0 | 176 | 0.0649 | 0.9511 | 0.9583 | 0.9547 | 0.9851 |
| No log | 23.0 | 184 | 0.0595 | 0.9591 | 0.9773 | 0.9681 | 0.9876 |
| No log | 24.0 | 192 | 0.0537 | 0.9625 | 0.9735 | 0.9680 | 0.9883 |
| No log | 25.0 | 200 | 0.0505 | 0.9701 | 0.9848 | 0.9774 | 0.9894 |
| No log | 26.0 | 208 | 0.0464 | 0.9737 | 0.9811 | 0.9774 | 0.9904 |
| No log | 27.0 | 216 | 0.0439 | 0.9737 | 0.9811 | 0.9774 | 0.9906 |
| No log | 28.0 | 224 | 0.0428 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
| No log | 29.0 | 232 | 0.0417 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
| No log | 30.0 | 240 | 0.0413 | 0.9812 | 0.9886 | 0.9849 | 0.9910 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ml6team/keyphrase-generation-t5-small-inspec | cc0e6da67749c6d83b096d197b464abd7d2ae2db | 2022-06-16T18:03:09.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:midas/inspec",
"transformers",
"keyphrase-generation",
"license:mit",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | ml6team | null | ml6team/keyphrase-generation-t5-small-inspec | 155 | null | transformers | 3,961 | ---
language: en
license: mit
tags:
- keyphrase-generation
datasets:
- midas/inspec
widget:
- text: "Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document.
Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading
it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail
and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents,
this process can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine learning methods, that use statistical
and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency,
occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies
and context of words in a text."
example_title: "Example 1"
- text: "In this work, we explore how to learn task specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (up to 9.26 points in F1) over SOTA, when LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (up to 4.33 points inF1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition(NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks."
example_title: "Example 2"
model-index:
- name: DeDeckerThomas/keyphrase-generation-t5-small-inspec
results:
- task:
type: keyphrase-generation
name: Keyphrase Generation
dataset:
type: midas/inspec
name: inspec
metrics:
- type: F1@M (Present)
value: 0.317
name: F1@M (Present)
- type: F1@O (Present)
value: 0.279
name: F1@O (Present)
- type: F1@M (Absent)
value: 0.073
name: F1@M (Absent)
- type: F1@O (Absent)
value: 0.065
name: F1@O (Absent)
---
# 🔑 Keyphrase Generation Model: T5-small-inspec
Keyphrase extraction is a technique in text analysis where you extract the important keyphrases from a document. Thanks to these keyphrases humans can understand the content of a text very quickly and easily without reading it completely. Keyphrase extraction was first done primarily by human annotators, who read the text in detail and then wrote down the most important keyphrases. The disadvantage is that if you work with a lot of documents, this process can take a lot of time ⏳.
Here is where Artificial Intelligence 🤖 comes in. Currently, classical machine learning methods, that use statistical and linguistic features, are widely used for the extraction process. Now with deep learning, it is possible to capture the semantic meaning of a text even better than these classical methods. Classical methods look at the frequency, occurrence and order of words in the text, whereas these neural approaches can capture long-term semantic dependencies and context of words in a text.
## 📓 Model Description
This model uses [T5-small model](https://huggingface.co/t5-small) as its base model and fine-tunes it on the [Inspec dataset](https://huggingface.co/datasets/midas/inspec). Keyphrase generation transformers are fine-tuned as a text-to-text generation problem where the keyphrases are generated. The result is a concatenated string with all keyphrases separated by a given delimiter (i.e. “;”). These models are capable of generating present and absent keyphrases.
## ✋ Intended Uses & Limitations
### 🛑 Limitations
* This keyphrase generation model is very domain-specific and will perform very well on abstracts of scientific papers. It's not recommended to use this model for other domains, but you are free to test it out.
* Only works for English documents.
* For a custom model, please consult the [training notebook]() for more information.
* Sometimes the output doesn't make any sense.
### ❓ How To Use
```python
# Model parameters
from transformers import (
Text2TextGenerationPipeline,
AutoModelForSeq2SeqLM,
AutoTokenizer,
)
class KeyphraseGenerationPipeline(Text2TextGenerationPipeline):
def __init__(self, model, keyphrase_sep_token=";", *args, **kwargs):
super().__init__(
model=AutoModelForSeq2SeqLM.from_pretrained(model),
tokenizer=AutoTokenizer.from_pretrained(model),
*args,
**kwargs
)
self.keyphrase_sep_token = keyphrase_sep_token
def postprocess(self, model_outputs):
results = super().postprocess(
model_outputs=model_outputs
)
return [[keyphrase.strip() for keyphrase in result.get("generated_text").split(self.keyphrase_sep_token) if keyphrase != ""] for result in results]
```
```python
# Load pipeline
model_name = "ml6team/keyphrase-generation-t5-small-inspec"
generator = KeyphraseGenerationPipeline(model=model_name)
```
```python
text = """
Keyphrase extraction is a technique in text analysis where you extract the
important keyphrases from a document. Thanks to these keyphrases humans can
understand the content of a text very quickly and easily without reading it
completely. Keyphrase extraction was first done primarily by human annotators,
who read the text in detail and then wrote down the most important keyphrases.
The disadvantage is that if you work with a lot of documents, this process
can take a lot of time.
Here is where Artificial Intelligence comes in. Currently, classical machine
learning methods, that use statistical and linguistic features, are widely used
for the extraction process. Now with deep learning, it is possible to capture
the semantic meaning of a text even better than these classical methods.
Classical methods look at the frequency, occurrence and order of words
in the text, whereas these neural approaches can capture long-term
semantic dependencies and context of words in a text.
""".replace("\n", " ")
keyphrases = generator(text)
print(keyphrases)
```
```
# Output
[['keyphrase extraction', 'text analysis', 'artificial intelligence', 'classical machine learning methods']]
```
## 📚 Training Dataset
[Inspec](https://huggingface.co/datasets/midas/inspec) is a keyphrase extraction/generation dataset consisting of 2000 English scientific papers from the scientific domains of Computers and Control and Information Technology published between 1998 and 2002. The keyphrases are annotated by professional indexers or editors.
You can find more information in the [paper](https://dl.acm.org/doi/10.3115/1119355.1119383).
## 👷♂️ Training Procedure
For more detailed information, you can take a look at the [training notebook]().
### Training Parameters
| Parameter | Value |
| --------- | ------|
| Learning Rate | 5e-5 |
| Epochs | 50 |
| Early Stopping Patience | 1 |
### Preprocessing
The documents in the dataset are already preprocessed into lists of words with the corresponding keyphrases. The only thing that must be done is tokenization and joining all keyphrases into one string with a certain separator of choice (```;```).
```python
from datasets import load_dataset
from transformers import AutoTokenizer
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-small", add_prefix_space=True)
# Dataset parameters
dataset_full_name = "midas/inspec"
dataset_subset = "raw"
dataset_document_column = "document"
keyphrase_sep_token = ";"
def preprocess_keyphrases(text_ids, kp_list):
kp_order_list = []
kp_set = set(kp_list)
text = tokenizer.decode(
text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
text = text.lower()
for kp in kp_set:
kp = kp.strip()
kp_index = text.find(kp.lower())
kp_order_list.append((kp_index, kp))
kp_order_list.sort()
present_kp, absent_kp = [], []
for kp_index, kp in kp_order_list:
if kp_index < 0:
absent_kp.append(kp)
else:
present_kp.append(kp)
return present_kp, absent_kp
def preprocess_function(samples):
processed_samples = {"input_ids": [], "attention_mask": [], "labels": []}
for i, sample in enumerate(samples[dataset_document_column]):
input_text = " ".join(sample)
inputs = tokenizer(
input_text,
padding="max_length",
truncation=True,
)
present_kp, absent_kp = preprocess_keyphrases(
text_ids=inputs["input_ids"],
kp_list=samples["extractive_keyphrases"][i]
+ samples["abstractive_keyphrases"][i],
)
keyphrases = present_kp
keyphrases += absent_kp
target_text = f" {keyphrase_sep_token} ".join(keyphrases)
with tokenizer.as_target_tokenizer():
targets = tokenizer(
target_text, max_length=40, padding="max_length", truncation=True
)
targets["input_ids"] = [
(t if t != tokenizer.pad_token_id else -100)
for t in targets["input_ids"]
]
for key in inputs.keys():
processed_samples[key].append(inputs[key])
processed_samples["labels"].append(targets["input_ids"])
return processed_samples
# Load dataset
dataset = load_dataset(dataset_full_name, dataset_subset)
# Preprocess dataset
tokenized_dataset = dataset.map(preprocess_function, batched=True)
```
### Postprocessing
For the post-processing, you will need to split the string based on the keyphrase separator.
```python
def extract_keyphrases(examples):
return [example.split(keyphrase_sep_token) for example in examples]
```
## 📝 Evaluation Results
Traditional evaluation metrics are precision, recall and F1-score @k,m, where k stands for the first k predicted keyphrases and m for the average number of predicted keyphrases. In keyphrase generation you also look at F1@O, where O stands for the number of ground-truth keyphrases.
The model achieves the following results on the Inspec test set:
Extractive keyphrases
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | P@O | R@O | F1@O |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Inspec Test Set | 0.33 | 0.31 | 0.29 | 0.17 | 0.31 | 0.20 | 0.41 | 0.31 | 0.32 | 0.28 | 0.28 | 0.28 |
Abstractive keyphrases
| Dataset | P@5 | R@5 | F1@5 | P@10 | R@10 | F1@10 | P@M | R@M | F1@M | P@O | R@O | F1@O |
|:-----------------:|:----:|:----:|:----:|:----:|:----:|:-----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Inspec Test Set | 0.05 | 0.09 | 0.06 | 0.03 | 0.09 | 0.04 | 0.08 | 0.09 | 0.07 | 0.06 | 0.06 | 0.06 |
For more information on the evaluation process, you can take a look at the keyphrase extraction evaluation notebook.
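As an illustration only (this snippet is not taken from the evaluation notebook, and the helper name and example keyphrases are hypothetical), a minimal sketch of how precision, recall and F1@k could be computed for a single document:
```python
def scores_at_k(predicted, gold, k=5):
    # Compare the top-k predicted keyphrases against the gold keyphrases (lowercased)
    top_k = [kp.lower() for kp in predicted[:k]]
    gold_set = {kp.lower() for kp in gold}
    matches = sum(kp in gold_set for kp in top_k)
    precision = matches / len(top_k) if top_k else 0.0
    recall = matches / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

predicted = ["keyphrase extraction", "text analysis", "deep learning", "statistics", "semantics"]
gold = ["keyphrase extraction", "text analysis", "artificial intelligence"]
print(scores_at_k(predicted, gold, k=5))
```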
## 🚨 Issues
Please feel free to start discussions in the Community Tab. |
gchhablani/bert-base-cased-finetuned-mnli | 4e171ccb1a7e226a59480575d71328752b15d5aa | 2021-09-20T09:07:21.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"en",
"dataset:glue",
"arxiv:2105.03824",
"transformers",
"generated_from_trainer",
"fnet-bert-base-comparison",
"license:apache-2.0",
"model-index"
] | text-classification | false | gchhablani | null | gchhablani/bert-base-cased-finetuned-mnli | 154 | 1 | transformers | 3,962 | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- fnet-bert-base-comparison
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.8410292921074044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-mnli
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5721
- Accuracy: 0.8410
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
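A minimal inference sketch (not part of the original card; the premise/hypothesis pair is just an example, and the label names come from the saved model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "gchhablani/bert-base-cased-finetuned-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
# Label names (entailment/neutral/contradiction) are read from the model config
print(model.config.id2label[predicted_id])
```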
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name mnli \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-mnli \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5323 | 1.0 | 24544 | 0.4431 | 0.8302 |
| 0.3447 | 2.0 | 49088 | 0.4725 | 0.8353 |
| 0.2267 | 3.0 | 73632 | 0.5887 | 0.8368 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
icelab/spacebert | 5fd7c3381c22a5da3a14ccaaf48e4a50d8976061 | 2021-10-21T08:40:44.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | false | icelab | null | icelab/spacebert | 154 | null | transformers | 3,963 | ### SpaceBERT
This is one of the 3 further pre-trained models from the SpaceTransformers family presented in [SpaceTransformers: Language Modeling for Space Systems](https://ieeexplore.ieee.org/document/9548078). The original Git repo is [strath-ace/smart-nlp](https://github.com/strath-ace/smart-nlp).
The further pre-training corpus includes publication abstracts, books, and Wikipedia pages related to space systems. The corpus size is 14.3 GB. SpaceBERT was further pre-trained on this domain-specific corpus from [BERT-Base (uncased)](https://huggingface.co/bert-base-uncased). In our paper, it is then fine-tuned for a Concept Recognition task.
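A minimal usage sketch (not part of the original card; the masked sentence is illustrative only):
```python
from transformers import pipeline

# Load the checkpoint as a fill-mask pipeline
fill_mask = pipeline("fill-mask", model="icelab/spacebert")
print(fill_mask("The spacecraft uses solar [MASK] to generate power."))
```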
### BibTeX entry and citation info
```
@ARTICLE{
9548078,
author={Berquand, Audrey and Darm, Paul and Riccardi, Annalisa},
journal={IEEE Access},
title={SpaceTransformers: Language Modeling for Space Systems},
year={2021},
volume={9},
number={},
pages={133111-133122},
doi={10.1109/ACCESS.2021.3115659}
}
``` |
persiannlp/mt5-small-parsinlu-opus-translation_fa_en | c7aec9eab234f76d9125f709e3bb444ce840ba5b | 2021-09-23T16:20:36.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"fa",
"multilingual",
"dataset:parsinlu",
"transformers",
"machine-translation",
"persian",
"farsi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible"
] | text2text-generation | false | persiannlp | null | persiannlp/mt5-small-parsinlu-opus-translation_fa_en | 154 | null | transformers | 3,964 | ---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- machine-translation
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- sacrebleu
---
# Machine Translation (ترجمهی ماشینی)
This is an mT5-based model for machine translation (Persian -> English).
Here is an example of how you can run this model:
```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer
model_size = "small"
model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
input_ids = tokenizer.encode(input_string, return_tensors="pt")
res = model.generate(input_ids, **generator_args)
output = tokenizer.batch_decode(res, skip_special_tokens=True)
print(output)
return output
run_model("ستایش خدای را که پروردگار جهانیان است.")
run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه میکند؛")
run_model("وی از تمامی بلاگرها، سازمانها و افرادی که از وی پشتیبانی کردهاند، تشکر کرد.")
run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ")
run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟")
```
For more details, visit this page: https://github.com/persiannlp/parsinlu/
|
trueto/medbert-kd-chinese | 8a9aee70779f4069f8e7396d15a9f00ee1266843 | 2021-05-20T08:10:57.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
] | null | false | trueto | null | trueto/medbert-kd-chinese | 154 | null | transformers | 3,965 | # [medbert](https://github.com/trueto/medbert)
This project open-sources the models related to the master's thesis "Exploration and Research on the Application of BERT Models in Chinese Clinical Natural Language Processing".
## Evaluation Benchmarks
We constructed a Chinese electronic medical record named entity recognition dataset (CEMRNER), a Chinese medical text named entity recognition dataset (CMTNER),
a Chinese medical question-question matching dataset (CMedQQ) and a Chinese clinical text classification dataset (CCTC).
| **Dataset** | **Train** | **Dev** | **Test** | **Task Type** | **Corpus Source** |
| ---- | ---- | ---- |---- |---- |:----:|
| CEMRNER | 965 | 138 | 276 | Named entity recognition | Yidu Cloud |
| CMTNER | 14000 | 2000 | 4000 | Named entity recognition | CHIP2020 |
| CMedQQ | 14000 | 2000 | 4000 | Sentence-pair matching | Ping An Healthcare |
| CCTC | 26837 | 3834 | 7669 | Sentence classification | CHIP2019 |
## Released Models
The MedBERT and MedAlbert models were obtained by pretraining BERT and ALBERT on a 650-million-character corpus of Chinese clinical natural language text.
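A minimal feature-extraction sketch for this checkpoint (not part of the original card; the clinical sentence is illustrative only):
```python
from transformers import AutoTokenizer, AutoModel

model_name = "trueto/medbert-kd-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# "The patient complains of a headache lasting three days."
inputs = tokenizer("患者主诉头痛三天。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```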
## Performance
Performance of each model under the same experimental environment, with identical training parameters and scripts:
| **Model** | **CEMRNER** | **CMTNER** | **CMedQQ** | **CCTC** |
| :---- | :----: | :----: | :----: | :----: |
| [BERT](https://huggingface.co/bert-base-chinese) | 81.17% | 65.67% | 87.77% | 81.62% |
| [MC-BERT](https://github.com/alibaba-research/ChineseBLUE) | 80.93% | 66.15% | 89.04% | 80.65% |
| [PCL-BERT](https://code.ihub.org.cn/projects/1775) | 81.58% | 67.02% | 88.81% | 80.27% |
| MedBERT | 82.29% | 66.49% | 88.32% | **81.77%** |
|MedBERT-wwm| **82.60%** | 67.11% | 88.02% | 81.72% |
|MedBERT-kd | 82.58% | **67.27%** | **89.34%** | 80.73% |
|- | - | - | - | - |
| [Albert](https://huggingface.co/voidful/albert_chinese_base) | 79.98% | 62.42% | 86.81% | 79.83% |
| MedAlbert | 81.03% | 63.81% | 87.56% | 80.05% |
|MedAlbert-wwm| **81.28%** | **64.12%** | **87.71%** | **80.46%** |
## Citation
```
杨飞洪,王序文,李姣.BERT模型在中文临床自然语言处理中的应用探索与研究[EB/OL].https://github.com/trueto/medbert, 2021-03.
``` |
tscholak/2e826ioa | e95227d8997db5ad1912900a6c71aa17ee08ee2b | 2022-01-10T21:50:39.000Z | [
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:cosql",
"dataset:spider",
"arxiv:2109.05093",
"transformers",
"text2sql",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | tscholak | null | tscholak/2e826ioa | 154 | null | transformers | 3,966 | ---
language:
- en
thumbnail: "https://repository-images.githubusercontent.com/401779782/c2f46be5-b74b-4620-ad64-57487be3b1ab"
tags:
- text2sql
widget:
- "And the concert named Auditions? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : sing er_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name ( Super bootcamp, Auditions ), theme, stadium_id, year | singer_in_concert : concert_id, singer_id || Which year did the concert Super bootcamp happen in? | Find the name and location of the stadiums which some concerts happened in the years of both 2014 and 2015."
- "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id"
license: "apache-2.0"
datasets:
- cosql
- spider
metrics:
- cosql
---
## tscholak/2e826ioa
Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [T5-3B](https://huggingface.co/t5-3b).
### Training Data
The model has been fine-tuned on the 2,164 training dialogues in the [CoSQL SQL-grounded dialogue state tracking dataset](https://yale-lily.github.io/cosql) and the 7,000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves both, CoSQL's zero-shot text-to-SQL dialogue state tracking task and Spider's zero-shot text-to-SQL translation task. Zero-shot means that the model can generalize to unseen SQL databases.
### Training Objective
This model was initialized with [T5-3B](https://huggingface.co/t5-3b) and fine-tuned with the text-to-text generation objective.
A question is always grounded in both a database schema and the preceding questions in the dialogue. The model is trained to predict the SQL query that would be used to answer the user's current natural language question. The input to the model is composed of the user's current question, the database identifier, a list of tables and their columns, and a sequence of previous questions in reverse chronological order.
```
[current question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ... || [previous question] | ... | [first question]
```
The sequence of previous questions is separated by `||` from the linearized schema. In the absence of previous questions (for example, for the first question in a dialogue or for Spider questions), this separator is omitted.
The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's current question in the dialog.
```
[db_id] | [sql]
```
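A minimal sketch of serializing an input and generating a query with plain greedy/beam decoding (this is not part of the official scripts; the schema string is abbreviated, and without PICARD constrained decoding the generated SQL is not guaranteed to be valid):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "tscholak/2e826ioa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Current question, database id, linearized schema, then previous questions after "||"
question = "And the concert named Auditions?"
schema = ("concert_singer | stadium : stadium_id, location, name, capacity | "
          "singer : singer_id, name, country | "
          "concert : concert_id, concert_name, theme, stadium_id, year | "
          "singer_in_concert : concert_id, singer_id")
history = "Which year did the concert Super bootcamp happen in?"
input_text = f"{question} | {schema} || {history}"

input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```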
### Performance
Out of the box, this model achieves 53.8 % question match accuracy and 21.8 % interaction match accuracy on the CoSQL development set. On the CoSQL test set, the model achieves 51.4 % question match accuracy and 21.7 % interaction match accuracy.
Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **56.9 %** question match accuracy and **24.2 %** interaction match accuracy on the CoSQL development set. On the CoSQL test set and with PICARD, the model achieves **54.6 %** question match accuracy and **23.7 %** interaction match accuracy.
### Usage
Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model.
### References
1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093)
2. [Official PICARD code](https://github.com/ElementAI/picard)
### Citation
```bibtex
@inproceedings{Scholak2021:PICARD,
author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau},
title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.779",
pages = "9895--9901",
}
``` |
north/t5_small_NCC_lm | fe0d4f76efcfd84e1f52b5dabd051ca9e2c752d2 | 2022-06-01T19:40:02.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | north | null | north/t5_small_NCC_lm | 154 | null | transformers | 3,967 | ---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
# North-T5
The North-T5-models are a set of Norwegian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)|
|North-T5‑NCC‑lm|✔|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)|
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/small/norwegian_NCC_plus_English_pluss100k_lm_t5x_small/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend external researchers to make their own evaluation. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small-model was trained for 10.000 steps, while the rest for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Neither was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia is added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For instance, when doing translation and NLI, it is well documented that there is a clear benefit to doing a step of unsupervised LM-training before starting the finetuning.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base-models in a Google Colab.
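As a rough illustration (not part of the original card; the task prefix, example text and target are hypothetical), a single text-to-text training pair could be fed to the checkpoint like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "north/t5_small_NCC_lm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Real fine-tuning would loop over a dataset (e.g. with Seq2SeqTrainer) at roughly a 1e-3 learning rate
inputs = tokenizer("klassifiser: Dette er en politisk tekst.", return_tensors="pt")
labels = tokenizer("politisk", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
print(float(loss))
```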
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL-model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference both in Flax, PyTorch and TensorFlow format.
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from the users.
## Thanks
This release would not have been possible without getting support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
mismayil/comet-gpt2-ai2 | 62ca4053db6a4a40251ab4b0dcd3151d707b4587 | 2022-05-23T13:23:49.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"license:afl-3.0"
] | text-generation | false | mismayil | null | mismayil/comet-gpt2-ai2 | 154 | 1 | transformers | 3,968 | ---
license: afl-3.0
---
This model has been trained by the original authors of the paper [(Comet-) Atomic 2020: On Symbolic and Neural Commonsense Knowledge Graphs.](https://www.semanticscholar.org/paper/COMET-ATOMIC-2020%3A-On-Symbolic-and-Neural-Knowledge-Hwang-Bhagavatula/e39503e01ebb108c6773948a24ca798cd444eb62) and has been released [here](https://storage.googleapis.com/ai2-mosaic-public/projects/mosaic-kgs/comet-atomic_2020_GPT2XL.zip). Original codebase for training is [here](https://github.com/allenai/comet-atomic-2020) |
Lvxue/finetuned-mt5-small-10epoch | f89604e482704ffa7360a52470f6a8d2348953a2 | 2022-07-04T03:16:07.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"en",
"ro",
"dataset:wmt16",
"transformers",
"translation",
"wmt16",
"Lvxue",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Lvxue | null | Lvxue/finetuned-mt5-small-10epoch | 154 | null | transformers | 3,969 | ---
language:
- en
- ro
license: apache-2.0
tags:
- translation
- wmt16
- Lvxue
datasets:
- wmt16
metrics:
- sacrebleu
- bleu
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-mt5-small-10epoch
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7274
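A minimal usage sketch (not part of the original card; the translation direction is not stated above, so English to Romanian is assumed here):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Lvxue/finetuned-mt5-small-10epoch"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```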
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
anablasi/qa_financial_v2 | 1ce37207a25102cdb3b50b08f749500d14333158 | 2022-07-05T10:40:55.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | false | anablasi | null | anablasi/qa_financial_v2 | 154 | 1 | transformers | 3,970 | Entry not found |
sledz08/finetuned-bert-piqa | 7bcb5fff57e509ccd4d5e0ea8e44a971f5bc34c6 | 2022-07-11T15:54:20.000Z | [
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"dataset:piqa",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] | multiple-choice | false | sledz08 | null | sledz08/finetuned-bert-piqa | 154 | null | transformers | 3,971 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- piqa
metrics:
- accuracy
model-index:
- name: finetuned-bert-piqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-piqa
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the piqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6603
- Accuracy: 0.6518
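A minimal multiple-choice inference sketch (not part of the original card; the goal/solution pair and the exact pairing used at training time are assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_name = "sledz08/finetuned-bert-piqa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

goal = "To make the house smell nice,"
solutions = ["light a scented candle.", "light a pile of old newspapers."]

# Each choice is encoded as a (goal, solution) pair; shape becomes (1, num_choices, seq_len)
encoding = tokenizer([goal] * len(solutions), solutions, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}
with torch.no_grad():
    logits = model(**inputs).logits
print(solutions[logits.argmax(dim=-1).item()])
```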
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 251 | 0.6751 | 0.6115 |
| 0.6628 | 2.0 | 502 | 0.6556 | 0.6534 |
| 0.6628 | 3.0 | 753 | 0.6603 | 0.6518 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bloom-testing/test-bloomd-350m-support-backend-dtypes | e17bd335f592786111644fbdb9647651bd40adf4 | 2022-07-28T15:16:24.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"transformers"
] | feature-extraction | false | bloom-testing | null | bloom-testing/test-bloomd-350m-support-backend-dtypes | 154 | null | transformers | 3,972 | Entry not found |
ARTeLab/mbart-summarization-mlsum | 37d5072d7d8d6ae414af8ab86b91c0514fe8b5a5 | 2022-05-03T06:05:43.000Z | [
"pytorch",
"mbart",
"text2text-generation",
"it",
"dataset:ARTeLab/mlsum-it",
"transformers",
"summarization",
"model-index",
"autotrain_compatible"
] | summarization | false | ARTeLab | null | ARTeLab/mbart-summarization-mlsum | 153 | 1 | transformers | 3,973 | ---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_mbart_mlsum
results: []
datasets:
- ARTeLab/mlsum-it
---
# mbart_summarization_mlsum
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on mlsum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 3.3336
- Rouge1: 19.3489
- Rouge2: 6.4028
- Rougel: 16.3497
- Rougelsum: 16.5387
- Gen Len: 33.5945
## Usage
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-mlsum")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-mlsum")
```
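A possible next step, not shown in the original card, is generating a summary. This sketch assumes the saved configuration handles the Italian language codes and uses a placeholder article:
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration

tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-mlsum")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-mlsum")

article = "Il governo ha annunciato oggi nuove misure economiche per sostenere le famiglie e le imprese."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=130)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```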
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
# Citation
More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228)
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
``` |
Helsinki-NLP/opus-mt-en-ca | 81d80b5921b66885e45c3b27615752da4b511b40 | 2021-09-09T21:34:27.000Z | [
"pytorch",
"marian",
"text2text-generation",
"en",
"ca",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-en-ca | 153 | 1 | transformers | 3,974 | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-en-ca
* source languages: en
* target languages: ca
* OPUS readme: [en-ca](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ca/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ca/opus-2019-12-18.eval.txt)
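A minimal usage sketch (not part of the original card; the input sentence is illustrative only):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-ca"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["The house is big and has a garden."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```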
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.en.ca | 47.2 | 0.665 |
|
WinKawaks/vit-small-patch16-224 | 4ed2f21a5d8dc3da907ef738c6ab669c33b7ba1e | 2022-01-30T18:04:13.000Z | [
"pytorch",
"vit",
"image-classification",
"dataset:imagenet",
"transformers",
"vision",
"license:apache-2.0"
] | image-classification | false | WinKawaks | null | WinKawaks/vit-small-patch16-224 | 153 | 0 | transformers | 3,975 | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
Google didn't publish vit-tiny and vit-small model checkpoints on Hugging Face. I converted the weights from the [timm repository](https://github.com/rwightman/pytorch-image-models). This model is used in the same way as [ViT-base](https://huggingface.co/google/vit-base-patch16-224).
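A minimal classification sketch, mirroring the ViT-base usage (the image URL is the standard COCO example, not part of this card):
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained("WinKawaks/vit-small-patch16-224")
model = ViTForImageClassification.from_pretrained("WinKawaks/vit-small-patch16-224")

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
# ImageNet class name for the highest-scoring logit
print(model.config.id2label[logits.argmax(-1).item()])
```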
|
anton-l/wav2vec2-large-xlsr-53-tatar | 2e94f8523c498d5d5f1fc05cda4b090a4c1482aa | 2021-07-05T20:40:41.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"tt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | false | anton-l | null | anton-l/wav2vec2-large-xlsr-53-tatar | 153 | null | transformers | 3,976 | ---
language: tt
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Tatar XLSR Wav2Vec2 Large 53 by Anton Lozhkov
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice tt
type: common_voice
args: tt
metrics:
- name: Test WER
type: wer
value: 26.76
---
# Wav2Vec2-Large-XLSR-53-Tatar
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tatar using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tt", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tatar test data of Common Voice.
```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/tt.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-tatar")
model.to("cuda")
cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/tt/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/tt/clips/"
def clean_sentence(sent):
sent = sent.lower()
# 'ё' is equivalent to 'е'
sent = sent.replace('ё', 'е')
# replace non-alpha characters with space
sent = "".join(ch if ch.isalpha() else " " for ch in sent)
# remove repeated spaces
sent = " ".join(sent.split())
return sent
targets = []
preds = []
for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
row["sentence"] = clean_sentence(row["sentence"])
speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
row["speech"] = resampler(speech_array).squeeze().numpy()
inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
targets.append(row["sentence"])
preds.append(processor.batch_decode(pred_ids)[0])
print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets)))
```
**Test Result**: 26.76 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
|
asafaya/bert-medium-arabic | 5bc83756c92b989470415372dcd9ac9376ef2fb1 | 2021-05-19T11:47:42.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"transformers",
"autotrain_compatible"
] | fill-mask | false | asafaya | null | asafaya/bert-medium-arabic | 153 | null | transformers | 3,977 | ---
language: ar
datasets:
- oscar
- wikipedia
---
# Arabic BERT Medium Model
Pretrained BERT Medium language model for Arabic
_If you use this model in your work, please cite this paper:_
```
@inproceedings{safaya-etal-2020-kuisail,
title = "{KUISAIL} at {S}em{E}val-2020 Task 12: {BERT}-{CNN} for Offensive Speech Identification in Social Media",
author = "Safaya, Ali and
Abdullatif, Moutasem and
Yuret, Deniz",
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.semeval-1.271",
pages = "2054--2059",
}
```
## Pretraining Corpus
`arabic-bert-medium` model was pretrained on ~8.2 Billion words:
- Arabic version of [OSCAR](https://traces1.inria.fr/oscar/) - filtered from [Common Crawl](http://commoncrawl.org/)
- Recent dump of Arabic [Wikipedia](https://dumps.wikimedia.org/backup-index.html)
and other Arabic resources which sum up to ~95GB of text.
__Notes on training data:__
- Our final version of the corpus contains some non-Arabic words inline, which we did not remove from sentences since that would affect some tasks like NER.
- Although non-Arabic characters were lowercased as a preprocessing step, since Arabic characters do not have upper or lower case, there are no separate cased and uncased versions of the model.
- The corpus and vocabulary set are not restricted to Modern Standard Arabic, they contain some dialectical Arabic too.
## Pretraining details
- This model was trained using Google BERT's github [repository](https://github.com/google-research/bert) on a single TPU v3-8 provided for free from [TFRC](https://www.tensorflow.org/tfrc).
- Our pretraining procedure follows the training settings of BERT with some changes: trained for 3M training steps with a batch size of 128, instead of 1M steps with a batch size of 256.
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library. You can then use it directly by initializing it like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-medium-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-medium-arabic")
```
## Results
For further details on the model's performance or any other queries, please refer to [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT)
## Acknowledgement
Thanks to Google for providing free TPU for the training process and for Huggingface for hosting this model on their servers 😊
|
gsarti/it5-base | 062a1f12d4d0d7da6059b6d26073eaeee327dec8 | 2022-03-09T11:57:08.000Z | [
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"it",
"dataset:gsarti/clean_mc4_it",
"arxiv:2203.03759",
"transformers",
"seq2seq",
"lm-head",
"license:apache-2.0",
"autotrain_compatible"
] | text2text-generation | false | gsarti | null | gsarti/it5-base | 153 | 9 | transformers | 3,978 | ---
language:
- it
datasets:
- gsarti/clean_mc4_it
tags:
- seq2seq
- lm-head
license: apache-2.0
inference: false
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# Italian T5 Base 🇮🇹
The [IT5](https://huggingface.co/models?search=it5) model family represents the first effort in pretraining large-scale sequence-to-sequence transformer models for the Italian language, following the approach adopted by the original [T5 model](https://github.com/google-research/text-to-text-transfer-transformer).
This model is released as part of the project ["IT5: Large-Scale Text-to-Text Pretraining for Italian Language Understanding and Generation"](https://arxiv.org/abs/2203.03759), by [Gabriele Sarti](https://gsarti.com/) and [Malvina Nissim](https://malvinanissim.github.io/) with the support of [Huggingface](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104) and with TPU usage sponsored by Google's [TPU Research Cloud](https://sites.research.google/trc/). All the training was conducted on a single TPU3v8-VM machine on Google Cloud. Refer to the Tensorboard tab of the repository for an overview of the training process.
*The inference widget is deactivated because the model needs task-specific seq2seq fine-tuning on a downstream task to be useful in practice. The models in the [`it5`](https://huggingface.co/it5) organization provide some examples of this model fine-tuned on various downstream tasks.*
## Model variants
This repository contains the checkpoints for the `base` version of the model. The model was trained for one epoch (1.05M steps) on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it) (~41B words, ~275GB) using 🤗 Datasets and the `google/t5-v1_1-base` improved configuration. Another version of this model trained on the [OSCAR corpus](https://oscar-corpus.com/) is also available under the name [`gsarti/it5-base-oscar`](https://huggingface.co/gsarti/it5-base-oscar). The training procedure is made available [on Github](https://github.com/gsarti/t5-flax-gcp).
The following table summarizes the parameters for all available models
| |`it5-small` |`it5-base` (this one) |`it5-large` |`it5-base-oscar` |
|-----------------------|-----------------------|----------------------|-----------------------|----------------------------------|
|`dataset` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`gsarti/clean_mc4_it` |`oscar/unshuffled_deduplicated_it`|
|`architecture` |`google/t5-v1_1-small` |`google/t5-v1_1-base` |`google/t5-v1_1-large` |`t5-base` |
|`learning rate` | 5e-3 | 5e-3 | 5e-3 | 1e-2 |
|`steps` | 1'050'000 | 1'050'000 | 2'100'000 | 258'000 |
|`training time` | 36 hours | 101 hours | 370 hours | 98 hours |
|`ff projection` |`gated-gelu` |`gated-gelu` |`gated-gelu` |`relu` |
|`tie embeds` |`false` |`false` |`false` |`true` |
|`optimizer` | adafactor | adafactor | adafactor | adafactor |
|`max seq. length` | 512 | 512 | 512 | 512 |
|`per-device batch size`| 16 | 16 | 8 | 16 |
|`tot. batch size` | 128 | 128 | 64 | 128 |
|`weight decay` | 1e-3 | 1e-3 | 1e-2 | 1e-3 |
|`validation split size`| 15K examples | 15K examples | 15K examples | 15K examples |
The high training time of `it5-base-oscar` was due to [a bug](https://github.com/huggingface/transformers/pull/13012) in the training script.
For a list of individual model parameters, refer to the `config.json` file in the respective repositories.
## Using the models
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("gsarti/it5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("gsarti/it5-base")
```
*Note: You will need to fine-tune the model on your downstream seq2seq task to use it. See an example [here](https://huggingface.co/gsarti/it5-base-nli).*
Flax and Tensorflow versions of the model are also available:
```python
from transformers import FlaxT5ForConditionalGeneration, TFT5ForConditionalGeneration
model_flax = FlaxT5ForConditionalGeneration.from_pretrained("gsarti/it5-base")
model_tf = TFT5ForConditionalGeneration.from_pretrained("gsarti/it5-base")
```
## Limitations
Due to the nature of the web-scraped corpus on which IT5 models were trained, it is likely that their usage could reproduce and amplify pre-existing biases in the data, resulting in potentially harmful content such as racial or gender stereotypes and conspiracist views. For this reason, the study of such biases is explicitly encouraged, and model usage should ideally be restricted to research-oriented and non-user-facing endeavors.
## Model curators
For problems or updates on this model, please contact [[email protected]](mailto:[email protected]).
## Citation Information
```bibtex
@article{sarti-nissim-2022-it5,
title={IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` |
imthanhlv/vigpt2medium | f63d11e2deb0f92bef16ee5dc1f2559109fbb4ce | 2022-06-17T02:41:16.000Z | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | imthanhlv | null | imthanhlv/vigpt2medium | 153 | null | transformers | 3,979 | Entry not found |
tr3cks/2LabelsSentimentAnalysisSpanish | 6ce31c4de6117f9b2af7c8a9df31ae52ce1763f8 | 2021-05-20T08:01:29.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | false | tr3cks | null | tr3cks/2LabelsSentimentAnalysisSpanish | 153 | null | transformers | 3,980 | Entry not found |
huggingtweets/joejoinerr | c5644fa3b4eae0d99d8f35d4a109b567522659b2 | 2022-06-18T12:02:03.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
] | text-generation | false | huggingtweets | null | huggingtweets/joejoinerr | 153 | null | transformers | 3,981 | ---
language: en
thumbnail: http://www.huggingtweets.com/joejoinerr/1655553718810/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477268531561517057/MhgifvbO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Joe 🍞</div>
<div style="text-align: center; font-size: 14px;">@joejoinerr</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Joe 🍞.
| Data | Joe 🍞 |
| --- | --- |
| Tweets downloaded | 3176 |
| Retweets | 611 |
| Short tweets | 281 |
| Tweets kept | 2284 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3f3589ez/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @joejoinerr's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35u823qi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35u823qi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/joejoinerr')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Helsinki-NLP/opus-mt-es-ca | 608d4b0c3866482f9d32b0ea71e0acd45061f086 | 2021-01-18T08:22:32.000Z | [
"pytorch",
"marian",
"text2text-generation",
"es",
"ca",
"transformers",
"translation",
"license:apache-2.0",
"autotrain_compatible"
] | translation | false | Helsinki-NLP | null | Helsinki-NLP/opus-mt-es-ca | 152 | 1 | transformers | 3,982 | ---
language:
- es
- ca
tags:
- translation
license: apache-2.0
---
### spa-cat
* source group: Spanish
* target group: Catalan
* OPUS readme: [spa-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md)
* model: transformer-align
* source language(s): spa
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.spa.cat | 68.9 | 0.832 |
### System Info:
- hf_name: spa-cat
- source_languages: spa
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['es', 'ca']
- src_constituents: {'spa'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-cat/opus-2020-06-17.test.txt
- src_alpha3: spa
- tgt_alpha3: cat
- short_pair: es-ca
- chrF2_score: 0.8320000000000001
- bleu: 68.9
- brevity_penalty: 1.0
- ref_len: 12343.0
- src_name: Spanish
- tgt_name: Catalan
- train_date: 2020-06-17
- src_alpha2: es
- tgt_alpha2: ca
- prefer_old: False
- long_pair: spa-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
justin871030/bert-base-uncased-goemotions-ekman-finetuned | efc4e84814566c560b14716e55cff79a024aac8c | 2022-01-12T12:44:49.000Z | [
"pytorch",
"bert",
"en",
"dataset:go_emotions",
"transformers",
"go-emotion",
"text-classification",
"license:mit"
] | text-classification | false | justin871030 | null | justin871030/bert-base-uncased-goemotions-ekman-finetuned | 152 | null | transformers | 3,983 | ---
language: en
tags:
- go-emotion
- text-classification
- pytorch
datasets:
- go_emotions
metrics:
- f1
widget:
- text: "Thanks for giving advice to the people who need it! 👌🙏"
license: mit
---
## Model Description
1. Based on the uncased BERT pretrained model with a linear output layer.
2. Added several commonly-used emoji and tokens to the special token list of the tokenizer.
3. Did label smoothing while training.
4. Used weighted loss and focal loss to help the cases which trained badly.
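A minimal usage sketch (not part of the original card; it assumes the checkpoint loads with the standard sequence-classification head and that a sigmoid with a 0.5 threshold fits the multi-label setup):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "justin871030/bert-base-uncased-goemotions-ekman-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Thanks for giving advice to the people who need it! 👌🙏", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# GoEmotions is multi-label, so apply a sigmoid and keep labels above the threshold
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```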
|
lassl/bert-ko-small | fac17c6d4e20387bbd8132ab2f6675d0ff2912bb | 2022-02-19T09:49:53.000Z | [
"pytorch",
"bert",
"pretraining",
"ko",
"transformers",
"fill-mask",
"korean",
"lassl",
"license:apache-2.0"
] | fill-mask | false | lassl | null | lassl/bert-ko-small | 152 | null | transformers | 3,984 | ---
license: apache-2.0
language: ko
tags:
- fill-mask
- korean
- lassl
mask_token: "[MASK]"
widget:
- text: 대한민국의 수도는 [MASK] 입니다.
---
# LASSL bert-ko-small
## How to use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("lassl/bert-ko-small")
tokenizer = AutoTokenizer.from_pretrained("lassl/bert-ko-small")
```
## Evaluation
Evaluation results will be released soon.
## Corpora
This model was trained on 702,437 examples (containing 3,596,465,664 tokens), extracted from the corpora below. For details on the training setup, see `config.json`.
```bash
corpora/
├── [707M] kowiki_latest.txt
├── [ 26M] modu_dialogue_v1.2.txt
├── [1.3G] modu_news_v1.1.txt
├── [9.7G] modu_news_v2.0.txt
├── [ 15M] modu_np_v1.1.txt
├── [1008M] modu_spoken_v1.2.txt
├── [6.5G] modu_written_v1.0.txt
└── [413M] petition.txt
```
|
laxya007/gpt2_TS_DM_AS_CC_TM_HCU_DBS | 3037af4d9331688b1348d9e881ea5e1be7567cc3 | 2022-01-26T12:47:34.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | false | laxya007 | null | laxya007/gpt2_TS_DM_AS_CC_TM_HCU_DBS | 152 | null | transformers | 3,985 | Entry not found |
m3hrdadfi/albert-fa-base-v2 | f180b9ec44dbb8260cf7b6c5abd41e5b96bfd412 | 2020-12-26T08:26:26.000Z | [
"pytorch",
"albert",
"fill-mask",
"fa",
"transformers",
"albert-persian",
"persian-lm",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | m3hrdadfi | null | m3hrdadfi/albert-fa-base-v2 | 152 | null | transformers | 3,986 | ---
language: fa
tags:
- albert-persian
- persian-lm
license: apache-2.0
---
# ALBERT-Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> میتونی بهش بگی برت_کوچولو (you can call it 'Little BERT')
## Introduction
ALBERT-Persian was trained on a massive amount of public corpora ([Persian Wikidumps](https://dumps.wikimedia.org/fawiki/), [MirasText](https://github.com/miras-tech/MirasText)) and six other manually crawled text sources from various types of websites ([BigBang Page](https://bigbangpage.com/) `scientific`, [Chetor](https://www.chetor.com/) `lifestyle`, [Eligasht](https://www.eligasht.com/Blog/) `itinerary`, [Digikala](https://www.digikala.com/mag/) `digital magazine`, [Ted Talks](https://www.ted.com/talks) `general conversational`, Books `novels, storybooks, short stories from the old to the contemporary era`).
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=albert-fa) to look for
fine-tuned versions on a task that interests you.
### How to use
- To use any ALBERT model you first have to install SentencePiece.
- Run this in your notebook: ``` !pip install -q sentencepiece ```
#### TensorFlow 2.0
```python
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("m3hrdadfi/albert-fa-base-v2")
tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/albert-fa-base-v2")
model = TFAutoModel.from_pretrained("m3hrdadfi/albert-fa-base-v2")
text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است."
tokenizer.tokenize(text)
>>> ['▁ما', '▁در', '▁هوش', 'واره', '▁معتقد', 'یم', '▁با', '▁انتقال', '▁صحیح', '▁دانش', '▁و', '▁اگاه', 'ی', '،', '▁همه', '▁افراد', '▁می', '▁توانند', '▁از', '▁ابزارهای', '▁هوشمند', '▁استفاده', '▁کنند', '.', '▁شعار', '▁ما', '▁هوش', '▁مصنوعی', '▁برای', '▁همه', '▁است', '.']
```
#### Pytorch
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained("m3hrdadfi/albert-fa-base-v2")
tokenizer = AutoTokenizer.from_pretrained("m3hrdadfi/albert-fa-base-v2")
model = AutoModel.from_pretrained("m3hrdadfi/albert-fa-base-v2")
```
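#### Fill-mask pipeline
The snippets above only extract features; since the checkpoint is tagged `fill-mask`, masked-language-modeling inference is also possible. The pipeline call below is an assumption that mirrors the loading code above, and the masked sentence reuses the card's own example with one word replaced.
```python
from transformers import pipeline

# Assumption: the fill-mask pipeline works for this ALBERT checkpoint out of the box
# (remember to install sentencepiece first, as noted above).
fill_mask = pipeline(
    "fill-mask",
    model="m3hrdadfi/albert-fa-base-v2",
    tokenizer="m3hrdadfi/albert-fa-base-v2",
)
print(fill_mask("ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای [MASK] استفاده کنند."))
```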
## Training
ALBERT-Persian is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than `3.9M` documents, `73M` sentences, and `1.3B` words, like the way we did for [ParsBERT](https://github.com/hooshvare/parsbert).
## Goals
Objective goals during training are as below (after 140K steps).
``` bash
***** Eval results *****
global_step = 140000
loss = 2.0080082
masked_lm_accuracy = 0.6141017
masked_lm_loss = 1.9963315
sentence_order_accuracy = 0.985
sentence_order_loss = 0.06908702
```
## Derivative models
### Base Config
#### Albert Model
- [m3hrdadfi/albert-fa-base-v2](https://huggingface.co/m3hrdadfi/albert-fa-base-v2)
#### Albert Sentiment Analysis
- [m3hrdadfi/albert-fa-base-v2-sentiment-digikala](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-digikala)
- [m3hrdadfi/albert-fa-base-v2-sentiment-snappfood](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-snappfood)
- [m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-binary)
- [m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi)
- [m3hrdadfi/albert-fa-base-v2-sentiment-binary](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-binary)
- [m3hrdadfi/albert-fa-base-v2-sentiment-multi](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-sentiment-multi)
#### Albert Text Classification
- [m3hrdadfi/albert-fa-base-v2-clf-digimag](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-clf-digimag)
- [m3hrdadfi/albert-fa-base-v2-clf-persiannews](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-clf-persiannews)
#### Albert NER
- [m3hrdadfi/albert-fa-base-v2-ner](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-ner)
- [m3hrdadfi/albert-fa-base-v2-ner-arman](https://huggingface.co/m3hrdadfi/albert-fa-base-v2-ner-arman)
## Eval results
The following tables summarize the F1 scores obtained by ALBERT-Persian as compared to other models and architectures.
### Sentiment Analysis (SA) Task
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| Digikala User Comments | 81.12 | 81.74 | 80.74 | - |
| SnappFood User Comments | 85.79 | 88.12 | 87.87 | - |
| SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 |
### Text Classification (TC) Task
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT |
|:-----------------:|:-----------------:|:-----------:|:-----:|
| Digikala Magazine | 92.33 | 93.59 | 90.72 |
| Persian News | 97.01 | 97.19 | 95.79 |
### Named Entity Recognition (NER) Task
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:-----------------:|:-----------:|:-----:|:----------:|:------------:|:--------:|:--------------:|:----------:|
| PEYMA | 88.99 | 93.10 | 86.64 | - | 90.59 | - | 84.00 | - |
| ARMAN | 97.43 | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERT-Persian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
|
superb/wav2vec2-base-superb-ic | f80d68b3d5c0440b43174998ceffb1e83a10affe | 2021-09-02T22:03:59.000Z | [
"pytorch",
"wav2vec2",
"audio-classification",
"en",
"dataset:superb",
"arxiv:2105.01051",
"transformers",
"speech",
"audio",
"license:apache-2.0"
] | audio-classification | false | superb | null | superb/wav2vec2-base-superb-ic | 152 | null | transformers | 3,987 | ---
language: en
datasets:
- superb
tags:
- speech
- audio
- wav2vec2
license: apache-2.0
---
# Wav2Vec2-Base for Intent Classification
## Model description
This is a ported version of [S3PRL's Wav2Vec2 for the SUPERB Intent Classification task](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/fluent_commands).
The base model is [wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base), which is pretrained on 16kHz
sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
For more information refer to [SUPERB: Speech processing Universal PERformance Benchmark](https://arxiv.org/abs/2105.01051)
## Task and dataset description
Intent Classification (IC) classifies utterances into predefined classes to determine the intent of
speakers. SUPERB uses the
[Fluent Speech Commands](https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/)
dataset, where each utterance is tagged with three intent labels: **action**, **object**, and **location**.
For the original model's training and evaluation instructions refer to the
[S3PRL downstream task README](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands).
## Usage examples
You can use the model directly like so:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2FeatureExtractor
def map_to_array(example):
speech, _ = librosa.load(example["file"], sr=16000, mono=True)
example["speech"] = speech
return example
# load a demo dataset and read audio files
dataset = load_dataset("anton-l/superb_demo", "ic", split="test")
dataset = dataset.map(map_to_array)
model = Wav2Vec2ForSequenceClassification.from_pretrained("superb/wav2vec2-base-superb-ic")
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ic")
# compute attention masks and normalize the waveform if needed
inputs = feature_extractor(dataset[:4]["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
logits = model(**inputs).logits
action_ids = torch.argmax(logits[:, :6], dim=-1).tolist()
action_labels = [model.config.id2label[_id] for _id in action_ids]
object_ids = torch.argmax(logits[:, 6:20], dim=-1).tolist()
object_labels = [model.config.id2label[_id + 6] for _id in object_ids]
location_ids = torch.argmax(logits[:, 20:24], dim=-1).tolist()
location_labels = [model.config.id2label[_id + 20] for _id in location_ids]
```
## Eval results
The evaluation metric is accuracy.
| | **s3prl** | **transformers** |
|--------|-----------|------------------|
|**test**| `0.9235` | `N/A` |
### BibTeX entry and citation info
```bibtex
@article{yang2021superb,
title={SUPERB: Speech processing Universal PERformance Benchmark},
author={Yang, Shu-wen and Chi, Po-Han and Chuang, Yung-Sung and Lai, Cheng-I Jeff and Lakhotia, Kushal and Lin, Yist Y and Liu, Andy T and Shi, Jiatong and Chang, Xuankai and Lin, Guan-Ting and others},
journal={arXiv preprint arXiv:2105.01051},
year={2021}
}
``` |
bertin-project/bertin-gpt-j-6B | f3115efa9e00c50e696c13f3c0293bfc25beb7eb | 2022-07-22T14:30:30.000Z | [
"pytorch",
"gptj",
"text-generation",
"es",
"dataset:bertin-project/mc4-es-sampled",
"arxiv:2104.09864",
"arxiv:2101.00027",
"transformers",
"causal-lm",
"license:apache-2.0"
] | text-generation | false | bertin-project | null | bertin-project/bertin-gpt-j-6B | 152 | 7 | transformers | 3,988 | ---
language:
- es
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- bertin-project/mc4-es-sampled
---
- [Version v1beta3](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3): July 22nd, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3-half)*, at step 850k)
- [Version v1beta2](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2): June 6th, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2-half)*, at step 616k)
- [Version v1beta1](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta1-half): April 28th, 2022 (*half-precision weights only*, at step 408k)
- <details><summary>All checkpoints</summary>
- [Checkpoint 130k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/6c116e533a00db027bf0a2e0b5e06d3e0772e2d0).
- [Checkpoint 275k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/20f424ebcc7c500d5328ed45a8c911a2a75583f1).
- [Checkpoint 408k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/c51db24abee958efe83e52fddf6d19e5f065b818).
- [Checkpoint 616k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/abafe00bfb03330e72a67ed9fc0958c7399f2181).
- [Checkpoint 850k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/59d5064b65043f2ff2b2549e4a076854eec22b2e).
</details>
# BERTIN GPT-J-6B
<div align=center>
<img alt="BERTIN logo" src="https://huggingface.co/bertin-project/bertin-roberta-base-spanish/resolve/main/images/bertin.png" width="200px">
</div>
## Demo: https://huggingface.co/spaces/bertin-project/bertin-gpt-j-6B
## Model Description
BERTIN-GPT-J-6B is a Spanish finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
BERTIN-GPT-J-6B was finetuned on [mC4-es-sampled (gaussian)](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), a Spanish subset of mC4 sampled using perplexity values.
## Training procedure
This model was finetuned for 40 billion tokens (40,384,790,528) over 616,000 steps on a single TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
BERTIN-GPT-J-6B learns an inner representation of the Spanish language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")
```
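Beyond loading the weights, a short generation sketch is shown below. Loading in half precision and the sampling hyperparameters are illustrative choices, not settings recommended by the authors, and the prompt is only an example.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
# float16 roughly halves memory; full precision also works if enough RAM/VRAM is available.
model = AutoModelForCausalLM.from_pretrained(
    "bertin-project/bertin-gpt-j-6B", torch_dtype=torch.float16
)

prompt = "El sentido de la vida es"
inputs = tokenizer(prompt, return_tensors="pt")
# Sampling settings below are illustrative defaults, not tuned values.
output_ids = model.generate(
    **inputs, do_sample=True, max_new_tokens=50, top_p=0.95, temperature=0.8
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```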
### Limitations and Biases
As the original GPT-J model, the core functionality of BERTIN-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting BERTIN-GPT-J-6B it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon BERTIN-GPT-J-6B to produce factually accurate output.
The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending, although some preliminary remarks are given in the [BERTIN paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/download/6403/3818).
As with all language models, it is hard to predict in advance how BERTIN-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
We still have to find proper datasets to evaluate the model, so help is welcome!
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@article{BERTIN,
author = {Javier De la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury},
title = {{BERTIN}: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling},
journal = {Procesamiento del Lenguaje Natural},
volume = {68},
number = {0},
year = {2022},
keywords = {},
abstract = {The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pretraining sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget.},
issn = {1989-7553},
url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403},
pages = {13--23}
}
```
If you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
## Team
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Paulo Villegas ([paulo](https://huggingface.co/paulo))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
## Acknowledgements
This project would not have been possible without compute generously provided by the National Library of Norway and Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. And specially, to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models be liable for any results arising from the use made by third parties of these models. |
ckiplab/bert-tiny-chinese | ca5496ebfd1b6f7c95740b0a06ecbc43f3135a3b | 2022-05-10T03:28:12.000Z | [
"pytorch",
"bert",
"fill-mask",
"zh",
"transformers",
"lm-head",
"license:gpl-3.0",
"autotrain_compatible"
] | fill-mask | false | ckiplab | null | ckiplab/bert-tiny-chinese | 152 | null | transformers | 3,989 | ---
language:
- zh
thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png
tags:
- pytorch
- lm-head
- bert
- zh
license: gpl-3.0
---
# CKIP BERT Tiny Chinese
This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition).
這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。
## Homepage
- https://github.com/ckiplab/ckip-transformers
## Contributers
- [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer)
## Usage
Please use BertTokenizerFast as tokenizer instead of AutoTokenizer.
請使用 BertTokenizerFast 而非 AutoTokenizer。
```
from transformers import (
BertTokenizerFast,
AutoModel,
)
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese')
```
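For masked-token prediction (the checkpoint is tagged `lm-head`/`fill-mask`), a sketch along the same lines is shown below; the masked-LM head class and the example sentence are assumptions added for illustration.
```python
from transformers import (
    BertTokenizerFast,
    AutoModelForMaskedLM,
    pipeline,
)

tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForMaskedLM.from_pretrained('ckiplab/bert-tiny-chinese')

# The example sentence is illustrative only.
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
print(fill_mask('今天天氣真 [MASK] 。'))
```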
For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers.
有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
|
g8a9/bert-base-cased_ami18 | 8341662327fe4c4f77f7ed4b32c2ffcd1ff053d4 | 2022-05-30T12:46:33.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | false | g8a9 | null | g8a9/bert-base-cased_ami18 | 152 | null | transformers | 3,990 | Entry not found |
jamie613/mt5_correct_puntuation | d7c1e9ce081b3e779d5526b0e4d574b4bf25fedc | 2022-07-20T10:00:44.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
] | text2text-generation | false | jamie613 | null | jamie613/mt5_correct_puntuation | 152 | null | transformers | 3,991 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: mt5_correct_puntuation_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5_correct_puntuation
本模型使用中文維基百科語料微調 [google/mt5-base](https://huggingface.co/google/mt5-base)預訓練模型之中文標點符號訂正器。目前之準確率為 0.794。
This is a [google/mt5-base](https://huggingface.co/google/mt5-base) model fine-tuned on a Mandarin Wikipedia corpus for Mandarin punctuation correction. The current accuracy is 0.794.
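## Usage
A minimal inference sketch is given below. Whether the model expects a task prefix is not documented, so the raw sentence is passed as-is, and the example sentence is illustrative; treat both as assumptions.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jamie613/mt5_correct_puntuation")
model = AutoModelForSeq2SeqLM.from_pretrained("jamie613/mt5_correct_puntuation")

# Assumption: the input is the raw sentence (possibly with wrong punctuation), no task prefix.
text = "今天天氣很好。我們去公園散步吧?"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```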
## Datasets
模型使用中文維基百科公開資料微調。將取得的文本以「。」或「,」切分為不超過100字的句子。因為逗號和句號數量壓倒性地多,為盡量平衡資料集,僅保留包含冒號、分號、驚嘆號、問號的句子,作為正確句。將正確句之「,。:;、!?」隨機以「,。:;、!?」,製作為不正確句。訓練用句子共有291,112句。
The model was fine-tuned on publicly available Chinese Wikipedia data. The collected text was split on "。" or "," into sentences of at most 100 characters. Because commas and periods are overwhelmingly frequent, only sentences containing a colon, semicolon, exclamation mark, or question mark were kept as correct sentences, to keep the dataset as balanced as possible. Incorrect sentences were created by randomly replacing each punctuation mark in ",。:;、!?" within a correct sentence with another mark from the same set. The training set contains 291,112 sentences.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bloom-testing/test-bloomd-350m-efficient-forward-backward | ec106fb608d6160354cf98e494c17dc8aed6734f | 2022-07-23T11:07:21.000Z | [
"pytorch",
"bloom",
"feature-extraction",
"transformers"
] | feature-extraction | false | bloom-testing | null | bloom-testing/test-bloomd-350m-efficient-forward-backward | 152 | null | transformers | 3,992 | Entry not found |
camembert/camembert-base-wikipedia-4gb | e766d74f3ac73805575b1ae2a0e72f374274dc75 | 2020-12-11T21:35:21.000Z | [
"pytorch",
"camembert",
"fr",
"arxiv:1911.03894",
"transformers"
] | null | false | camembert | null | camembert/camembert-base-wikipedia-4gb | 151 | null | transformers | 3,993 | ---
language: fr
---
# CamemBERT: a Tasty French Language Model
## Introduction
[CamemBERT](https://arxiv.org/abs/1911.03894) is a state-of-the-art language model for French based on the RoBERTa model.
It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains.
For further information or requests, please go to [Camembert Website](https://camembert-model.fr/)
## Pre-trained models
| Model | #params | Arch. | Training data |
|--------------------------------|--------------------------------|-------|-----------------------------------|
| `camembert-base` | 110M | Base | OSCAR (138 GB of text) |
| `camembert/camembert-large` | 335M | Large | CCNet (135 GB of text) |
| `camembert/camembert-base-ccnet` | 110M | Base | CCNet (135 GB of text) |
| `camembert/camembert-base-wikipedia-4gb` | 110M | Base | Wikipedia (4 GB of text) |
| `camembert/camembert-base-oscar-4gb` | 110M | Base | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb` | 110M | Base | Subsample of CCNet (4 GB of text) |
## How to use CamemBERT with HuggingFace
##### Load CamemBERT and its sub-word tokenizer :
```python
from transformers import CamembertModel, CamembertTokenizer
# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb")
camembert.eval() # disable dropout (or leave in train mode to finetune)
```
##### Filling masks using pipeline
```python
from transformers import pipeline
camembert_fill_mask = pipeline("fill-mask", model="camembert/camembert-base-wikipedia-4gb", tokenizer="camembert/camembert-base-wikipedia-4gb")
results = camembert_fill_mask("Le camembert est un fromage de <mask>!")
# results
#[{'sequence': '<s> Le camembert est un fromage de chèvre!</s>', 'score': 0.4937814474105835, 'token': 19370},
#{'sequence': '<s> Le camembert est un fromage de brebis!</s>', 'score': 0.06255942583084106, 'token': 30616},
#{'sequence': '<s> Le camembert est un fromage de montagne!</s>', 'score': 0.04340197145938873, 'token': 2364},
# {'sequence': '<s> Le camembert est un fromage de Noël!</s>', 'score': 0.02823255956172943, 'token': 3236},
#{'sequence': '<s> Le camembert est un fromage de vache!</s>', 'score': 0.021357402205467224, 'token': 12329}]
```
##### Extract contextual embedding features from Camembert output
```python
import torch
# Tokenize in sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
# 1-hot encode and add special starting and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 221, 10, 10600, 14, 8952, 10540, 75, 1114, 6]
# NB: Can be done in one step : tokenize.encode("J'aime le camembert !")
# Feed tokens to Camembert as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
#tensor([[[-0.0928, 0.0506, -0.0094, ..., -0.2388, 0.1177, -0.1302],
# [ 0.0662, 0.1030, -0.2355, ..., -0.4224, -0.0574, -0.2802],
# [-0.0729, 0.0547, 0.0192, ..., -0.1743, 0.0998, -0.2677],
# ...,
```
##### Extract contextual embedding features from all Camembert layers
```python
from transformers import CamembertConfig
# (Need to reload the model with new config)
config = CamembertConfig.from_pretrained("camembert/camembert-base-wikipedia-4gb", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert/camembert-base-wikipedia-4gb", config=config)
embeddings, _, all_layer_embeddings = camembert(encoded_sentence)
# all_layer_embeddings list of len(all_layer_embeddings) == 13 (input embedding layer + 12 self attention layers)
all_layer_embeddings[5]
# layer 5 contextual embedding : size torch.Size([1, 10, 768])
#tensor([[[-0.0059, -0.0227, 0.0065, ..., -0.0770, 0.0369, 0.0095],
# [ 0.2838, -0.1531, -0.3642, ..., -0.0027, -0.8502, -0.7914],
# [-0.0073, -0.0338, -0.0011, ..., 0.0533, -0.0250, -0.0061],
# ...,
```
## Authors
CamemBERT was trained and evaluated by Louis Martin\*, Benjamin Muller\*, Pedro Javier Ortiz Suárez\*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
## Citation
If you use our work, please cite:
```bibtex
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
funnel-transformer/medium-base | 9a54dc2185d6ca52bd3d951b259d113cf193eacf | 2020-12-11T21:40:34.000Z | [
"pytorch",
"tf",
"funnel",
"feature-extraction",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:gigaword",
"arxiv:2006.03236",
"transformers",
"license:apache-2.0"
] | feature-extraction | false | funnel-transformer | null | funnel-transformer/medium-base | 151 | null | transformers | 3,994 | ---
language: en
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
- gigaword
---
# Funnel Transformer medium model (B6-3x2-3x2 without decoder)
Pretrained model on the English language using an objective similar to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in
[this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in
[this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing Funnel Transformer did not write a model card for this model so this model card has been
written by the Hugging Face team.
## Model description
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and
the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
**Note:** This model does not contain the decoder, so it outputs hidden states that have a sequence length of one fourth
of the inputs. It's good to use for tasks requiring a summary of the sentence (like sentence classification) but not if
you need one input per initial token. You should use the `medium` model in that case.
## Intended uses & limitations
You can use the raw model to extract a vector representation of a given text, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=funnel-transformer) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import FunnelTokenizer, FunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = FunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import FunnelTokenizer, TFFunnelBaseModel
tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/medium-base")
model = TFFunnelBaseModel.from_pretrained("funnel-transformer/medium-base")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Training data
The Funnel Transformer model was pretrained on:
- [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books,
- [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers),
- [Clue Web](https://lemurproject.org/clueweb12/), a dataset of 733,019,372 English web pages,
- [GigaWord](https://catalog.ldc.upenn.edu/LDC2011T07), an archive of newswire text data,
- [Common Crawl](https://commoncrawl.org/), a dataset of raw web pages.
### BibTeX entry and citation info
```bibtex
@misc{dai2020funneltransformer,
title={Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing},
author={Zihang Dai and Guokun Lai and Yiming Yang and Quoc V. Le},
year={2020},
eprint={2006.03236},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
hfl/rbt4 | a62c22ddef442fb4ea9c25fd26e26467e7bc5da0 | 2021-05-19T19:21:20.000Z | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"zh",
"arxiv:1906.08101",
"arxiv:2004.13922",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] | fill-mask | false | hfl | null | hfl/rbt4 | 151 | 2 | transformers | 3,995 | ---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# This is a re-trained 4-layer RoBERTa-wwm-ext model.
## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
This repository is developed based on: https://github.com/google-research/bert
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
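## Usage
A minimal fill-mask sketch is given below. The checkpoint is tagged `bert` and `fill-mask`, so the default Auto classes are assumed to resolve to the BERT implementation (despite the "RoBERTa-wwm-ext" name), and the example sentence is illustrative only.
```python
from transformers import pipeline

# Assumption: standard BERT-style masked-LM usage works for this checkpoint.
fill_mask = pipeline("fill-mask", model="hfl/rbt4")
print(fill_mask("中国的首都是[MASK]京。"))  # illustrative example sentence
```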
## Citation
If you find the technical report or resources useful, please cite the following in your paper.
- Primary: https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
- Secondary: https://arxiv.org/abs/1906.08101
```
@article{chinese-bert-wwm,
title={Pre-Training with Whole Word Masking for Chinese BERT},
author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
journal={arXiv preprint arXiv:1906.08101},
year={2019}
}
```
|
khalidalt/DeBERTa-v3-large-mnli | 918cffe0594a2dd41924d5e0dcccb8058ebb79a4 | 2021-11-22T08:38:23.000Z | [
"pytorch",
"deberta-v2",
"text-classification",
"en",
"arxiv:2006.03654",
"transformers",
"zero-shot-classification"
] | text-classification | false | khalidalt | null | khalidalt/DeBERTa-v3-large-mnli | 151 | 1 | transformers | 3,996 | ---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie."
---
# DeBERTa-v3-large-mnli
## Model description
This model was trained on the Multi-Genre Natural Language Inference (MultiNLI) dataset, which consists of 433k sentence pairs annotated with textual entailment information.
The base model is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-large). DeBERTa v3 outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is available in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654).
## Intended uses & limitations
#### How to use the model
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("khalidalt/DeBERTa-v3-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("khalidalt/DeBERTa-v3-large-mnli").to(device)

premise = "The Movie have been criticized for the story. However, I think it is a great movie."
hypothesis = "I liked the movie."
input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1)
label_names = ["entailment", "neutral", "contradiction"]
print(label_names[prediction.argmax(0).tolist()])
```
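#### Zero-shot classification
Because the card also lists the `zero-shot-classification` tag, the same checkpoint can be plugged into the zero-shot pipeline. This sketch assumes the checkpoint's config maps its three classes to entailment/neutral/contradiction (as in the snippet above); the candidate labels are illustrative.
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="khalidalt/DeBERTa-v3-large-mnli")
# Candidate labels are illustrative; any label set can be supplied at inference time.
result = classifier(
    "The Movie have been criticized for the story. However, I think it is a great movie.",
    candidate_labels=["positive review", "negative review", "neutral review"],
)
print(result["labels"][0], result["scores"][0])
```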
### Training data
This model was trained on the MultiNLI dataset, whose training split consists of 392K sentence pairs annotated with textual entailment information.
### Training procedure
DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters.
```
train_args = TrainingArguments(
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=3,
warmup_ratio=0.06,
weight_decay=0.1,
fp16=True,
seed=42,
)
```
### BibTeX entry and citation info
Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and [MultiNLI Dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model and include this Huggingface hub. |
mrm8488/spanbert-large-finetuned-squadv1 | c8429f15e6c6881a5aa5b1eafbbee42a57e2e9df | 2021-05-20T00:58:31.000Z | [
"pytorch",
"jax",
"bert",
"en",
"arxiv:1907.10529",
"transformers"
] | null | false | mrm8488 | null | mrm8488/spanbert-large-finetuned-squadv1 | 151 | null | transformers | 3,997 | ---
language: en
thumbnail:
---
# SpanBERT large fine-tuned on SQuAD v1
[SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 1.1](https://rajpurkar.github.io/SQuAD-explorer/explore/1.1/dev/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)).
## Details of SpanBERT
[SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529)
## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓
[SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer/)
## Model fine-tuning 🏋️
You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT)
```bash
python code/run_squad.py \
--do_train \
--do_eval \
--model spanbert-large-cased \
--train_file train-v1.1.json \
--dev_file dev-v1.1.json \
--train_batch_size 32 \
--eval_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 4 \
--max_seq_length 512 \
--doc_stride 128 \
--eval_metric f1 \
--output_dir squad_output \
--fp16
```
## Results Comparison 📝
| | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED |
| ---------------------- | ------------- | --------- | ------- | ------ |
| | F1 | F1 | avg. F1 | F1 |
| BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 |
| SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) |
| BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 |
| SpanBERT (large) | **94.6** (this) | [88.7](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv2) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) |
Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers.
## Model in action
Fast usage with **pipelines**:
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="mrm8488/spanbert-large-finetuned-squadv1",
tokenizer="SpanBERT/spanbert-large-cased"
)
qa_pipeline({
'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately",
'question': "How has been working Manuel Romero lately?"
})
# Output: {'answer': 'very hard in the repository hugginface/transformers',
#          'end': 82,
#          'score': 0.327230326857725,
#          'start': 31}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
lirondos/anglicisms-spanish-flair-cs | ce0e1a1e7ea5ffc10e34925d6ddfbefb341fb009 | 2022-05-16T14:02:15.000Z | [
"pytorch",
"es",
"dataset:coalas",
"flair",
"anglicisms",
"loanwords",
"borrowing",
"codeswitching",
"token-classification",
"sequence-tagger-model",
"arxiv:2203.16169",
"license:cc-by-4.0"
] | token-classification | false | lirondos | null | lirondos/anglicisms-spanish-flair-cs | 151 | null | flair | 3,998 | ---
language:
- es
license: cc-by-4.0
tags:
- anglicisms
- loanwords
- borrowing
- codeswitching
- flair
- token-classification
- sequence-tagger-model
- arxiv:2203.16169
datasets:
- coalas
widget:
- text: "Las fake news sobre la celebrity se reprodujeron por los 'mass media' en prime time."
- text: "En la 'red carpet' lució un look muy urban con chunky shoes de inspiración anime."
- text: "Benching, estar en el banquillo de tu crush mientras otro juega de titular."
- text: "Recetas de noviembre para el batch cooking."
- text: "Buscamos data scientist con conocimientos de machine learning y blockchain."
---
# anglicisms-spanish-flair-cs
This is a pretrained model for detecting unassimilated English lexical borrowings (a.k.a. anglicisms) on Spanish newswire. This model labels words of foreign origin (fundamentally from English) used in Spanish language, words such as *fake news*, *machine learning*, *smartwatch*, *influencer* or *streaming*.
The model is a BiLSTM-CRF model fed with [Transformer-based embeddings pretrained on codeswitched data](https://huggingface.co/sagorsarker/codeswitch-spaeng-lid-lince) along with subword embeddings (BPE and character embeddings). The model was trained on the [COALAS](https://github.com/lirondos/coalas/) corpus for the task of detecting lexical borrowings.
The model considers two labels:
* ``ENG``: For English lexical borrowings (*smartphone*, *online*, *podcast*)
* ``OTHER``: For lexical borrowings from any other language (*boutique*, *anime*, *umami*)
The model uses BIO encoding to account for multitoken borrowings.
**⚠ There is another [mBERT -based model](https://huggingface.co/lirondos/anglicisms-spanish-mbert) for this same task trained using the ``Transformers`` library**. That model however produced worse results than this Flair-based model (F1 = 83.55).
## Metrics (on the test set)
Results obtained on the test set of the [COALAS](https://github.com/lirondos/coalas/) corpus.
| LABEL | Precision | Recall | F1 |
|:-------|-----:|-----:|---------:|
| ALL | 90.14 | 81.79 | 85.76 |
| ENG | 90.16 | 84.34 | 87.16 |
| OTHER | 85.71 | 13.04 | 22.64 |
## Dataset
This model was trained on [COALAS](https://github.com/lirondos/coalas/), a corpus of Spanish newswire annotated with unassimilated lexical borrowings. The corpus contains 370,000 tokens and includes various written media written in European Spanish. The test set was designed to be as difficult as possible: it covers sources and dates not seen in the training set, includes a high number of OOV words (92% of the borrowings in the test set are OOV) and is very borrowing-dense (20 borrowings per 1,000 tokens).
|Set | Tokens | ENG | OTHER | Unique |
|:-------|-----:|-----:|---------:|---------:|
|Training |231,126 |1,493 | 28 |380 |
|Development |82,578 |306 |49 |316|
|Test |58,997 |1,239 |46 |987|
|**Total** |372,701 |3,038 |123 |1,683 |
## More info
More information about the dataset, model experimentation and error analysis can be found in the paper: *[Detecting Unassimilated Borrowings in Spanish: An Annotated Corpus and Approaches to Modeling](https://aclanthology.org/2022.acl-long.268/)*.
## How to use
```
from flair.data import Sentence
from flair.models import SequenceTagger
import pathlib
import os
if os.name == 'nt': # Minor patch needed if you are running from Windows
temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath
tagger = SequenceTagger.load("lirondos/anglicisms-spanish-flair-cs")
text = "Las fake news sobre la celebrity se reprodujeron por los mass media en prime time."
sentence = Sentence(text)
# predict tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted borrowing spans
print('The following borrowing were found:')
for entity in sentence.get_spans():
print(entity)
```
## Citation
If you use this model, please cite the following reference:
```
@inproceedings{alvarez-mellado-lignos-2022-detecting,
title = "Detecting Unassimilated Borrowings in {S}panish: {A}n Annotated Corpus and Approaches to Modeling",
author = "{\'A}lvarez-Mellado, Elena and
Lignos, Constantine",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.268",
pages = "3868--3888",
abstract = "This work presents a new resource for borrowing identification and analyzes the performance and errors of several models on this task. We introduce a new annotated corpus of Spanish newswire rich in unassimilated lexical borrowings{---}words from one language that are introduced into another without orthographic adaptation{---}and use it to evaluate how several sequence labeling models (CRF, BiLSTM-CRF, and Transformer-based models) perform. The corpus contains 370,000 tokens and is larger, more borrowing-dense, OOV-rich, and topic-varied than previous corpora available for this task. Our results show that a BiLSTM-CRF model fed with subword embeddings along with either Transformer-based embeddings pretrained on codeswitched data or a combination of contextualized word embeddings outperforms results obtained by a multilingual BERT-based model.",
}
```
|
Souvikcmsa/BERT_sentiment_analysis | 1d2f8b455cc8f74f001bf9680f682dd5a6d0827b | 2022-04-21T17:17:04.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Souvikcmsa/autotrain-data-sentiment_analysis",
"transformers",
"autotrain",
"co2_eq_emissions"
] | text-classification | false | Souvikcmsa | null | Souvikcmsa/BERT_sentiment_analysis | 151 | null | transformers | 3,999 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
- Output: "Positive"
datasets:
- Souvikcmsa/autotrain-data-sentiment_analysis
co2_eq_emissions: 0.029363397844935534
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification (3-class Sentiment Classification)
## Validation Metrics
If you search for a sentiment analysis model on the Hugging Face Hub, you will find a model from finiteautomata that reports micro and macro F1 scores of around 67%. This model reaches roughly 80% on both macro and micro F1.
- Loss: 0.4992932379245758
- Accuracy: 0.799017824663514
- Macro F1: 0.8021508522962549
- Micro F1: 0.799017824663514
- Weighted F1: 0.7993775463659935
- Macro Precision: 0.80406197665167
- Micro Precision: 0.799017824663514
- Weighted Precision: 0.8000374433849405
- Macro Recall: 0.8005261994732908
- Micro Recall: 0.799017824663514
- Weighted Recall: 0.799017824663514
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Souvikcmsa/autotrain-sentiment_analysis-762923428
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923428", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Souvikcmsa/autotrain-sentiment_analysis-762923428", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
OR
```
from transformers import pipeline
classifier = pipeline("text-classification", model = "Souvikcmsa/BERT_sentiment_analysis")
classifier("I loved Star Wars so much!")# Positive
classifier("A soccer game with multiple males playing. Some men are playing a sport.")# Neutral
``` |