| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-22 00:45:16) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (570 classes) | tags (list, 1-4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-22 00:43:28) | card (string, 11-1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
m3hrdadfi/wav2vec2-large-xlsr-persian-v3
|
m3hrdadfi
| 2021-11-04T15:22:11Z | 1,900 | 37 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"fa",
"dataset:common_voice",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: fa
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/resolve/main/sample1.flac
- example_title: Common Voice sample 2978
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/resolve/main/sample2978.flac
- example_title: Common Voice sample 5168
src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/resolve/main/sample5168.flac
model-index:
- name: XLSR Wav2Vec2 Persian (Farsi) V3 by Mehrdad Farahani
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fa
type: common_voice
args: fa
metrics:
- name: Test WER
type: wer
value: 10.36
---
# Wav2Vec2-Large-XLSR-53-Persian V3
## Usage
This model is [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) fine-tuned on Persian (Farsi) using [Common Voice](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16 kHz.
**Requirements**
```bash
# requirement packages
!pip install git+https://github.com/huggingface/datasets.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install torchaudio
!pip install librosa
!pip install jiwer
!pip install parsivar
!pip install num2fawords
```
**Normalizer**
```bash
# Normalizer
!wget -O dictionary.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/raw/main/dictionary.py
!wget -O normalizer.py https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3/raw/main/normalizer.py
```
**Downloading data**
```bash
wget https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/fa.tar.gz
tar -xzf fa.tar.gz
rm -rf fa.tar.gz
```
**Cleaning**
```python
import os
import pandas as pd
from normalizer import normalizer

def cleaning(text):
    if not isinstance(text, str):
        return None
    return normalizer({"sentence": text}, return_dict=False)

data_dir = "/content/cv-corpus-6.1-2020-12-11/fa"

# Common Voice metadata files are tab-separated
test = pd.read_csv(f"{data_dir}/test.tsv", sep="\t")
test["path"] = data_dir + "/clips/" + test["path"]
print(f"Step 0: {len(test)}")

# keep only rows whose audio file actually exists on disk
test["status"] = test["path"].apply(lambda path: True if os.path.exists(path) else None)
test = test.dropna(subset=["status"])
test = test.drop("status", axis=1)
print(f"Step 1: {len(test)}")

test["sentence"] = test["sentence"].apply(lambda t: cleaning(t))
test = test.dropna(subset=["sentence"])
print(f"Step 2: {len(test)}")

test = test.reset_index(drop=True)
print(test.head())

test = test[["path", "sentence"]]
test.to_csv("/content/test.csv", sep="\t", encoding="utf-8", index=False)
```
**Prediction**
```python
import numpy as np
import pandas as pd
import librosa
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import IPython.display as ipd
model_name_or_path = "m3hrdadfi/wav2vec2-large-xlsr-persian-v3"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(model_name_or_path, device)
processor = Wav2Vec2Processor.from_pretrained(model_name_or_path)
model = Wav2Vec2ForCTC.from_pretrained(model_name_or_path).to(device)
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    # resample every clip to the 16 kHz rate expected by the model
    speech_array = librosa.resample(np.asarray(speech_array), orig_sr=sampling_rate, target_sr=processor.feature_extractor.sampling_rate)
    batch["speech"] = speech_array
    return batch

def predict(batch):
    features = processor(
        batch["speech"],
        sampling_rate=processor.feature_extractor.sampling_rate,
        return_tensors="pt",
        padding=True
    )
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    return batch

dataset = load_dataset("csv", data_files={"test": "/content/test.csv"}, delimiter="\t")["test"]
dataset = dataset.map(speech_file_to_array_fn)
result = dataset.map(predict, batched=True, batch_size=4)
```
**WER Score**
```python
wer = load_metric("wer")
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```
**Output**
```python
max_items = np.random.randint(0, len(result), 20).tolist()
for i in max_items:
reference, predicted = result["sentence"][i], result["predicted"][i]
print("reference:", reference)
print("predicted:", predicted)
print('---')
```
```text
reference: ماجرا رو براش تعریف کردم اون گفت مریم اگه میدونی پسر خوبیه خب چه اشکالی داره باهاش بیشتر اشنا بشو
predicted: ماجرا رو براش تعریف کردم اون گفت مریم اگه میدونی پسر خوبیه خب چه اشکالی داره باهاش بیشتر اشنا بشو
---
reference: بیا پایین تو اجازه نداری بری اون بالا
predicted: بیا پایین تو اجازه نداری بری اون بالا
---
reference: هر روز یک دو مداد کش می رفتتم تااین که تا پایان ترم از تمامی دوستانم مداد برداشته بودم
predicted: هر روز یک دو مداد کش می رفتم تااین که تا پایین ترم از تمامی دوستان و مداد برداشته بودم
---
reference: فکر میکنی آروم میشینه
predicted: فکر میکنی آروم میشینه
---
reference: هرکسی با گوشی هوشمند خود میتواند با کایلا متصل گردد در یک محدوده مکانی
predicted: هرکسی با گوشی هوشمند خود میتواند با کایلا متصل گردد در یک محدوده مکانی
---
reference: برو از مهرداد بپرس
predicted: برو از مهرداد بپرس
---
reference: می خواهم شما را با این قدمها آشنا کنم
predicted: می خواهم شما را با این قدمها آشنا کنم
---
reference: میدونم یه روز دوباره می تونم تو رو ببینم
predicted: میدونم یه روز دوباره می تونم تو رو ببینم
---
reference: بسیار خوب خواهد بود دعوت او را بپذیری
predicted: بسیار خوب خواهد بود دعوت او را بپذیری
---
reference: بهت بگن آشغالی خوبه
predicted: بهت بگن آشغالی خوبه
---
reference: چرا معاشرت با هم ایمانان ما را محفوظ نگه میدارد
predicted: چرا معاشرت با هم ایمانان آ را م حفوظ نگه میدارد
---
reference: بولیوی پس از گویان فقیرترین کشور آمریکای جنوبی است
predicted: بولیوی پس از گویان فقیرترین کشور آمریکای جنوبی است
---
reference: بعد از مدتی اینکار برایم عادی شد
predicted: بعد از مدتی اینکار برایم عادو شد
---
reference: به نظر اون هم همینطوره
predicted: به نظر اون هم همینطوره
---
reference: هیچ مایونز ی دارید
predicted: هیچ مایونز ی دارید
---
reference: هیچ یک از انان کاری به سنگ نداشتند
predicted: هیچ شک از انان کاری به سنگ نداشتند
---
reference: می خواهم کمی کتاب شعر ببینم
predicted: می خواهم کتاب شعر ببینم
---
reference: همین شوهر فهیمه مگه نمی گفتی فرمانده بوده کو
predicted: همین شوهر فهیمه بینامی گفتی فهمانده بود کو
---
reference: اون جاها کسی رو نمیبینی که تو دستش کتاب نباشه
predicted: اون جاها کسی رو نمیبینی که تو دستش کتاب نباشه
---
reference: زندان رفتن من در این سالهای اخیر برام شانس بزرگی بود که معما و مشکل چندین سالهام را حل کرد
predicted: زندان رفتن من در این سالها اخی براب شانس بزرگی بود که معما و مشکل چندین سالهام را حل کرد
---
```
## Evaluation
**Test Result:**
- WER: 10.36%
|
patrickvonplaten/hello_2b_3
|
patrickvonplaten
| 2021-11-04T15:11:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: hello_2b_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hello_2b_3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5615
- Wer: 0.9808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6389 | 0.92 | 100 | 3.6218 | 1.0 |
| 1.6676 | 1.85 | 200 | 3.2655 | 1.0 |
| 0.3067 | 2.77 | 300 | 3.2273 | 1.0 |
| 0.1924 | 3.7 | 400 | 3.0238 | 0.9999 |
| 0.1777 | 4.63 | 500 | 2.1606 | 0.9991 |
| 0.1481 | 5.55 | 600 | 1.8742 | 0.9982 |
| 0.1128 | 6.48 | 700 | 2.0114 | 0.9994 |
| 0.1806 | 7.4 | 800 | 1.9032 | 0.9984 |
| 0.0399 | 8.33 | 900 | 2.0556 | 0.9996 |
| 0.0729 | 9.26 | 1000 | 2.0515 | 0.9987 |
| 0.0847 | 10.18 | 1100 | 2.2121 | 0.9995 |
| 0.0777 | 11.11 | 1200 | 1.7002 | 0.9923 |
| 0.0476 | 12.04 | 1300 | 1.5262 | 0.9792 |
| 0.0518 | 12.96 | 1400 | 1.5990 | 0.9832 |
| 0.071 | 13.88 | 1500 | 1.6326 | 0.9875 |
| 0.0333 | 14.81 | 1600 | 1.5955 | 0.9870 |
| 0.0369 | 15.74 | 1700 | 1.5577 | 0.9832 |
| 0.0689 | 16.66 | 1800 | 1.5415 | 0.9839 |
| 0.0227 | 17.59 | 1900 | 1.5450 | 0.9878 |
| 0.0472 | 18.51 | 2000 | 1.5642 | 0.9846 |
| 0.0214 | 19.44 | 2100 | 1.6103 | 0.9846 |
| 0.0289 | 20.37 | 2200 | 1.6467 | 0.9898 |
| 0.0182 | 21.29 | 2300 | 1.5268 | 0.9780 |
| 0.0439 | 22.22 | 2400 | 1.6001 | 0.9818 |
| 0.06 | 23.15 | 2500 | 1.5481 | 0.9813 |
| 0.0351 | 24.07 | 2600 | 1.5672 | 0.9820 |
| 0.0198 | 24.99 | 2700 | 1.6303 | 0.9856 |
| 0.0328 | 25.92 | 2800 | 1.5958 | 0.9831 |
| 0.0245 | 26.85 | 2900 | 1.5745 | 0.9809 |
| 0.0885 | 27.77 | 3000 | 1.5455 | 0.9809 |
| 0.0224 | 28.7 | 3100 | 1.5378 | 0.9824 |
| 0.0223 | 29.63 | 3200 | 1.5642 | 0.9810 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
aheba31/test-predictor
|
aheba31
| 2021-11-04T13:44:28Z | 8 | 0 |
speechbrain
|
[
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"en",
"dataset:voxceleb",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA
- TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN embeddings on Voxceleb
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on Voxceleb1 + Voxceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the Voxceleb1-test set (Cleaned) is:
| Release | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|
| 05-03-21 | 0.69 | 0.08258 |
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
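As an illustration of this scoring step, the cosine similarity between two embeddings returned by `encode_batch` (see the usage examples below) can be computed as follows; this helper is a minimal sketch, not part of the released recipe:

```python
import torch.nn.functional as F

def cosine_score(emb_a, emb_b):
    # emb_a and emb_b are embeddings from classifier.encode_batch(),
    # each of shape [1, 1, emb_dim]; flatten them before comparing.
    return F.cosine_similarity(emb_a.flatten(), emb_b.flatten(), dim=0).item()

# Scores close to 1 suggest the same speaker; the decision threshold
# should be tuned on held-out verification trials.
```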
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
gh repo clone aheba/speechbrain-aheba-contribs
cd speechbrain-aheba-contribs
git checkout pretrain_new
pip install -r requirements.txt
pip install --editable .
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import Predictor
classifier = Predictor.import_model(source="aheba31/test-predictor")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="aheba31/test-predictor", savedir="aheba31/test-predictor")
score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-voxceleb/example1.wav", "speechbrain/spkrec-ecapa-voxceleb/example2.flac")
```
The prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
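For example (a minimal sketch reusing the `source` and `savedir` values from above):

```python
from speechbrain.pretrained import SpeakerRecognition

# Run the verification model on GPU by passing run_opts to from_hparams.
verification = SpeakerRecognition.from_hparams(
    source="aheba31/test-predictor",
    savedir="aheba31/test-predictor",
    run_opts={"device": "cuda"},
)
```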
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/VoxCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
TalTechNLP/voxlingua107-epaca-tdnn-ce
|
TalTechNLP
| 2021-11-04T13:37:25Z | 16 | 3 |
speechbrain
|
[
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
language: multilingual
thumbnail:
tags:
- audio-classification
- speechbrain
- embeddings
- Language
- Identification
- pytorch
- ECAPA-TDNN
- TDNN
- VoxLingua107
license: "apache-2.0"
datasets:
- VoxLingua107
metrics:
- Accuracy
widget:
- example_title: English Sample
src: https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac
---
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model (CE)
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn-ce", savedir="tmp")
# Download Thai language sample from Omniglot and convert to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
-3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
-2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
-3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
-2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
-2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
-3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
-2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
-2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
-3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
-2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
-4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
-3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
-2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
-2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
-2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
-3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
-2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
-2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
-2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
-3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
-2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as log-likelihoods that
# the given utterance belongs to the given language (i.e., the larger the better)
# The linear-scale likelihood can be retrieved using the following:
print(prediction[1].exp())
tensor([0.9850])
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
AkshaySg/langid
|
AkshaySg
| 2021-11-04T12:38:18Z | 1 | 5 |
speechbrain
|
[
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:04Z |
---
language: multilingual
thumbnail:
tags:
- audio-classification
- speechbrain
- embeddings
- Language
- Identification
- pytorch
- ECAPA-TDNN
- TDNN
- VoxLingua107
license: "apache-2.0"
datasets:
- VoxLingua107
metrics:
- Accuracy
widget:
- example_title: English Sample
src: https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac
---
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")
# Download Thai language sample from Omniglot and convert to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508,
0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997,
0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256,
0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944,
0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950,
0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777,
0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193,
0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364,
0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017,
0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464,
0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838,
0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as cosine scores between
# the languages and the given utterance (i.e., the larger the better)
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
gayanin/bart-finetuned-pubmed
|
gayanin
| 2021-11-04T11:03:30Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-pubmed
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5363
- Rouge2 Precision: 0.3459
- Rouge2 Recall: 0.2455
- Rouge2 Fmeasure: 0.2731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 1.652 | 1.0 | 1125 | 1.5087 | 0.3647 | 0.2425 | 0.2772 |
| 1.4695 | 2.0 | 2250 | 1.5039 | 0.3448 | 0.2457 | 0.2732 |
| 1.3714 | 3.0 | 3375 | 1.4842 | 0.3509 | 0.2474 | 0.277 |
| 1.2734 | 4.0 | 4500 | 1.4901 | 0.3452 | 0.2426 | 0.2716 |
| 1.1853 | 5.0 | 5625 | 1.5152 | 0.3658 | 0.2371 | 0.2744 |
| 1.0975 | 6.0 | 6750 | 1.5133 | 0.3529 | 0.2417 | 0.2729 |
| 1.0448 | 7.0 | 7875 | 1.5203 | 0.3485 | 0.2464 | 0.275 |
| 0.9999 | 8.0 | 9000 | 1.5316 | 0.3437 | 0.2435 | 0.2719 |
| 0.9732 | 9.0 | 10125 | 1.5338 | 0.3464 | 0.2446 | 0.2732 |
| 0.954 | 10.0 | 11250 | 1.5363 | 0.3459 | 0.2455 | 0.2731 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
nikhil6041/wav2vec2-large-xlsr-hindi-demo-colab
|
nikhil6041
| 2021-11-04T09:21:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindi-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
philschmid/RoBERTa-Banking77
|
philschmid
| 2021-11-04T09:12:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:banking77",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags: autonlp
language: en
widget:
- text: "I am still waiting on my card?"
datasets:
- banking77
model-index:
- name: RoBERTa-Banking77
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: "BANKING77"
type: banking77
metrics:
- name: Accuracy
type: accuracy
value: 93.51
- name: Macro F1
type: macro-f1
value: 93.49
- name: Weighted F1
type: weighted-f1
value: 93.49
---
# `RoBERTa-Banking77` trained using autoNLP
- Problem type: Multi-class Classification
## Validation Metrics
- Loss: 0.27382662892341614
- Accuracy: 0.935064935064935
- Macro F1: 0.934939412967268
- Micro F1: 0.935064935064935
- Weighted F1: 0.934939412967268
- Macro Precision: 0.9372295644352715
- Micro Precision: 0.935064935064935
- Weighted Precision: 0.9372295644352717
- Macro Recall: 0.9350649350649349
- Micro Recall: 0.935064935064935
- Weighted Recall: 0.935064935064935
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/philschmid/RoBERTa-Banking77
```
Or Python API:
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/RoBERTa-Banking77'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
```
|
nateraw/lightweight-gan-flowers-64
|
nateraw
| 2021-11-04T09:11:04Z | 0 | 4 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Flowers GAN
<a href="https://colab.research.google.com/github/nateraw/huggingface-hub-examples/blob/main/pytorch_lightweight_gan.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Give the [Github Repo](https://github.com/nateraw/huggingface-hub-examples) a ⭐️
### Generated Images
<video width="320" height="240" controls>
<source src="https://huggingface.co/nateraw/lightweight-gan-flowers-64/resolve/main/generated.mp4" type="video/mp4">
</video>
### EMA
<video width="320" height="240" controls>
<source src="https://huggingface.co/nateraw/lightweight-gan-flowers-64/resolve/main/ema.mp4" type="video/mp4">
</video>
|
Lucdi90/DialoGPT-medium-XiaoBot
|
Lucdi90
| 2021-11-04T08:54:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- conversational
---
# XiaoBot for Discord
This model was trained by following this [tutorial](https://youtu.be/UjDpW_SOrlw).
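Since this is a DialoGPT-medium conversational model, a minimal chat sketch following the usual DialoGPT pattern might look like the example below; everything beyond the model id is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lucdi90/DialoGPT-medium-XiaoBot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a user message, append the end-of-sequence token, and generate a reply.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```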
|
histinct7002/distilbert-base-uncased-finetuned-ner
|
histinct7002
| 2021-11-04T07:14:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9334444444444444
- name: Recall
type: recall
value: 0.9398142969012194
- name: F1
type: f1
value: 0.9366185406098445
- name: Accuracy
type: accuracy
value: 0.9845425516704529
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0727
- Precision: 0.9334
- Recall: 0.9398
- F1: 0.9366
- Accuracy: 0.9845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0271 | 1.0 | 878 | 0.0656 | 0.9339 | 0.9339 | 0.9339 | 0.9840 |
| 0.0136 | 2.0 | 1756 | 0.0703 | 0.9268 | 0.9380 | 0.9324 | 0.9838 |
| 0.008 | 3.0 | 2634 | 0.0727 | 0.9334 | 0.9398 | 0.9366 | 0.9845 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
Roy029/japanese-roberta-base-finetuned-wikitext2
|
Roy029
| 2021-11-04T05:25:22Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: japanese-roberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# japanese-roberta-base-finetuned-wikitext2
This model is a fine-tuned version of [rinna/japanese-roberta-base](https://huggingface.co/rinna/japanese-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 3.4128 |
| No log | 2.0 | 36 | 3.1374 |
| No log | 3.0 | 54 | 3.2285 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
ashraq/dv-electra-small-news-classification
|
ashraq
| 2021-11-03T22:31:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
widget:
- text: 'ގޫގަލް ޕިކްސަލް 6 ގެ ކެމެރާ، އޭއައި ގެ ޖާދޫއިން ފުރިފައި'
---
# The [ELECTRA-small](https://huggingface.co/ashraq/dv-electra-small) model fine-tuned for news classification in Dhivehi
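A minimal usage sketch with the text-classification pipeline, reusing the widget example above (the pipeline call is an assumption based on the model's tags, not an official snippet):

```python
from transformers import pipeline

# Classify a Dhivehi news headline with the fine-tuned ELECTRA-small model.
classifier = pipeline("text-classification", model="ashraq/dv-electra-small-news-classification")
print(classifier("ގޫގަލް ޕިކްސަލް 6 ގެ ކެމެރާ، އޭއައި ގެ ޖާދޫއިން ފުރިފައި"))
```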
|
patrickvonplaten/hello_2b
|
patrickvonplaten
| 2021-11-03T19:58:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: hello_2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hello_2b
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2725
- Wer: 0.9531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1646 | 0.92 | 100 | 3.2106 | 1.0 |
| 0.368 | 1.85 | 200 | 2.9963 | 1.0 |
| 0.2252 | 2.77 | 300 | 2.8078 | 0.9999 |
| 0.1546 | 3.7 | 400 | 2.3458 | 0.9996 |
| 0.1468 | 4.63 | 500 | 2.0086 | 0.9986 |
| 0.1261 | 5.55 | 600 | 1.8269 | 0.9985 |
| 0.1206 | 6.48 | 700 | 1.7347 | 0.9956 |
| 0.1959 | 7.4 | 800 | 1.6819 | 0.9955 |
| 0.0502 | 8.33 | 900 | 1.6809 | 0.9965 |
| 0.0811 | 9.26 | 1000 | 1.6674 | 0.9916 |
| 0.0534 | 10.18 | 1100 | 1.5719 | 0.9898 |
| 0.0402 | 11.11 | 1200 | 1.4620 | 0.9821 |
| 0.057 | 12.04 | 1300 | 1.3015 | 0.9554 |
| 0.0385 | 12.96 | 1400 | 1.3798 | 0.9600 |
| 0.0422 | 13.88 | 1500 | 1.3538 | 0.9699 |
| 0.014 | 14.81 | 1600 | 1.2507 | 0.9443 |
| 0.0232 | 15.74 | 1700 | 1.3318 | 0.9465 |
| 0.0554 | 16.66 | 1800 | 1.2784 | 0.9462 |
| 0.0316 | 17.59 | 1900 | 1.2503 | 0.9481 |
| 0.0524 | 18.51 | 2000 | 1.3920 | 0.9604 |
| 0.0142 | 19.44 | 2100 | 1.4224 | 0.9698 |
| 0.0288 | 20.37 | 2200 | 1.3475 | 0.9635 |
| 0.0106 | 21.29 | 2300 | 1.2232 | 0.9264 |
| 0.0396 | 22.22 | 2400 | 1.3323 | 0.9615 |
| 0.0349 | 23.15 | 2500 | 1.2741 | 0.9587 |
| 0.0121 | 24.07 | 2600 | 1.2671 | 0.9586 |
| 0.0224 | 24.99 | 2700 | 1.3001 | 0.9611 |
| 0.0449 | 25.92 | 2800 | 1.2777 | 0.9572 |
| 0.0186 | 26.85 | 2900 | 1.2766 | 0.9607 |
| 0.0365 | 27.77 | 3000 | 1.2935 | 0.9598 |
| 0.0105 | 28.7 | 3100 | 1.2761 | 0.9588 |
| 0.021 | 29.63 | 3200 | 1.2686 | 0.9528 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
whaleloops/phrase-bert
|
whaleloops
| 2021-11-03T15:04:02Z | 7,172 | 20 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2109.06304",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# whaleloops/phrase-bert
This is the official repository for the EMNLP 2021 long paper [Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration](https://arxiv.org/abs/2109.06304). We provide [code](https://github.com/sf-wa-326/phrase-bert-topic-model) for training and evaluating Phrase-BERT in addition to the datasets used in the paper.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Our model is tested on pytorch=1.9.0, transformers=4.8.1, sentence-transformers=2.1.0.
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
phrase_list = [ 'play an active role', 'participate actively', 'active lifestyle']
model = SentenceTransformer('whaleloops/phrase-bert')
phrase_embs = model.encode( phrase_list )
[p1, p2, p3] = phrase_embs
```
As in sentence-BERT, the default output is a list of numpy arrays:
````
for phrase, embedding in zip(phrase_list, phrase_embs):
print("Phrase:", phrase)
print("Embedding:", embedding)
print("")
````
An example of computing the dot product of phrase embeddings:
````
import numpy as np
print(f'The dot product between phrase 1 and 2 is: {np.dot(p1, p2)}')
print(f'The dot product between phrase 1 and 3 is: {np.dot(p1, p3)}')
print(f'The dot product between phrase 2 and 3 is: {np.dot(p2, p3)}')
````
An example of computing cosine similarity of phrase embeddings:
````
import torch
from torch import nn
cos_sim = nn.CosineSimilarity(dim=0)
print(f'The cosine similarity between phrase 1 and 2 is: {cos_sim( torch.tensor(p1), torch.tensor(p2))}')
print(f'The cosine similarity between phrase 1 and 3 is: {cos_sim( torch.tensor(p1), torch.tensor(p3))}')
print(f'The cosine similarity between phrase 2 and 3 is: {cos_sim( torch.tensor(p2), torch.tensor(p3))}')
````
The output should look like:
````
The dot product between phrase 1 and 2 is: 218.43600463867188
The dot product between phrase 1 and 3 is: 165.48483276367188
The dot product between phrase 2 and 3 is: 160.51708984375
The cosine similarity between phrase 1 and 2 is: 0.8142536282539368
The cosine similarity between phrase 1 and 3 is: 0.6130303144454956
The cosine similarity between phrase 2 and 3 is: 0.584893524646759
````
## Evaluation
Given the lack of a unified phrase embedding evaluation benchmark, we collect the following five phrase semantics evaluation tasks, which are described further in our paper:
* Turney [[Download](https://storage.googleapis.com/phrase-bert/turney/data.txt) ]
* BiRD [[Download](https://storage.googleapis.com/phrase-bert/bird/data.txt)]
* PPDB [[Download](https://storage.googleapis.com/phrase-bert/ppdb/examples.json)]
* PPDB-filtered [[Download](https://storage.googleapis.com/phrase-bert/ppdb_exact/examples.json)]
* PAWS-short [[Download Train-split](https://storage.googleapis.com/phrase-bert/paws_short/train_examples.json) ] [[Download Dev-split](https://storage.googleapis.com/phrase-bert/paws_short/dev_examples.json) ] [[Download Test-split](https://storage.googleapis.com/phrase-bert/paws_short/test_examples.json) ]
Change `config/model_path.py` with the model path according to your directories and
* For evaluation on Turney, run `python eval_turney.py`
* For evaluation on BiRD, run `python eval_bird.py`
* For evaluation on PPDB / PPDB-filtered / PAWS-short, run `eval_ppdb_paws.py` with:
````
nohup python -u eval_ppdb_paws.py \
--full_run_mode \
--task <task-name> \
--data_dir <input-data-dir> \
--result_dir <result-storage-dr> \
>./output.txt 2>&1 &
````
## Train your own Phrase-BERT
If you would like to go beyond using the pre-trained Phrase-BERT model, you may train your own Phrase-BERT using data from the domain you are interested in. Please refer to
`phrase-bert/phrase_bert_finetune.py`
The datasets we used to fine-tune Phrase-BERT are here: [training data csv file](https://storage.googleapis.com/phrase-bert/phrase-bert-ft-data/pooled_context_para_triples_p%3D0.8_train.csv) and [validation data csv file](https://storage.googleapis.com/phrase-bert/phrase-bert-ft-data/pooled_context_para_triples_p%3D0.8_valid.csv).
To reproduce the trained Phrase-BERT, please run:
````
export INPUT_DATA_PATH=<directory-of-phrasebert-finetuning-data>
export TRAIN_DATA_FILE=<training-data-filename.csv>
export VALID_DATA_FILE=<validation-data-filename.csv>
export INPUT_MODEL_PATH=bert-base-nli-stsb-mean-tokens
export OUTPUT_MODEL_PATH=<directory-of-saved-model>

python -u phrase_bert_finetune.py \
    --input_data_path $INPUT_DATA_PATH \
    --train_data_file $TRAIN_DATA_FILE \
    --valid_data_file $VALID_DATA_FILE \
    --input_model_path $INPUT_MODEL_PATH \
    --output_model_path $OUTPUT_MODEL_PATH
````
## Citation:
Please cite us if you find this useful:
````
@inproceedings{phrasebertwang2021,
author={Shufan Wang and Laure Thompson and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2021",
Title={Phrase-BERT: Improved Phrase Embeddings from BERT with an Application to Corpus Exploration}
}
````
|
Roy029/distilroberta-base-finetuned-wikitext2
|
Roy029
| 2021-11-03T15:01:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 58 | 2.2650 |
| No log | 2.0 | 116 | 2.2408 |
| No log | 3.0 | 174 | 2.1696 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
KoichiYasuoka/roberta-small-japanese-aozora
|
KoichiYasuoka
| 2021-11-03T14:44:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"japanese",
"masked-lm",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- "ja"
tags:
- "japanese"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "日本に着いたら[MASK]を訪ねなさい。"
---
# roberta-small-japanese-aozora
## Model Description
This is a RoBERTa model pre-trained on Aozora Bunko (青空文庫) texts with [Japanese-LUW-Tokenizer](https://github.com/KoichiYasuoka/Japanese-LUW-Tokenizer). You can fine-tune `roberta-small-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-luw-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-small-japanese-aozora")
```
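As a quick check, the widget example above can be run through the fill-mask pipeline (a minimal sketch; it assumes the hosted tokenizer loads via `AutoTokenizer` just as in the snippet above):

```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-small-japanese-aozora")
# Same example sentence as the widget above ("When you arrive in Japan, visit [MASK].")
print(fill_mask("日本に着いたら[MASK]を訪ねなさい。"))
```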
|
Ahmad/parsT5-base
|
Ahmad
| 2021-11-03T13:47:07Z | 168 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
A monolingual T5 model for Persian, trained on the OSCAR 21.09 corpus (https://oscar-corpus.com/) with a self-supervised objective. A 35 GB deduplicated version of the Persian data was used for pre-training the model.
It is similar to the English T5 model, but just for Persian. You may need to fine-tune it on your specific task.
Example code:
```
from transformers import T5ForConditionalGeneration,AutoTokenizer
import torch
model_name = "Ahmad/parsT5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
input_ids = tokenizer.encode('دانش آموزان به <extra_id_0> میروند و <extra_id_1> میخوانند.', return_tensors='pt')
with torch.no_grad():
hypotheses = model.generate(input_ids)
for h in hypotheses:
print(tokenizer.decode(h))
```
Steps: 725000
Accuracy: 0.66
Training More?
========
To train the model further please refer to its github repository at:
https://github.com/puraminy/parsT5
|
kloon99/KML_Eula_generate_v1
|
kloon99
| 2021-11-03T10:07:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: trained_model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|
josmunpen/mt5-small-spanish-summarization
|
josmunpen
| 2021-11-03T09:47:51Z | 150 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"spanish",
"es",
"dataset:larazonpublico",
"dataset:es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- es
thumbnail:
tags:
- summarization
- mt5
- spanish
license: apache-2.0
datasets:
- larazonpublico
- es
metrics:
- rouge
widget:
- text: "La Guardia Civil ha desarticulado un grupo organizado dedicado a copiar en los examenes teoricos para la obtencion del permiso de conducir. Para ello, empleaban receptores y camaras de alta tecnologia y operaban desde la misma sede del Centro de examenes de la Direccion General de Trafico (DGT) en Mostoles. Es lo que han llamado la Operacion pinga.
El grupo desarticulado ofrecia el servicio de transporte y tecnologia para copiar y poder aprobar. Por dicho servicio cobraban 1.000 euros. Los investigadores sorprendieron in fraganti a una mujer intentando copiar en el examen. Portaba una chaqueta con dispositivos electronicos ocultos, concretamente un telefono movil al que estaba conectada una camara que habia sido insertada en la parte frontal de la chaqueta para transmitir online el examen y que orientada al ordenador del Centro de Examenes en el que aparecen las preguntas, permitia visualizar las imagenes en otro ordenador alojado en el interior de un vehiculo estacionado en las inmediaciones del centro. En este vehiculo, se encontraban el resto del grupo desarticulado con varios ordenadores portatiles y tablets abiertos y conectados a paginas de test de la DGT para consultar las respuestas. Estos, comunicaban con la mujer que estaba en el aula haciendo el examen a traves de un diminuto receptor bluetooth que portaba en el interior de su oido.
Luis de Lama, portavoz de la Guardia Civil de Trafico destaca que los ciudadanos, eran de origen chino, y copiaban en el examen utilizando la tecnologia facilitada por una organizacion. Destaca que, ademas de parte del fraude que supone copiar en un examen muchos de estos ciudadanos desconocian el idioma, no hablan ni entienden el español lo que supone un grave riesgo para la seguridad vial por desconocer las señales y letreros que avisan en carretera de muchas incidencias.
"
---
# mt5-small-spanish-summarization
## Model description
This is an mt5-small model fine-tuned to generate headlines from the bodies of Spanish news articles.
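A minimal usage sketch with the `transformers` summarization pipeline (the article text is taken from the widget example above; generation settings such as `max_length` are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="josmunpen/mt5-small-spanish-summarization")

article = (
    "La Guardia Civil ha desarticulado un grupo organizado dedicado a copiar en los examenes "
    "teoricos para la obtencion del permiso de conducir."
)
print(summarizer(article, max_length=64, truncation=True)[0]["summary_text"])
```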
## Training data
The model was trained on 58,425 news articles extracted from the La Razón (31,477) and Público (26,948) newspapers. The articles belong to the following categories: "España", "Cultura", "Economía", "Igualdad" and "Política".
## Training procedure
It was trained for 2 epochs on a Google Colab Tesla P100-PCIE-16GB GPU.
### Hyperparameters
{evaluation_strategy = "epoch",
learning_rate = 2e-4,
per_device_train_batch_size = 6,
per_device_eval_batch_size = 6,
weight_decay = 0.01,
save_total_limit = 3,
num_train_epochs = 2,
predict_with_generate = True,
fp16 = False}
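A sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` (the output directory and any setting not listed above are assumptions):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-spanish-summarization",
    evaluation_strategy="epoch",
    learning_rate=2e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=2,
    predict_with_generate=True,
    fp16=False,
)
```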
## Eval results
| metric | score |
| --- | ----- |
| rouge1 | 44.03 |
| rouge2 | 28.2900 |
| rougeL | 40.54 |
| rougeLsum | 40.5587 |
### BibTeX entry and citation info
```bibtex
@inproceedings{ mt5lrpjosmunpen,
year={2020},
}
```
|
Zichuu/spert
|
Zichuu
| 2021-11-03T04:45:41Z | 68 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# SpERT
SpERT is the relation extraction model [SpERT (Span-based Entity and Relation Transformer)](https://github.com/lavis-nlp/spert). This is the model trained on the CoNLL04 dataset.
## Use
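The published weights are a standard BERT checkpoint, so the encoder can be loaded with `transformers`; full joint entity and relation extraction, however, requires the original SpERT codebase linked above. A minimal sketch for loading the encoder (assumptions: the repo contains only model weights, so the tokenizer is taken from the base BERT model):
```python
from transformers import AutoModel, AutoTokenizer

# Load the BERT encoder weights published in this repo.
model = AutoModel.from_pretrained("Zichuu/spert")

# Assumption: tokenizer files may not be included here, so fall back to the base model.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
```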
## References
```
Markus Eberts, Adrian Ulges. Span-based Joint Entity and Relation Extraction with Transformer Pre-training. 24th European Conference on Artificial Intelligence, 2020.
```
|
mikaelsouza/msft-regular-model
|
mikaelsouza
| 2021-11-02T23:05:40Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: msft-regular-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# msft-regular-model
This model is a fine-tuned version of [](https://huggingface.co/) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3420
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 9.1224 | 0.17 | 200 | 8.0736 |
| 7.5229 | 0.34 | 400 | 7.1536 |
| 7.0122 | 0.51 | 600 | 6.9072 |
| 6.8296 | 0.69 | 800 | 6.7582 |
| 6.709 | 0.86 | 1000 | 6.6436 |
| 6.5882 | 1.03 | 1200 | 6.5563 |
| 6.4807 | 1.2 | 1400 | 6.4784 |
| 6.4172 | 1.37 | 1600 | 6.4165 |
| 6.3403 | 1.54 | 1800 | 6.3555 |
| 6.2969 | 1.71 | 2000 | 6.3107 |
| 6.2346 | 1.89 | 2200 | 6.2691 |
| 6.1767 | 2.06 | 2400 | 6.2299 |
| 6.1326 | 2.23 | 2600 | 6.1937 |
| 6.1035 | 2.4 | 2800 | 6.1602 |
| 6.0624 | 2.57 | 3000 | 6.1241 |
| 6.0393 | 2.74 | 3200 | 6.0971 |
| 5.9982 | 2.91 | 3400 | 6.0656 |
| 5.9526 | 3.08 | 3600 | 6.0397 |
| 5.9086 | 3.26 | 3800 | 6.0104 |
| 5.8922 | 3.43 | 4000 | 5.9888 |
| 5.8631 | 3.6 | 4200 | 5.9661 |
| 5.8396 | 3.77 | 4400 | 5.9407 |
| 5.8055 | 3.94 | 4600 | 5.9177 |
| 5.7763 | 4.11 | 4800 | 5.9007 |
| 5.7314 | 4.28 | 5000 | 5.8834 |
| 5.7302 | 4.46 | 5200 | 5.8620 |
| 5.6987 | 4.63 | 5400 | 5.8451 |
| 5.6754 | 4.8 | 5600 | 5.8242 |
| 5.6571 | 4.97 | 5800 | 5.8059 |
| 5.615 | 5.14 | 6000 | 5.7871 |
| 5.596 | 5.31 | 6200 | 5.7817 |
| 5.5738 | 5.48 | 6400 | 5.7570 |
| 5.5641 | 5.66 | 6600 | 5.7431 |
| 5.5503 | 5.83 | 6800 | 5.7271 |
| 5.5214 | 6.0 | 7000 | 5.7108 |
| 5.4712 | 6.17 | 7200 | 5.7018 |
| 5.48 | 6.34 | 7400 | 5.6936 |
| 5.4527 | 6.51 | 7600 | 5.6812 |
| 5.4514 | 6.68 | 7800 | 5.6669 |
| 5.4454 | 6.86 | 8000 | 5.6509 |
| 5.399 | 7.03 | 8200 | 5.6408 |
| 5.3747 | 7.2 | 8400 | 5.6327 |
| 5.3667 | 7.37 | 8600 | 5.6197 |
| 5.3652 | 7.54 | 8800 | 5.6084 |
| 5.3394 | 7.71 | 9000 | 5.5968 |
| 5.3349 | 7.88 | 9200 | 5.5870 |
| 5.2994 | 8.05 | 9400 | 5.5826 |
| 5.2793 | 8.23 | 9600 | 5.5710 |
| 5.2716 | 8.4 | 9800 | 5.5623 |
| 5.275 | 8.57 | 10000 | 5.5492 |
| 5.264 | 8.74 | 10200 | 5.5449 |
| 5.241 | 8.91 | 10400 | 5.5322 |
| 5.2285 | 9.08 | 10600 | 5.5267 |
| 5.2021 | 9.25 | 10800 | 5.5187 |
| 5.1934 | 9.43 | 11000 | 5.5158 |
| 5.1737 | 9.6 | 11200 | 5.5044 |
| 5.1774 | 9.77 | 11400 | 5.5008 |
| 5.1841 | 9.94 | 11600 | 5.4960 |
| 5.1414 | 10.11 | 11800 | 5.4895 |
| 5.1491 | 10.28 | 12000 | 5.4849 |
| 5.1184 | 10.45 | 12200 | 5.4738 |
| 5.1136 | 10.63 | 12400 | 5.4690 |
| 5.1199 | 10.8 | 12600 | 5.4598 |
| 5.1056 | 10.97 | 12800 | 5.4536 |
| 5.0648 | 11.14 | 13000 | 5.4496 |
| 5.0598 | 11.31 | 13200 | 5.4449 |
| 5.0656 | 11.48 | 13400 | 5.4422 |
| 5.0664 | 11.65 | 13600 | 5.4367 |
| 5.0675 | 11.83 | 13800 | 5.4286 |
| 5.0459 | 12.0 | 14000 | 5.4249 |
| 5.0073 | 12.17 | 14200 | 5.4260 |
| 5.0229 | 12.34 | 14400 | 5.4175 |
| 5.0079 | 12.51 | 14600 | 5.4119 |
| 5.0 | 12.68 | 14800 | 5.4194 |
| 5.0094 | 12.85 | 15000 | 5.4068 |
| 4.9967 | 13.02 | 15200 | 5.3995 |
| 4.9541 | 13.2 | 15400 | 5.4002 |
| 4.9753 | 13.37 | 15600 | 5.3965 |
| 4.9732 | 13.54 | 15800 | 5.3925 |
| 4.9624 | 13.71 | 16000 | 5.3888 |
| 4.9559 | 13.88 | 16200 | 5.3824 |
| 4.9559 | 14.05 | 16400 | 5.3851 |
| 4.9109 | 14.22 | 16600 | 5.3815 |
| 4.9211 | 14.4 | 16800 | 5.3784 |
| 4.9342 | 14.57 | 17000 | 5.3735 |
| 4.9271 | 14.74 | 17200 | 5.3711 |
| 4.9328 | 14.91 | 17400 | 5.3646 |
| 4.8994 | 15.08 | 17600 | 5.3664 |
| 4.8932 | 15.25 | 17800 | 5.3642 |
| 4.8886 | 15.42 | 18000 | 5.3620 |
| 4.8997 | 15.6 | 18200 | 5.3584 |
| 4.8846 | 15.77 | 18400 | 5.3551 |
| 4.8993 | 15.94 | 18600 | 5.3516 |
| 4.8648 | 16.11 | 18800 | 5.3552 |
| 4.8838 | 16.28 | 19000 | 5.3512 |
| 4.8575 | 16.45 | 19200 | 5.3478 |
| 4.8623 | 16.62 | 19400 | 5.3480 |
| 4.8631 | 16.8 | 19600 | 5.3439 |
| 4.8576 | 16.97 | 19800 | 5.3428 |
| 4.8265 | 17.14 | 20000 | 5.3420 |
| 4.8523 | 17.31 | 20200 | 5.3410 |
| 4.8477 | 17.48 | 20400 | 5.3396 |
| 4.8507 | 17.65 | 20600 | 5.3380 |
| 4.8498 | 17.82 | 20800 | 5.3333 |
| 4.8261 | 17.99 | 21000 | 5.3342 |
| 4.8201 | 18.17 | 21200 | 5.3324 |
| 4.8214 | 18.34 | 21400 | 5.3341 |
| 4.8195 | 18.51 | 21600 | 5.3315 |
| 4.8216 | 18.68 | 21800 | 5.3335 |
| 4.8243 | 18.85 | 22000 | 5.3291 |
| 4.832 | 19.02 | 22200 | 5.3295 |
| 4.8085 | 19.19 | 22400 | 5.3309 |
| 4.8094 | 19.37 | 22600 | 5.3283 |
| 4.815 | 19.54 | 22800 | 5.3280 |
| 4.8219 | 19.71 | 23000 | 5.3270 |
| 4.8117 | 19.88 | 23200 | 5.3280 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
chisadi/nice-distilbert-v2
|
chisadi
| 2021-11-02T19:21:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
### DistilBERT model fine-tuned to classify product descriptions into one of 45 broad [NICE classifications](https://www.wipo.int/classifications/nice/en/)
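A minimal usage sketch with the text-classification pipeline (the product description is illustrative; the returned label corresponds to one of the 45 NICE classes):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="chisadi/nice-distilbert-v2")
print(classifier("Handheld electric drill with rechargeable battery"))
```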
|
huggingartists/phish
|
huggingartists
| 2021-11-02T19:07:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/phish",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/phish
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/df85b83684e95f87794aa09580ee0463.919x919x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Phish</div>
<a href="https://genius.com/artists/phish">
<div style="text-align: center; font-size: 14px;">@phish</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Phish.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/phish).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/phish")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/22sghxz4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Phish's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/340yi6e5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/340yi6e5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/phish')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/phish")
model = AutoModelWithLMHead.from_pretrained("huggingartists/phish")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
jambo/marker-associations-binary-base
|
jambo
| 2021-11-02T12:52:24Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:marker-associations-binary-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- marker-associations-binary-base
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: marker-associations-binary-base
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: marker-associations-binary-base
type: marker-associations-binary-base
metrics:
- name: Precision
type: precision
value: 0.7981651376146789
- name: Recall
type: recall
value: 0.9560439560439561
- name: F1
type: f1
value: 0.87
- name: Accuracy
type: accuracy
value: 0.8884120171673819
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marker-associations-binary-base
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the marker-associations-binary-base dataset.
It achieves the following results on the evaluation set:
### Gene Results
- Precision = 0.808
- Recall = 0.940
- F1 = 0.869
- Accuracy = 0.862
- AUC = 0.944
### Chemical Results
- Precision = 0.774
- Recall = 1.0
- F1 = 0.873
- Accuracy = 0.926
- AUC = 0.964
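No usage snippet is included; a minimal sketch with the standard text-classification pipeline might look like the following (the example sentence is illustrative, and the returned label names depend on the model's configuration):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="jambo/marker-associations-binary-base")
print(classifier("BRCA1 mutations are associated with an increased risk of breast cancer."))
```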
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:------:|
| No log | 1.0 | 88 | 0.3266 | 0.8191 | 0.8462 | 0.8324 | 0.8670 | 0.9313 |
| No log | 2.0 | 176 | 0.3335 | 0.7870 | 0.9341 | 0.8543 | 0.8755 | 0.9465 |
| No log | 3.0 | 264 | 0.4243 | 0.7982 | 0.9560 | 0.87 | 0.8884 | 0.9516 |
| No log | 4.0 | 352 | 0.5388 | 0.825 | 0.7253 | 0.7719 | 0.8326 | 0.9384 |
| No log | 5.0 | 440 | 0.7101 | 0.8537 | 0.7692 | 0.8092 | 0.8584 | 0.9416 |
| 0.1824 | 6.0 | 528 | 0.6175 | 0.8242 | 0.8242 | 0.8242 | 0.8627 | 0.9478 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Tokenizers 0.10.3
|
abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased
|
abhijithneilabraham
| 2021-11-02T12:23:53Z | 162 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
model = AutoModel.from_pretrained('abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=abhijithneilabraham/stsb_multi_mt_distilbert-base-uncased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 25,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 900,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
z-uo/it5-squadv1-it
|
z-uo
| 2021-11-01T19:49:46Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text2text_generation",
"question_answering",
"it",
"dataset:z-uo/squad-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- text2text_generation
- question_answering
language:
- it
model-index:
- name: it5-squadv1-it
results: []
datasets:
- z-uo/squad-it
---
# Question and Answer with Italian T5
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) (pre-trained on the [Thoroughly Cleaned Italian mC4 Corpus](https://huggingface.co/datasets/gsarti/clean_mc4_it), ~41B words, ~275GB) on the [squad-it](https://huggingface.co/datasets/z-uo/squad-it) dataset.
To use it, provide the question and the context in the same string, for example:
```
In quale anno si è verificato il terremoto nel Sichuan?
Il terremoto del Sichuan del 2008 o il terremoto del Gran Sichuan, misurato a 8.0 Ms e 7.9 Mw, e si è verificato alle 02:28:01 PM China Standard Time all' epicentro (06:28:01 UTC) il 12 maggio nella provincia del Sichuan, ha ucciso 69.197 persone e lasciato 18.222 dispersi.
```
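A minimal inference sketch using the `transformers` text2text-generation pipeline (the pipeline call and generation defaults are assumptions based on the model type shown in the tags):
```python
from transformers import pipeline

qa = pipeline("text2text-generation", model="z-uo/it5-squadv1-it")

question = "In quale anno si è verificato il terremoto nel Sichuan?"
context = (
    "Il terremoto del Sichuan del 2008 o il terremoto del Gran Sichuan, misurato a 8.0 Ms e 7.9 Mw, "
    "e si è verificato alle 02:28:01 PM China Standard Time all' epicentro (06:28:01 UTC) il 12 maggio "
    "nella provincia del Sichuan, ha ucciso 69.197 persone e lasciato 18.222 dispersi."
)

# Question and context are passed in the same string, as described above.
print(qa(question + " " + context)[0]["generated_text"])
```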
The train achieves the following results/params:
- epoch: 2.0
- train_loss: 0.1064
- train_samples: 87599
- eval_samples : 10570
- eval_gen_len : 9.2974
- eval_loss : 0.5939
- eval_rouge1 : 17.5052
- eval_rouge2 : 5.8714
- eval_rougeL : 17.4487
- eval_rougeLsum : 17.4528
# Train the model
To train the model, use [this repo](https://gitlab.com/nicolalandro/qandatrain); it contains the requirements.txt and the source code needed to reproduce the training.
|
Jungwoo/distilbert-base-uncased-finetuned-cola
|
Jungwoo
| 2021-11-01T19:03:45Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.541356878970505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7470
- Matthews Correlation: 0.5414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5237 | 1.0 | 535 | 0.5327 | 0.4248 |
| 0.347 | 2.0 | 1070 | 0.5105 | 0.5239 |
| 0.2344 | 3.0 | 1605 | 0.6639 | 0.5224 |
| 0.1672 | 4.0 | 2140 | 0.7470 | 0.5414 |
| 0.1228 | 5.0 | 2675 | 0.8352 | 0.5377 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
mvonwyl/roberta-base-finetuned-squad2
|
mvonwyl
| 2021-11-01T17:51:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: roberta-base-finetuned-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9325
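The card does not include a usage example; a minimal sketch with the question-answering pipeline (the question and context are illustrative) might look like this:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mvonwyl/roberta-base-finetuned-squad2")
print(qa(question="What dataset was the model fine-tuned on?",
         context="This model is a fine-tuned version of roberta-base on the squad_v2 dataset."))
```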
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.88 | 1.0 | 8160 | 0.8129 |
| 0.6643 | 2.0 | 16320 | 0.8567 |
| 0.5096 | 3.0 | 24480 | 0.9325 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
tiennvcs/layoutlmv2-base-uncased-finetuned-infovqa
|
tiennvcs
| 2021-11-01T16:13:10Z | 12 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased-finetuned-infovqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased-finetuned-infovqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 250500
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8677 | 0.16 | 500 | 3.2829 |
| 3.0395 | 0.33 | 1000 | 2.8431 |
| 2.561 | 0.49 | 1500 | 2.5633 |
| 2.41 | 0.65 | 2000 | 2.3548 |
| 2.247 | 0.82 | 2500 | 2.2983 |
| 2.1538 | 0.98 | 3000 | 2.2059 |
| 1.7 | 1.14 | 3500 | 2.2006 |
| 1.5705 | 1.31 | 4000 | 2.2736 |
| 1.604 | 1.47 | 4500 | 2.1415 |
| 1.5509 | 1.63 | 5000 | 2.0853 |
| 1.5053 | 1.79 | 5500 | 2.1389 |
| 1.4787 | 1.96 | 6000 | 2.0870 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
PolyakovMaxim/ModelGptTS
|
PolyakovMaxim
| 2021-11-01T11:46:06Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
This model generates the time-shift texts of the Norbit Company, and it can also generate continuations for arbitrary phrases, like the base GPT model.
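A minimal text-generation sketch (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="PolyakovMaxim/ModelGptTS")

# Illustrative prompt; replace with a shift-report opening in the model's training language.
print(generator("The report for today's shift:", max_length=50, num_return_sequences=1))
```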
|
pere/norwegian-gpt2-social
|
pere
| 2021-11-01T11:01:55Z | 26 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"norwegian",
"GPT2",
"casual language modeling",
"no",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- casual language modeling
---
# Norwegian GPT-2 - Social
## Description
An experimental Norwegian GPT-2 model trained on a 37GB, mainly social-media corpus.
The following sub-corpora are used:
```bash
wikipedia_download_nb.jsonl
wikipedia_download_nn.jsonl
newspapers_online_nb.jsonl
newspapers_online_nn.jsonl
twitter_2016_2018_no.jsonl
twitter_news_2016_2018_no.jsonl
open_subtitles_no.jsonl
facebook_no.jsonl
reddit_no.jsonl
vgdebatt_no.jsonl
```
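A minimal text-generation sketch (the Norwegian prompt and generation settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pere/norwegian-gpt2-social")
print(generator("Jeg mener at", max_length=60, num_return_sequences=3))
```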
|
NhatPham/wav2vec2-base-finetuned-ks
|
NhatPham
| 2021-11-01T04:32:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1258
- Accuracy: 0.9793
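No usage snippet is provided; a minimal sketch with the audio-classification pipeline (the file path is a placeholder for a short 16 kHz speech clip):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="NhatPham/wav2vec2-base-finetuned-ks")
print(classifier("sample.wav"))  # placeholder path to a short 16 kHz speech clip
```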
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1561 | 1.0 | 399 | 1.1127 | 0.6643 |
| 0.4803 | 2.0 | 798 | 0.3547 | 0.9687 |
| 0.2855 | 3.0 | 1197 | 0.1663 | 0.9763 |
| 0.1987 | 4.0 | 1596 | 0.1258 | 0.9793 |
| 0.2097 | 5.0 | 1995 | 0.1171 | 0.9791 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Philipuss/GPT-Macbeth
|
Philipuss
| 2021-11-01T02:16:42Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
### **GPT-Macbeth**
A custom finetune of GPT-2 trained on a custom dataset of Victorian literature.
## Information
The goal of this finetune is to output high-quality Victorian literature while being customizable with an Author's Note and light to run (i.e. not a GPT-Neo or GPT-Jax finetune, for now at least).
## Author's Note
Author's Note was added manually, so please appreciate it. :)
The format of it is [ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]
Some words will work well, some won't. Please make sure to have spaces before each ][.
Most popular Victorian authors should work, but keep in mind that some authors (e.g. Mark Twain) will result in somewhat weird behavior due to a quirk in the dataset that will be addressed in the next version of the finetune.
When it comes to the genres, "novel", "fiction", "horror" and "romance" work best, but from playing around with it, I've noticed that most other not too specific genres work pretty well too.
The tags are a bit complicated. Adding "normal" will result in a story without anything special (like no magic or fantasy element) and tends to be pretty low-pace. Using "real-life" will push the AI towards a historical/biographical path. Almost all tags should work. Using "man" or "woman" is supposed to semi-determine what gender the main character is, but it heavily depends on the chosen author.
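As an illustration of the Author's Note format described above, a minimal text-generation sketch (the continuation prompt and generation settings are illustrative; see the notes below on keeping the temperature low and the repetition penalty high):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Philipuss/GPT-Macbeth")

# Author's Note in the format described above, followed by an illustrative opening line.
prompt = (
    "[ Author: George Eliot; Genre: Horror, fantasy, novel; Tags: scary, magical, victorian ]\n"
    "The old house stood silent at the end of the lane."
)
print(generator(prompt, max_length=120, do_sample=True, temperature=0.4,
                repetition_penalty=1.3)[0]["generated_text"])
```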
## History
Version 0 - This was the first test version of the finetune, trained on GPT-2-small and with a really small dataset. The name was GPT-Kelini before it was renamed to GPT-Macbeth in V1.
Version 1 - The current version of the finetune. Trained on GPT-2-medium with a much, much bigger dataset compared to V0. Supports Author's Note
### Notes
Please use a very low temperature/randomness when using it, if you want to get anything out of it. Pumping the repetition penalty up helps a lot too.
The model was specifically converted to PyTorch so that most front-end GUI's should run it. It has been only tested on KoboldAI, but should theoretically work on others too.
For some odd reason, my finetune is capable of writing Victorian NSFW content if used the right way. No NSFW content was in the dataset, and considering the size of the model, it's really odd to see it do so. Perhaps the countless romantic novels in the dataset had something naughty in them, but I highly doubt it.
You may sometimes get Roman numerals on random occasions; this shouldn't happen often, but if it does, it's again something that will be (manually, unfortunately) addressed in the next version of the finetune.
If you are wondering why I renamed my finetune to Macbeth, there are a few reasons: first, it sounds much better and smoother than Kelini; second, it's a play by Shakespeare that closely matches the writing style of some of the authors in my dataset; and third, the most important reason, it was mentioned in Hamilton. So yes, my love of Hamilton is bleeding everywhere, and yes, the next version of the dataset will try to have a Hamilton easter egg featuring the Author's Note.
### Credits
I want to thank HuggingFace for their tokenizer and everything they've done to make everything easier. Then there is OpenAI for making GPT-2. I also want to thank the most active people on the AIM Discord server in the community-projects channel. Thanks to Bran for finding a way to convert checkpoints to a PyTorch model, thanks to Mr. Seeker and Aedial for helping me clean the dataset, and to *finetune* from the NovelAI team for perhaps making my finetune's output much better quality by telling me about the magic of the <\|endoftext\|> token.
P.S. If you happen to use it in something commercial or in an online demo or in any other way that is not for personal use, a credit will be greatly appreciated (and if you do something exciting with it, make sure to let me know, I'd be more than happy to see it being used by someone!).
|
huggingtweets/staticmeganito
|
huggingtweets
| 2021-11-01T01:13:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/staticmeganito/1635729212511/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1453022424610525186/0AbfRVqP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">megan ito</div>
<div style="text-align: center; font-size: 14px;">@staticmeganito</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from megan ito.
| Data | megan ito |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 137 |
| Short tweets | 416 |
| Tweets kept | 2695 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2w99u9jm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @staticmeganito's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ss7y2ip) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ss7y2ip/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/staticmeganito')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/_f1rewalker_-staticmeganito
|
huggingtweets
| 2021-10-31T23:56:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421614250116763648/1kZwzXTB_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1453022424610525186/0AbfRVqP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">PARKER MACMILLAN I & megan ito</div>
<div style="text-align: center; font-size: 14px;">@_f1rewalker_-staticmeganito</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from PARKER MACMILLAN I & megan ito.
| Data | PARKER MACMILLAN I | megan ito |
| --- | --- | --- |
| Tweets downloaded | 2420 | 3248 |
| Retweets | 8 | 137 |
| Short tweets | 297 | 416 |
| Tweets kept | 2115 | 2695 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1avcuseb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_f1rewalker_-staticmeganito's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3hsk5egr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3hsk5egr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_f1rewalker_-staticmeganito')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/2wyatt2mason
|
huggingtweets
| 2021-10-31T23:45:40Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/2wyatt2mason/1635723936956/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1441261735004966923/Slec8aEM_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">di!!! 🎮🕹️🎤</div>
<div style="text-align: center; font-size: 14px;">@2wyatt2mason</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from di!!! 🎮🕹️🎤.
| Data | di!!! 🎮🕹️🎤 |
| --- | --- |
| Tweets downloaded | 389 |
| Retweets | 11 |
| Short tweets | 49 |
| Tweets kept | 329 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/26ny09im/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @2wyatt2mason's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1rslzcw9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1rslzcw9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/2wyatt2mason')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/dril-kanyewest-ph4370n
|
huggingtweets
| 2021-10-31T21:42:34Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/dril-kanyewest-ph4370n/1635716550756/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1404915829427212289/9npX2HXW_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/847818629840228354/VXyQHfn0_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lexi & wint & ye</div>
<div style="text-align: center; font-size: 14px;">@dril-kanyewest-ph4370n</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lexi & wint & ye.
| Data | lexi | wint | ye |
| --- | --- | --- | --- |
| Tweets downloaded | 2679 | 3226 | 1856 |
| Retweets | 1274 | 468 | 186 |
| Short tweets | 199 | 319 | 573 |
| Tweets kept | 1206 | 2439 | 1097 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g14a01v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-kanyewest-ph4370n's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gh1q6ja) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gh1q6ja/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-kanyewest-ph4370n')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Trixzy/rickai-v1
|
Trixzy
| 2021-10-31T20:17:36Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
A Rick chatbot made with GPT-2, based on the character from the show Rick and Morty. A Discord bot is available now!
https://discord.com/oauth2/authorize?client_id=894569097818431519&permissions=1074113536&scope=bot
(v1 is no longer supported with RickBot)
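For local use outside the Discord bot, a minimal text-generation sketch (the prompt format is an assumption):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Trixzy/rickai-v1")
print(generator("Morty: Rick, what are we doing today?\nRick:", max_length=60)[0]["generated_text"])
```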
|
huggingtweets/ph4370n
|
huggingtweets
| 2021-10-31T18:55:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/ph4370n/1635706503727/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1404915829427212289/9npX2HXW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">lexi</div>
<div style="text-align: center; font-size: 14px;">@ph4370n</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from lexi.
| Data | lexi |
| --- | --- |
| Tweets downloaded | 2674 |
| Retweets | 1269 |
| Short tweets | 199 |
| Tweets kept | 1206 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2oj3ctzo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ph4370n's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/yjm8doqr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/yjm8doqr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ph4370n')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ttop324/wav2vec2-live-japanese
|
ttop324
| 2021-10-31T15:34:55Z | 14 | 4 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ja",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: ja
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-live-japanese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Japanese
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 21.48%
- name: Test CER
type: cer
value: 9.82%
---
# wav2vec2-live-japanese
https://github.com/ttop32/wav2vec2-live-japanese-translator
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese hiragana using the following datasets:
- [common_voice](https://huggingface.co/datasets/common_voice)
- [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut)
- [CSS10](https://github.com/Kyubyong/css10)
- [TEDxJP-10K](https://github.com/laboroai/TEDxJP-10K)
- [JVS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus)
- [JSSS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jsss_corpus)
## Inference
```python
#usage
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
model = Wav2Vec2ForCTC.from_pretrained("ttop324/wav2vec2-live-japanese")
processor = Wav2Vec2Processor.from_pretrained("ttop324/wav2vec2-live-japanese")
test_dataset = load_dataset("common_voice", "ja", split="test")
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.functional.resample(speech_array, sampling_rate, 16000)[0].numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import pykakasi
import MeCab
wer = load_metric("wer")
cer = load_metric("cer")
model = Wav2Vec2ForCTC.from_pretrained("ttop324/wav2vec2-live-japanese").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("ttop324/wav2vec2-live-japanese")
test_dataset = load_dataset("common_voice", "ja", split="test")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\�‘、。.!,・―─~「」『』\\\\※\[\]\{\}「」〇?…]'
wakati = MeCab.Tagger("-Owakati")
kakasi = pykakasi.kakasi()
kakasi.setMode("J","H") # kanji to hiragana
kakasi.setMode("K","H") # katakana to hiragana
conv = kakasi.getConverter()
FULLWIDTH_TO_HALFWIDTH = str.maketrans(
' 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!゛#$%&()*+、ー。/:;〈=〉?@[]^_‘{|}~',
' 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&()*+,-./:;<=>?@[]^_`{|}~',
)
def fullwidth_to_halfwidth(s):
return s.translate(FULLWIDTH_TO_HALFWIDTH)
def preprocessData(batch):
batch["sentence"] = fullwidth_to_halfwidth(batch["sentence"])
batch["sentence"] = re.sub(chars_to_ignore_regex,' ', batch["sentence"]).lower() #remove special char
batch["sentence"] = wakati.parse(batch["sentence"]) #add space
batch["sentence"] = conv.do(batch["sentence"]) #covert to hiragana
batch["sentence"] = " ".join(batch["sentence"].split())+" " #remove multiple space
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = torchaudio.functional.resample(speech_array, sampling_rate, 16000)[0].numpy()
return batch
test_dataset = test_dataset.map(preprocessData)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
|
huggingartists/linkin-park
|
huggingartists
| 2021-10-30T14:56:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/linkin-park",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/linkin-park
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/a865aac7693c39977b9b402dc364908e.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Linkin Park</div>
<a href="https://genius.com/artists/linkin-park">
<div style="text-align: center; font-size: 14px;">@linkin-park</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Linkin Park.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/linkin-park).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/linkin-park")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/3mtr0u4z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Linkin Park's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/fxn4brd6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/linkin-park')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/linkin-park")
model = AutoModelWithLMHead.from_pretrained("huggingartists/linkin-park")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
huggingtweets/rufandom
|
huggingtweets
| 2021-10-30T09:37:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/rufandom/1635586623585/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1375014984799944705/bcaZBnKn_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Грейс| Мультифандом✨</div>
<div style="text-align: center; font-size: 14px;">@rufandom</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Грейс| Мультифандом✨.
| Data | Грейс\| Мультифандом✨ |
| --- | --- |
| Tweets downloaded | 977 |
| Retweets | 549 |
| Short tweets | 15 |
| Tweets kept | 413 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wthxx9x/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rufandom's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10tid4s1) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10tid4s1/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rufandom')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mys/bert-base-turkish-cased-nli-mean
|
mys
| 2021-10-30T05:22:32Z | 117 | 2 |
transformers
|
[
"transformers",
"tf",
"bert",
"feature-extraction",
"arxiv:2004.14963",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
## Acknowledgement
Google supported this work by providing Google Cloud credit. Thank you, Google, for supporting open source! 🎉
## What is this?
This model is a finetuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) to be used in zero-shot tasks in Turkish. It is finetuned with an NLI task by using `sentence-transformers` and uses `mean` of the token embeddings as the aggregation function. I also converted it to TensorFlow with the aggregation function rewritten in TF to use it in [my `ai-aas` repo on GitHub](https://github.com/monatis/ai-aas) for production-grade deployment, but a simple usage example is as follows:
## Usage
```python
import time
import tensorflow as tf
from transformers import TFAutoModel, AutoTokenizer
texts = ["Galatasaray, bu akşamki maçın ardından şampiyonluğunu ilan etmeye hazırlanıyor."]
labels = ["spor", "siyaset", "kültür"]
model_name = 'mys/bert-base-turkish-cased-nli-mean'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModel.from_pretrained(model_name)
def label_text(model, tokenizer, texts, labels):
texts_length = len(texts)
tokens = tokenizer(texts + labels, padding=True, return_tensors='tf')
embs = model(**tokens)[0]
attention_masks = tf.cast(tokens['attention_mask'], tf.float32)
sample_length = tf.reduce_sum(attention_masks, axis=-1, keepdims=True)
masked_embs = embs * tf.expand_dims(attention_masks, axis=-1)
masked_embs = tf.reduce_sum(masked_embs, axis=1) / tf.cast(sample_length, tf.float32)
dists = tf.experimental.numpy.inner(masked_embs[:texts_length], masked_embs[texts_length:])
scores = tf.nn.softmax(dists)
results = list(zip(labels, scores.numpy().squeeze().tolist()))
sorted_results = sorted(results, key=lambda x: x[1], reverse=True)
sorted_results = [{"label": label, "score": f"{score:.4f}"} for label, score in sorted_results]
return sorted_results
start = time.time()
sorted_results = label_text(model, tokenizer, texts, labels)
elapsed = time.time() - start
print(sorted_results)
print(f"Processed in {elapsed:.2f} secs")
```
Output:
```shell
[{'label': 'spor', 'score': '1.0000'}, {'label': 'siyaset', 'score': '0.0000'}, {'label': 'kültür', 'score': '0.0000'}]
Processed in 0.22 secs
```
## How it works
The `label_text()` function runs the BERT model on the concatenation of `texts` and `labels`, and aggregates the per-token hidden states output by the BERT model into a single vector per sequence. Then, the inner product of text embeddings and label embeddings is calculated as the similarity metric, and `softmax` is applied to convert these values to probabilities.
## Dataset
>[Emrah Budur](https://scholar.google.com/citations?user=zSNd03UAAAAJ), [Rıza Özçelik](https://www.cmpe.boun.edu.tr/~riza.ozcelik), [Tunga Güngör](https://www.cmpe.boun.edu.tr/~gungort/) and [Christopher Potts](https://web.stanford.edu/~cgpotts). 2020.
Data and Representation for Turkish Natural Language Inference. To appear in Proceedings of EMNLP. [[pdf]](https://arxiv.org/abs/2004.14963) [[bib]](https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/nli-tr.bib)
```
@inproceedings{budur-etal-2020-data,
title = "Data and Representation for Turkish Natural Language Inference",
author = "Budur, Emrah and
\"{O}z\c{c}elik, R{\i}za and
G\"{u}ng\"{o}r, Tunga",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics"
}
```
|
huggingtweets/elonmusk-kanyewest
|
huggingtweets
| 2021-10-29T17:29:10Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/elonmusk-kanyewest/1635528546431/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442634650703237120/mXIcYtIs_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & ye</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-kanyewest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & ye.
| Data | Elon Musk | ye |
| --- | --- | --- |
| Tweets downloaded | 3249 | 1856 |
| Retweets | 185 | 186 |
| Short tweets | 853 | 573 |
| Tweets kept | 2211 | 1097 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ceinvzc/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-kanyewest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/16csk8qn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/16csk8qn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-kanyewest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/incharmuese-sadsocrates-vvangone
|
huggingtweets
| 2021-10-29T15:35:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/incharmuese-sadsocrates-vvangone/1635521727120/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/581592941124153346/5nfUJyU2_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/561419401145376768/7OIwxUCC_400x400.jpeg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190256978007904257/TsXH7_nP_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Charmeuse & Sad Socrates & Vincent Van Gone</div>
<div style="text-align: center; font-size: 14px;">@incharmuese-sadsocrates-vvangone</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Charmeuse & Sad Socrates & Vincent Van Gone.
| Data | Charmeuse | Sad Socrates | Vincent Van Gone |
| --- | --- | --- | --- |
| Tweets downloaded | 3238 | 3197 | 3233 |
| Retweets | 1165 | 40 | 1054 |
| Short tweets | 248 | 161 | 266 |
| Tweets kept | 1825 | 2996 | 1913 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13ochftk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @incharmuese-sadsocrates-vvangone's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/173sb7ob/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/incharmuese-sadsocrates-vvangone')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/cnn-elonmusk-kanyewest
|
huggingtweets
| 2021-10-29T15:21:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1276461929934942210/cqNhNk6v_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442634650703237120/mXIcYtIs_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1278259160644227073/MfCyF7CG_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ye & Elon Musk & CNN</div>
<div style="text-align: center; font-size: 14px;">@cnn-elonmusk-kanyewest</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ye & Elon Musk & CNN.
| Data | ye | Elon Musk | CNN |
| --- | --- | --- | --- |
| Tweets downloaded | 1856 | 3250 | 3250 |
| Retweets | 186 | 186 | 104 |
| Short tweets | 573 | 853 | 18 |
| Tweets kept | 1097 | 2211 | 3128 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ehxjxud/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cnn-elonmusk-kanyewest's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1dcouz7e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1dcouz7e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cnn-elonmusk-kanyewest')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Monsia/camembert-fr-covid-tweet-classification
|
Monsia
| 2021-10-29T15:17:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"classification",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- fr
tags:
- classification
license: apache-2.0
metrics:
- accuracy
widget:
- text: "tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les 'ont dit'..."
---
# camembert-fr-covid-tweet-classification
This model is a fine-tuned checkpoint of [Yanzhu/bertweetfr-base](https://huggingface.co/Yanzhu/bertweetfr-base), fine-tuned on a French COVID-19 tweet topic-classification dataset.
It reaches an accuracy of 66.00% on the dev set.
In this dataset, given a tweet, the goal was to infer its underlying topic by choosing among five topic classes:
- chiffres: the tweet talks about COVID-19 statistics.
- mesures: the tweet talks about measures taken by the government against COVID-19.
- opinions: the tweet talks about people's opinions, such as fake news.
- symptomes: the tweet talks about symptoms or variants of COVID-19.
- divers: anything else.
# Pipelining the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
tokenizer = AutoTokenizer.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
model = AutoModelForSequenceClassification.from_pretrained("Monsia/camembert-fr-covid-tweet-classification")
nlp_topic_classif = pipeline('text-classification', model=model, tokenizer=tokenizer)
nlp_topic_classif("tchai on est morts. on va se faire vacciner et ils vont contrôler comme les marionnettes avec des fils. d'après les '' ont dit ''...")
# Output: [{'label': 'opinions', 'score': 0.831}]
```
|
huggingtweets/thewenbo
|
huggingtweets
| 2021-10-29T14:01:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/993188507547037697/AMn40mi2_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Wenbo Chen</div>
<div style="text-align: center; font-size: 14px;">@thewenbo</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Wenbo Chen.
| Data | Wenbo Chen |
| --- | --- |
| Tweets downloaded | 2025 |
| Retweets | 142 |
| Short tweets | 1223 |
| Tweets kept | 660 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/waemeu18/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thewenbo's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3g74hagb) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3g74hagb/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thewenbo')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/yierpaen
|
huggingtweets
| 2021-10-29T14:00:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/yierpaen/1635516027908/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1428772517347479552/fT9QUaOy_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Erpan Pardon</div>
<div style="text-align: center; font-size: 14px;">@yierpaen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Erpan Pardon.
| Data | Erpan Pardon |
| --- | --- |
| Tweets downloaded | 3025 |
| Retweets | 2613 |
| Short tweets | 106 |
| Tweets kept | 306 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jk3rfqi/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @yierpaen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/y2mm5kxj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/y2mm5kxj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/yierpaen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
furyhawk/t5-small-finetuned-bbc
|
furyhawk
| 2021-10-29T11:01:51Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-bbc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-bbc
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3238
- Rouge1: 21.2266
- Rouge2: 16.0927
- Rougel: 19.6785
- Rougelsum: 19.8849
- Gen Len: 19.0
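As a usage sketch (not part of the auto-generated card), the checkpoint should load with the standard summarization pipeline; the article text and generation lengths below are placeholders:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
summarizer = pipeline("summarization", model="furyhawk/t5-small-finetuned-bbc")

article = "Replace this placeholder with a BBC-style news article to summarise."
# max_length / min_length are illustrative, not tuned values.
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```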
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.4882 | 1.0 | 1001 | 0.3238 | 21.2266 | 16.0927 | 19.6785 | 19.8849 | 19.0 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.10.0
- Datasets 1.14.0
- Tokenizers 0.10.3
|
thomaszz/distilbert-base-uncased-finetuned-ner
|
thomaszz
| 2021-10-29T09:51:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9244616234124793
- name: Recall
type: recall
value: 0.9364582168027744
- name: F1
type: f1
value: 0.9304212515282871
- name: Accuracy
type: accuracy
value: 0.9833987322668276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9245
- Recall: 0.9365
- F1: 0.9304
- Accuracy: 0.9834
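A minimal inference sketch (not from the original card); the example sentence is illustrative:
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces back into whole entities.
ner = pipeline(
    "token-classification",
    model="thomaszz/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```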
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2377 | 1.0 | 878 | 0.0711 | 0.9176 | 0.9254 | 0.9215 | 0.9813 |
| 0.0514 | 2.0 | 1756 | 0.0637 | 0.9213 | 0.9346 | 0.9279 | 0.9831 |
| 0.031 | 3.0 | 2634 | 0.0623 | 0.9245 | 0.9365 | 0.9304 | 0.9834 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
shiqing/opus-mt-en-zh-finetuned-en-to-zh
|
shiqing
| 2021-10-29T08:38:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-en-zh-finetuned-en-to-zh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-zh-finetuned-en-to-zh
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the None dataset.
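For inference, a hedged sketch (assuming the fine-tuned Marian checkpoint behaves like the base en-zh model with the translation pipeline):
```python
from transformers import pipeline

translator = pipeline("translation_en_to_zh", model="shiqing/opus-mt-en-zh-finetuned-en-to-zh")
print(translator("Machine translation is surprisingly useful.")[0]["translation_text"])
```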
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 10 | 4.0166 | 1.3628 | 416.6867 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cpu
- Datasets 1.14.0
- Tokenizers 0.10.3
|
aguilara42/openl3-labeler-w-timestamps
|
aguilara42
| 2021-10-29T01:38:54Z | 0 | 1 | null |
[
"audacity",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- audacity
inference: false
---
# Labeler With Timestamps
## Being used for the `Audio Labeler` effect in Audacity
This is an audio labeler model, which is used in Audacity's labeler effect.
metadata:
```
{
"sample_rate": 48000,
"domain_tags": ["Music"],
"tags": ["Audio Labeler"],
"effect_type": "waveform-to-labels",
"multichannel": false,
"labels": ["Acoustic Guitar", "Auxiliary Percussion", "Brass", "Clean Electric Guitar", "Distorted Electric Guitar", "Double Bass", "Drum Set", "Electric Bass", "Flute", "piano", "Reeds", "Saxophone", "Strings", "Trumpet", "Voice"],
"short_description": "Use me to label some instruments!",
"long_description": "An audio labeler, which outputs label predictions and time ranges for the labels. This model can label various instruments listed in the labels section."
}
```
|
bochaowei/t5-small-finetuned-cnn-wei1
|
bochaowei
| 2021-10-28T20:24:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn-wei1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 41.1796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn-wei1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6819
- Rouge1: 41.1796
- Rouge2: 18.9426
- Rougel: 29.2338
- Rougelsum: 38.4087
- Gen Len: 72.7607
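A generation sketch with the low-level API (not from the original card); whether a `"summarize: "` prefix was used during fine-tuning is an assumption, and the input text is a placeholder:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bochaowei/t5-small-finetuned-cnn-wei1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The "summarize: " prefix follows the standard T5 convention (assumption, see above).
text = "summarize: " + "Replace this placeholder with a CNN/DailyMail-style article."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=80, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```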
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8582 | 1.0 | 23927 | 1.6819 | 41.1796 | 18.9426 | 29.2338 | 38.4087 | 72.7607 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
patrickvonplaten/sew-d-small-100k-ft-timit-2
|
patrickvonplaten
| 2021-10-28T15:51:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-d-small-100k-ft-timit-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-small-100k-ft-timit-2
This model is a fine-tuned version of [asapp/sew-d-small-100k](https://huggingface.co/asapp/sew-d-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7357
- Wer: 0.7935
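A transcription sketch (not from the original card), assuming the checkpoint ships its processor and the audio file is a 16kHz mono recording:
```python
from transformers import pipeline

# The ASR pipeline wraps SEWDForCTC and its processor.
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/sew-d-small-100k-ft-timit-2")
print(asr("path/to/audio.wav")["text"])  # "path/to/audio.wav" is a placeholder
```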
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.1554 | 0.69 | 100 | 4.0531 | 1.0 |
| 2.9584 | 1.38 | 200 | 2.9775 | 1.0 |
| 2.9355 | 2.07 | 300 | 2.9412 | 1.0 |
| 2.9048 | 2.76 | 400 | 2.9143 | 1.0 |
| 2.8568 | 3.45 | 500 | 2.8786 | 1.0 |
| 2.7248 | 4.14 | 600 | 2.7553 | 0.9833 |
| 2.6124 | 4.83 | 700 | 2.5874 | 1.0511 |
| 2.5463 | 5.52 | 800 | 2.4630 | 1.0883 |
| 2.3302 | 6.21 | 900 | 2.3948 | 1.0651 |
| 2.0669 | 6.9 | 1000 | 2.2228 | 0.9920 |
| 2.1991 | 7.59 | 1100 | 2.0815 | 0.9185 |
| 2.293 | 8.28 | 1200 | 2.0229 | 0.8674 |
| 2.0366 | 8.97 | 1300 | 1.9590 | 0.9165 |
| 1.767 | 9.66 | 1400 | 1.9129 | 0.8125 |
| 1.6222 | 10.34 | 1500 | 1.8868 | 0.8259 |
| 2.173 | 11.03 | 1600 | 1.8691 | 0.8661 |
| 1.8614 | 11.72 | 1700 | 1.8388 | 0.8250 |
| 1.5928 | 12.41 | 1800 | 1.8528 | 0.7772 |
| 1.5978 | 13.1 | 1900 | 1.8002 | 0.7892 |
| 1.9886 | 13.79 | 2000 | 1.7848 | 0.8448 |
| 1.8042 | 14.48 | 2100 | 1.7819 | 0.8156 |
| 1.5488 | 15.17 | 2200 | 1.7615 | 0.8228 |
| 1.4468 | 15.86 | 2300 | 1.7565 | 0.7946 |
| 1.8153 | 16.55 | 2400 | 1.7537 | 0.8341 |
| 1.77 | 17.24 | 2500 | 1.7527 | 0.7958 |
| 1.4742 | 17.93 | 2600 | 1.7592 | 0.7850 |
| 1.4088 | 18.62 | 2700 | 1.7421 | 0.8149 |
| 1.7066 | 19.31 | 2800 | 1.7382 | 0.7977 |
| 1.7068 | 20.0 | 2900 | 1.7357 | 0.7935 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
furyhawk/t5-base-finetuned-bbc-headline
|
furyhawk
| 2021-10-28T15:44:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-bbc-headline
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-bbc-headline
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 167 | 2.2978 | 31.8313 | 10.3824 | 29.6182 | 29.4336 | 10.3153 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
asapp/sew-d-small-100k
|
asapp
| 2021-10-28T14:05:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-small
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
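For plain feature extraction, a minimal sketch (not from the original card), assuming the checkpoint ships a feature-extractor config; the silent waveform stands in for real 16kHz audio:
```python
import torch
from transformers import AutoFeatureExtractor, SEWDModel

model_id = "asapp/sew-d-small-100k"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = SEWDModel.from_pretrained(model_id)

# One second of silent dummy audio at 16kHz; replace with a real waveform.
waveform = torch.zeros(16000).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```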
|
asapp/sew-d-mid-k127-400k
|
asapp
| 2021-10-28T14:04:35Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-mid
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
asapp/sew-d-base-plus-400k
|
asapp
| 2021-10-28T13:55:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-base+
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
asapp/sew-d-base-100k
|
asapp
| 2021-10-28T13:44:39Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew-d",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-D-base
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
|
SajjadAyoubi/distil-bigbird-fa-zwnj
|
SajjadAyoubi
| 2021-10-28T13:14:34Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"big_bird",
"fill-mask",
"arxiv:1810.04805",
"arxiv:2005.12515",
"arxiv:2007.14062",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
<span align="center">
<a href="https://huggingface.co/SajjadAyoubi/"><img src="https://img.shields.io/static/v1?label=%F0%9F%A4%97%20Hugging%20Face&message=SajjadAyoubi&color=yellow"></a>
<a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a>
</span>
# ParsBigBird: Persian Bert For **Long-Range** Sequences
The [BERT](https://arxiv.org/abs/1810.04805) and [ParsBERT](https://arxiv.org/abs/2005.12515) models can handle texts of up to 512 tokens; however, many tasks such as summarization and question answering require longer inputs. In this work, we trained the [BigBird](https://arxiv.org/abs/2007.14062) model for Persian (Farsi), which processes texts of up to 4096 tokens using sparse attention.
## Evaluation: 🌡️
We have evaluated the model on three tasks with different sequence lengths
| Name | Params | SnappFood (F1) | Digikala Magazine(F1) | PersianQA (F1) |
| :--------------------------------------------------------------: | :----: | :-----------------: | :---------------: | :--------------: |
| [distil-bigbird-fa-zwnj](https://github.com/sajjjadayobi/ParsBigBird) | 78M | 85.43% | **94.05%** | **73.34%** |
| [bert-base-fa](https://github.com/hooshvare/parsbert) | 118M | **87.98%** | 93.65% | 70.06% |
- Despite being only as big as DistilBERT, the model performs on par with ParsBERT and is much better on PersianQA, which requires much longer context.
- This evaluation was based on `max_length=2048` (it can be increased up to 4096).
## How to use❓
### As Contextualized Word Embedding
```python
from transformers import BigBirdModel, AutoTokenizer
MODEL_NAME = "SajjadAyoubi/distil-bigbird-fa-zwnj"
# by default it's in `block_sparse` attention mode with block_size=32
model = BigBirdModel.from_pretrained(MODEL_NAME, block_size=32)
# you can use full attention as follows; use this when the input is not longer than 512 tokens
model = BigBirdModel.from_pretrained(MODEL_NAME, attention_type="original_full")
text = "😃 امیدوارم مدل بدردبخوری باشه چون خیلی طول کشید تا ترین بشه"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens) # contextualized embedding
```
### As Fill Blank
```python
from transformers import pipeline
MODEL_NAME = 'SajjadAyoubi/distil-bigbird-fa-zwnj'
fill = pipeline('fill-mask', model=MODEL_NAME, tokenizer=MODEL_NAME)
results = fill('تهران پایتخت [MASK] است.')
print(results[0]['token_str'])
>>> 'ایران'
```
## Pretraining details: 🔭
This model was pretrained with a masked language modeling (MLM) objective on the Persian section of the OSCAR dataset. Following the original BERT training, 15% of tokens were masked. This approach was first described in this [paper](https://arxiv.org/abs/2007.14062) and released in this [repository](https://github.com/google-research/bigbird). Documents longer than 4096 tokens were split into multiple documents, while documents much shorter than 4096 were merged using the [SEP] token. The model is warm-started from the `distilbert-fa` [checkpoint](https://huggingface.co/HooshvareLab/distilbert-fa-zwnj-base).
- For more details, you can take a look at config.json at the model card in 🤗 Model Hub
## Fine Tuning Recommendations: 🐤
Due to the model's memory requirements, `gradient_checkpointing` and `gradient_accumulation` should be used to maintain a reasonable batch size. Since this model isn't really big, it's a good idea to first fine-tune it on your dataset with the masked LM objective (also called intermediate fine-tuning) before training on the main task. In `block_sparse` mode, it doesn't matter how many tokens are fed in: the model just attends to 256 tokens. Furthermore, `original_full` attention should be used for sequence lengths up to 512 (instead of block sparse); a configuration sketch follows.
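A minimal sketch of these memory-saving settings with the 🤗 `Trainer` API (the task head, `num_labels`, and all hyperparameter values below are placeholders, not recommendations):
```python
from transformers import AutoModelForSequenceClassification, TrainingArguments

model_id = "SajjadAyoubi/distil-bigbird-fa-zwnj"
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
model.gradient_checkpointing_enable()  # trade extra compute for a much smaller memory footprint

training_args = TrainingArguments(
    output_dir="bigbird-fa-finetuned",
    per_device_train_batch_size=2,   # whatever fits in memory with long inputs
    gradient_accumulation_steps=16,  # effective batch size of 32
    learning_rate=2e-5,
    num_train_epochs=3,
)
```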
### Fine Tuning Examples 👷♂️👷♀️
| Dataset | Fine Tuning Example |
| ------------------------------------- | ------------------------------------------------------------ |
| Digikala Magazine Text Classification | <a href="https://colab.research.google.com/github/sajjjadayobi/PersianQA/blob/main/notebooks/Demo.ipynb"><img src="https://img.shields.io/static/v1?label=Colab&message=Fine-tuning Example&logo=Google%20Colab&color=f9ab00"></a> |
## Contact us: 🤝
If you have a technical question regarding the model, pretraining, code or publication, please create an issue in the repository. This is the fastest way to reach us.
## Citation: ↩️
We have not published a paper on this work. However, if you use it, please cite us properly with an entry like the one below.
```bibtex
@misc{ParsBigBird,
author = {Ayoubi, Sajjad},
title = {ParsBigBird: Persian Bert For Long-Range Sequences},
year = 2021,
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/SajjjadAyobi/ParsBigBird}},
}
```
|
dkurt/bert-large-uncased-whole-word-masking-squad-int8-0001
|
dkurt
| 2021-10-28T12:09:25Z | 6 | 0 |
transformers
|
[
"transformers",
"bert",
"question-answering",
"arxiv:1810.04805",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
# OpenVINO model bert-large-uncased-whole-word-masking-squad-int8-0001
This is a BERT-large model pretrained on lower-cased English text using whole-word masking and fine-tuned on the SQuAD v1.1 training set. The model performs question answering for English: the input is a premise concatenated with a question about that premise, and the output is the location of the answer to the question inside the premise. For details about the original floating-point model, check out [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805).
The model has been further quantized to INT8 precision using quantization-aware fine-tuning with [NNCF](https://github.com/openvinotoolkit/nncf).
Model source: [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/bert-large-uncased-whole-word-masking-squad-int8-0001)
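A hedged usage sketch, assuming the repository ships transformers-compatible weights and a tokenizer (it is tagged `question-answering`); the officially supported deployment path is OpenVINO via the Open Model Zoo:
```python
# Assumption: the repo can be loaded directly by transformers; otherwise use
# the OpenVINO / Open Model Zoo tooling referenced above.
from transformers import pipeline
qa = pipeline(
'question-answering',
model='dkurt/bert-large-uncased-whole-word-masking-squad-int8-0001',
)
result = qa(
question='What precision was the model quantized to?',
context='The model has been further quantized to INT8 precision using '
'quantization-aware fine-tuning with NNCF.',
)
print(result['answer'], result['score'])
```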
|
anton-l/sew-mid-100k-ft-common-language
|
anton-l
| 2021-10-28T10:52:41Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew",
"audio-classification",
"generated_from_trainer",
"dataset:common_language",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: sew-mid-100k-ft-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-mid-100k-ft-common-language
This model is a fine-tuned version of [asapp/sew-mid-100k](https://huggingface.co/asapp/sew-mid-100k) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1189
- Accuracy: 0.3842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.608 | 1.0 | 173 | 3.7266 | 0.0540 |
| 3.1298 | 2.0 | 346 | 3.2180 | 0.1654 |
| 2.8481 | 3.0 | 519 | 2.9270 | 0.2019 |
| 2.648 | 4.0 | 692 | 2.6991 | 0.2619 |
| 2.5 | 5.0 | 865 | 2.5236 | 0.3004 |
| 2.2578 | 6.0 | 1038 | 2.4019 | 0.3212 |
| 2.2782 | 7.0 | 1211 | 2.1698 | 0.3658 |
| 2.1665 | 8.0 | 1384 | 2.1976 | 0.3631 |
| 2.1626 | 9.0 | 1557 | 2.1473 | 0.3791 |
| 2.1514 | 10.0 | 1730 | 2.1189 | 0.3842 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Tejas003/distillbert_base_uncased_amazon_review_sentiment_300
|
Tejas003
| 2021-10-28T09:07:25Z | 4 | 3 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
Product Review Sentiment Classification
1. Label0 - Negative
2. Label1 - Positive
Trained so far on 20000 Balanced Positive and Negative Reviews
|
quangtran199hust/layoutlmv2_e
|
quangtran199hust
| 2021-10-28T08:17:21Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2_e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2_e
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Tokenizers 0.10.3
|
hchc/distilbert-base-uncased-finetuned-cola
|
hchc
| 2021-10-28T08:08:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5451837431775948
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8508
- Matthews Correlation: 0.5452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5221 | 1.0 | 535 | 0.5370 | 0.4246 |
| 0.3462 | 2.0 | 1070 | 0.5157 | 0.5183 |
| 0.2332 | 3.0 | 1605 | 0.6324 | 0.5166 |
| 0.1661 | 4.0 | 2140 | 0.7616 | 0.5370 |
| 0.1263 | 5.0 | 2675 | 0.8508 | 0.5452 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
quangtran199hust/layoutlmv2_roige
|
quangtran199hust
| 2021-10-28T07:32:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2_roige
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2_roige
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.14.0
- Tokenizers 0.10.3
|
patrickvonplaten/sew-d-mid-400k-librispeech-clean-100h-ft
|
patrickvonplaten
| 2021-10-27T23:44:33Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew-d",
"automatic-speech-recognition",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: sew-d-mid-400k-librispeech-clean-100h-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-d-mid-400k-librispeech-clean-100h-ft
This model is a fine-tuned version of [asapp/sew-d-mid-400k](https://huggingface.co/asapp/sew-d-mid-400k) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3540
- Wer: 1.0536
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.319 | 0.11 | 100 | 11.0572 | 1.0 |
| 3.6726 | 0.22 | 200 | 4.2003 | 1.0 |
| 2.981 | 0.34 | 300 | 3.5742 | 0.9919 |
| 2.9411 | 0.45 | 400 | 3.2599 | 1.0 |
| 2.903 | 0.56 | 500 | 2.9350 | 1.0 |
| 2.8597 | 0.67 | 600 | 2.9514 | 1.0 |
| 2.7771 | 0.78 | 700 | 2.8521 | 1.0 |
| 2.7926 | 0.9 | 800 | 2.7821 | 1.0120 |
| 2.6623 | 1.01 | 900 | 2.7027 | 0.9924 |
| 2.5893 | 1.12 | 1000 | 2.6667 | 1.0240 |
| 2.5733 | 1.23 | 1100 | 2.6341 | 1.0368 |
| 2.5455 | 1.35 | 1200 | 2.5928 | 1.0411 |
| 2.4919 | 1.46 | 1300 | 2.5695 | 1.0817 |
| 2.5182 | 1.57 | 1400 | 2.5559 | 1.1072 |
| 2.4766 | 1.68 | 1500 | 2.5229 | 1.1257 |
| 2.4267 | 1.79 | 1600 | 2.4991 | 1.1151 |
| 2.3919 | 1.91 | 1700 | 2.4768 | 1.1139 |
| 2.3883 | 2.02 | 1800 | 2.4452 | 1.0636 |
| 2.3737 | 2.13 | 1900 | 2.4304 | 1.0594 |
| 2.3569 | 2.24 | 2000 | 2.4095 | 1.0539 |
| 2.3641 | 2.35 | 2100 | 2.3997 | 1.0511 |
| 2.3281 | 2.47 | 2200 | 2.3856 | 1.0414 |
| 2.2912 | 2.58 | 2300 | 2.3750 | 1.0696 |
| 2.3028 | 2.69 | 2400 | 2.3684 | 1.0436 |
| 2.2906 | 2.8 | 2500 | 2.3613 | 1.0538 |
| 2.2822 | 2.91 | 2600 | 2.3558 | 1.0506 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.4.dev0
- Tokenizers 0.10.3
|
anton-l/wav2vec2-base-ft-keyword-spotting
|
anton-l
| 2021-10-27T22:16:42Z | 468 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ft-keyword-spotting
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0824
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
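A minimal usage sketch (not part of the auto-generated card), assuming the checkpoint works with the generic `audio-classification` pipeline; `sample.wav` is a placeholder for a 16 kHz mono recording of a spoken keyword.
```python
# Sketch only: classify a short 16 kHz recording with the fine-tuned checkpoint.
# "sample.wav" is a placeholder path, not a file shipped with the model.
from transformers import pipeline
classifier = pipeline(
'audio-classification',
model='anton-l/wav2vec2-base-ft-keyword-spotting',
)
for prediction in classifier('sample.wav', top_k=3):
print(f"{prediction['label']}: {prediction['score']:.3f}")
```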
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8972 | 1.0 | 399 | 0.7023 | 0.8174 |
| 0.3274 | 2.0 | 798 | 0.1634 | 0.9773 |
| 0.1993 | 3.0 | 1197 | 0.1048 | 0.9788 |
| 0.1777 | 4.0 | 1596 | 0.0824 | 0.9826 |
| 0.1527 | 5.0 | 1995 | 0.0812 | 0.9810 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
anton-l/distilhubert-ft-common-language
|
anton-l
| 2021-10-27T21:29:13Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:common_language",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- common_language
metrics:
- accuracy
model-index:
- name: distilhubert-ft-common-language
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-ft-common-language
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the common_language dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7214
- Accuracy: 0.2797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 4
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.6543 | 1.0 | 173 | 3.7611 | 0.0491 |
| 3.2221 | 2.0 | 346 | 3.4868 | 0.1352 |
| 2.9332 | 3.0 | 519 | 3.2732 | 0.1861 |
| 2.7299 | 4.0 | 692 | 3.0944 | 0.2172 |
| 2.5638 | 5.0 | 865 | 2.9790 | 0.2400 |
| 2.3871 | 6.0 | 1038 | 2.8668 | 0.2590 |
| 2.3384 | 7.0 | 1211 | 2.7972 | 0.2653 |
| 2.2648 | 8.0 | 1384 | 2.7625 | 0.2695 |
| 2.2162 | 9.0 | 1557 | 2.7405 | 0.2782 |
| 2.1915 | 10.0 | 1730 | 2.7214 | 0.2797 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
huggingtweets/void_vomicae
|
huggingtweets
| 2021-10-27T21:01:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/void_vomicae/1635368467642/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1452295981517742087/v8HfhHLT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">《 𝚟 o̶ 𝚒 𝚍 》</div>
<div style="text-align: center; font-size: 14px;">@void_vomicae</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 《 𝚟 o̶ 𝚒 𝚍 》.
| Data | 《 𝚟 o̶ 𝚒 𝚍 》 |
| --- | --- |
| Tweets downloaded | 2083 |
| Retweets | 417 |
| Short tweets | 422 |
| Tweets kept | 1244 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/fju0lp9t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @void_vomicae's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1wos3ytc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/void_vomicae')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
anton-l/distilhubert-ft-keyword-spotting
|
anton-l
| 2021-10-27T19:00:06Z | 93 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: distilhubert-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-ft-keyword-spotting
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1163
- Accuracy: 0.9706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 256
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8176 | 1.0 | 200 | 0.7718 | 0.8116 |
| 0.2364 | 2.0 | 400 | 0.2107 | 0.9662 |
| 0.1198 | 3.0 | 600 | 0.1374 | 0.9678 |
| 0.0891 | 4.0 | 800 | 0.1163 | 0.9706 |
| 0.085 | 5.0 | 1000 | 0.1180 | 0.9690 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
prajjwal1/bert-small
|
prajjwal1
| 2021-10-27T18:31:52Z | 442,830 | 23 |
transformers
|
[
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task.
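For example, a minimal sketch (not from the original repository) of attaching a hypothetical two-label classification head to this checkpoint before fine-tuning on downstream data:
```python
# Minimal sketch: the checkpoint is pre-training only, so the classification
# head added here is randomly initialized and must be trained on your task.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-small')
model = AutoModelForSequenceClassification.from_pretrained(
'prajjwal1/bert-small',
num_labels=2, # hypothetical binary task
)
inputs = tokenizer('This is a sample sentence.', return_tensors='pt')
logits = model(**inputs).logits # untrained head -> meaningless scores for now
print(logits.shape)
```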
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-medium
|
prajjwal1
| 2021-10-27T18:30:16Z | 37,177 | 3 |
transformers
|
[
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-small](https://huggingface.co/prajjwal1/bert-small). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task.
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Other models to check out:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
prajjwal1/bert-tiny
|
prajjwal1
| 2021-10-27T18:29:01Z | 487,487 | 103 |
transformers
|
[
"transformers",
"pytorch",
"BERT",
"MNLI",
"NLI",
"transformer",
"pre-training",
"en",
"arxiv:1908.08962",
"arxiv:2110.01518",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---
The following model is a Pytorch pre-trained model obtained from converting Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task.
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
Other models to check out:
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
patrickvonplaten/wav2vec2-large-xlsr-129-turkish-colab
|
patrickvonplaten
| 2021-10-27T17:08:13Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-129-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-129-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-129](https://huggingface.co/facebook/wav2vec2-large-xlsr-129) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3149
- Wer: 0.4748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.4837 | 3.67 | 400 | 3.2526 | 1.0 |
| 3.0896 | 7.34 | 800 | 2.8037 | 1.0 |
| 1.5604 | 11.01 | 1200 | 0.5688 | 0.6613 |
| 0.6511 | 14.68 | 1600 | 0.3998 | 0.5580 |
| 0.4798 | 18.35 | 2000 | 0.3505 | 0.5118 |
| 0.4047 | 22.02 | 2400 | 0.3273 | 0.4858 |
| 0.3519 | 25.69 | 2800 | 0.3224 | 0.4796 |
| 0.343 | 29.36 | 3200 | 0.3149 | 0.4748 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
|
en/distilbert-base-uncased-finetuned-squad
|
en
| 2021-10-27T15:09:11Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2065 | 1.0 | 5577 | 1.1289 |
| 0.9226 | 2.0 | 11154 | 1.1019 |
| 0.7411 | 3.0 | 16731 | 1.1453 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
philschmid/pt-tblard-tf-allocine
|
philschmid
| 2021-10-27T13:54:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: fr
---
# Pytorch Fork of [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine)
A french sentiment analysis model, based on [CamemBERT](https://camembert-model.fr/), and finetuned on a large-scale dataset scraped from [Allociné.fr](http://www.allocine.fr/) user reviews.
## Results
| Validation Accuracy | Validation F1-Score | Test Accuracy | Test F1-Score |
|--------------------:| -------------------:| -------------:|--------------:|
| 97.39 | 97.36 | 97.44 | 97.34 |
The dataset and the evaluation code are available on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert).
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
# This repository is the PyTorch port; if the tokenizer files are missing here,
# they can also be loaded from the original "tblard/tf-allocine" repo.
tokenizer = AutoTokenizer.from_pretrained("philschmid/pt-tblard-tf-allocine")
model = AutoModelForSequenceClassification.from_pretrained("philschmid/pt-tblard-tf-allocine")
nlp = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)
print(nlp("Alad'2 est clairement le meilleur film de l'année 2018.")) # POSITIVE
print(nlp("Juste whoaaahouuu !")) # POSITIVE
print(nlp("NUL...A...CHIER ! FIN DE TRANSMISSION.")) # NEGATIVE
print(nlp("Je m'attendais à mieux de la part de Franck Dubosc !")) # NEGATIVE
```
## Author
Théophile Blard – :email: [email protected]
If you use this work (code, model or dataset), please cite as:
> Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <https://github.com/TheophileBlard/french-sentiment-analysis-with-bert>
|
doc2query/yahoo_answers-t5-base-v1
|
doc2query
| 2021-10-27T12:56:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:1904.08375",
"arxiv:2104.08663",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- datasets/sentence-transformers/embedding-training-data
widget:
- text: "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
license: apache-2.0
---
# doc2query/yahoo_answers-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each of your paragraphs and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, they re-weight words, giving important words a higher weight even if they appear seldom in a paragraph. In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/yahoo_answers-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
input_ids=input_ids,
max_length=64,
do_sample=True,
top_p=0.95,
num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
query = tokenizer.decode(outputs[i], skip_special_tokens=True)
print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
## Training
This model was fine-tuned from [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 111k training steps. For the training script, see the `train_script.py` in this repository.
The input text was truncated to 320 word pieces. Output text was generated with up to 64 word pieces.
This model was trained on (title, answer) pairs from [Yahoo Answers](https://huggingface.co/datasets/sentence-transformers/embedding-training-data).
|
patrickvonplaten/unispeech-large-1500h-cv-timit
|
patrickvonplaten
| 2021-10-27T10:50:16Z | 5,699 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"unispeech",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: unispeech-large-1500h-cv-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unispeech-large-1500h-cv-timit
This model is a fine-tuned version of [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3099
- Wer: 0.2196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.64 | 0.69 | 100 | 3.9717 | 0.9981 |
| 2.6793 | 1.38 | 200 | 2.6264 | 1.0 |
| 1.2221 | 2.07 | 300 | 0.9999 | 0.7167 |
| 0.9009 | 2.76 | 400 | 0.6509 | 0.5570 |
| 0.4352 | 3.45 | 500 | 0.4682 | 0.4332 |
| 0.227 | 4.14 | 600 | 0.3661 | 0.3565 |
| 0.2169 | 4.83 | 700 | 0.3244 | 0.3203 |
| 0.2687 | 5.52 | 800 | 0.3137 | 0.2981 |
| 0.127 | 6.21 | 900 | 0.3220 | 0.2828 |
| 0.0922 | 6.9 | 1000 | 0.3075 | 0.2708 |
| 0.0965 | 7.59 | 1100 | 0.2779 | 0.2576 |
| 0.1298 | 8.28 | 1200 | 0.3111 | 0.2480 |
| 0.0855 | 8.97 | 1300 | 0.3021 | 0.2421 |
| 0.0629 | 9.66 | 1400 | 0.3122 | 0.2511 |
| 0.0471 | 10.34 | 1500 | 0.2965 | 0.2368 |
| 0.0871 | 11.03 | 1600 | 0.3247 | 0.2387 |
| 0.0503 | 11.72 | 1700 | 0.3359 | 0.2363 |
| 0.0402 | 12.41 | 1800 | 0.2976 | 0.2332 |
| 0.0336 | 13.1 | 1900 | 0.3139 | 0.2321 |
| 0.0634 | 13.79 | 2000 | 0.3188 | 0.2309 |
| 0.0429 | 14.48 | 2100 | 0.3145 | 0.2335 |
| 0.028 | 15.17 | 2200 | 0.3244 | 0.2242 |
| 0.0255 | 15.86 | 2300 | 0.2914 | 0.2196 |
| 0.0406 | 16.55 | 2400 | 0.3249 | 0.2202 |
| 0.0512 | 17.24 | 2500 | 0.3037 | 0.2198 |
| 0.0269 | 17.93 | 2600 | 0.3218 | 0.2242 |
| 0.0287 | 18.62 | 2700 | 0.3106 | 0.2185 |
| 0.0319 | 19.31 | 2800 | 0.3124 | 0.2217 |
| 0.0494 | 20.0 | 2900 | 0.3099 | 0.2196 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-base-timit-fine-tuned
|
patrickvonplaten
| 2021-10-27T10:49:08Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: wav2vec2-base-timit-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-fine-tuned
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3457
- Wer: 0.2151
## Model description
More information needed
## Intended uses & limitations
More information needed
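A minimal usage sketch (not part of the auto-generated card), assuming the checkpoint works with the generic `automatic-speech-recognition` pipeline; `sample.wav` is a placeholder for a 16 kHz mono English recording.
```python
# Sketch only: transcribe a 16 kHz recording with the fine-tuned checkpoint.
# "sample.wav" is a placeholder path, not a file shipped with the model.
from transformers import pipeline
asr = pipeline(
'automatic-speech-recognition',
model='patrickvonplaten/wav2vec2-base-timit-fine-tuned',
)
print(asr('sample.wav')['text'])
```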
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1621 | 0.69 | 100 | 3.1102 | 1.0 |
| 2.9592 | 1.38 | 200 | 2.9603 | 1.0 |
| 2.9116 | 2.07 | 300 | 2.8921 | 1.0 |
| 2.1332 | 2.76 | 400 | 1.9718 | 0.9958 |
| 0.8477 | 3.45 | 500 | 0.7813 | 0.5237 |
| 0.4251 | 4.14 | 600 | 0.5166 | 0.3982 |
| 0.3743 | 4.83 | 700 | 0.4400 | 0.3578 |
| 0.4194 | 5.52 | 800 | 0.4077 | 0.3370 |
| 0.23 | 6.21 | 900 | 0.4018 | 0.3142 |
| 0.1554 | 6.9 | 1000 | 0.3623 | 0.2995 |
| 0.1511 | 7.59 | 1100 | 0.3433 | 0.2697 |
| 0.1983 | 8.28 | 1200 | 0.3539 | 0.2715 |
| 0.1443 | 8.97 | 1300 | 0.3622 | 0.2551 |
| 0.0971 | 9.66 | 1400 | 0.3580 | 0.2519 |
| 0.0764 | 10.34 | 1500 | 0.3529 | 0.2437 |
| 0.1203 | 11.03 | 1600 | 0.3455 | 0.2431 |
| 0.0881 | 11.72 | 1700 | 0.3648 | 0.2415 |
| 0.0521 | 12.41 | 1800 | 0.3564 | 0.2320 |
| 0.0434 | 13.1 | 1900 | 0.3485 | 0.2270 |
| 0.0864 | 13.79 | 2000 | 0.3517 | 0.2228 |
| 0.0651 | 14.48 | 2100 | 0.3506 | 0.2285 |
| 0.0423 | 15.17 | 2200 | 0.3428 | 0.2247 |
| 0.0302 | 15.86 | 2300 | 0.3372 | 0.2198 |
| 0.0548 | 16.55 | 2400 | 0.3496 | 0.2196 |
| 0.0674 | 17.24 | 2500 | 0.3407 | 0.2166 |
| 0.0291 | 17.93 | 2600 | 0.3512 | 0.2171 |
| 0.0298 | 18.62 | 2700 | 0.3363 | 0.2158 |
| 0.0419 | 19.31 | 2800 | 0.3493 | 0.2145 |
| 0.046 | 20.0 | 2900 | 0.3457 | 0.2151 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/sew-small-100k-timit
|
patrickvonplaten
| 2021-10-27T10:44:41Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"sew",
"automatic-speech-recognition",
"timit_asr",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- timit_asr
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: sew-small-100k-timit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sew-small-100k-timit
This model is a fine-tuned version of [asapp/sew-small-100k](https://huggingface.co/asapp/sew-small-100k) on the TIMIT_ASR - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4926
- Wer: 0.2988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.071 | 0.69 | 100 | 3.0262 | 1.0 |
| 2.9304 | 1.38 | 200 | 2.9297 | 1.0 |
| 2.8823 | 2.07 | 300 | 2.8367 | 1.0 |
| 1.5668 | 2.76 | 400 | 1.2310 | 0.8807 |
| 0.7422 | 3.45 | 500 | 0.7080 | 0.5957 |
| 0.4121 | 4.14 | 600 | 0.5829 | 0.5073 |
| 0.3981 | 4.83 | 700 | 0.5153 | 0.4461 |
| 0.5038 | 5.52 | 800 | 0.4908 | 0.4151 |
| 0.2899 | 6.21 | 900 | 0.5122 | 0.4111 |
| 0.2198 | 6.9 | 1000 | 0.4908 | 0.3803 |
| 0.2129 | 7.59 | 1100 | 0.4668 | 0.3789 |
| 0.3007 | 8.28 | 1200 | 0.4788 | 0.3562 |
| 0.2264 | 8.97 | 1300 | 0.5113 | 0.3635 |
| 0.1536 | 9.66 | 1400 | 0.4950 | 0.3441 |
| 0.1206 | 10.34 | 1500 | 0.5062 | 0.3421 |
| 0.2021 | 11.03 | 1600 | 0.4900 | 0.3283 |
| 0.1458 | 11.72 | 1700 | 0.5019 | 0.3307 |
| 0.1151 | 12.41 | 1800 | 0.4989 | 0.3270 |
| 0.0985 | 13.1 | 1900 | 0.4925 | 0.3173 |
| 0.1412 | 13.79 | 2000 | 0.4868 | 0.3125 |
| 0.1579 | 14.48 | 2100 | 0.4983 | 0.3147 |
| 0.1043 | 15.17 | 2200 | 0.4914 | 0.3091 |
| 0.0773 | 15.86 | 2300 | 0.4858 | 0.3102 |
| 0.1327 | 16.55 | 2400 | 0.5084 | 0.3064 |
| 0.1281 | 17.24 | 2500 | 0.5017 | 0.3025 |
| 0.0845 | 17.93 | 2600 | 0.5001 | 0.3012 |
| 0.0717 | 18.62 | 2700 | 0.4894 | 0.3004 |
| 0.0835 | 19.31 | 2800 | 0.4963 | 0.2998 |
| 0.1181 | 20.0 | 2900 | 0.4926 | 0.2988 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
patrickvonplaten/wav2vec2-xlarge-dotdotdot-common_voice-tr-demo
|
patrickvonplaten
| 2021-10-27T10:41:06Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- tr
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xlarge-...-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlarge-...-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-xlarge-xlsr-...](https://huggingface.co/facebook/wav2vec2-xlarge-xlsr-...) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2701
- Wer: 0.2309
- Cer: 0.0527
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4388 | 3.7 | 400 | 1.366 | 0.9701 |
| 0.3766 | 7.4 | 800 | 0.4914 | 0.5374 |
| 0.2295 | 11.11 | 1200 | 0.3934 | 0.4125 |
| 0.1121 | 14.81 | 1600 | 0.3264 | 0.2904 |
| 0.1473 | 18.51 | 2000 | 0.3103 | 0.2671 |
| 0.1013 | 22.22 | 2400 | 0.2589 | 0.2324 |
| 0.0704 | 25.92 | 2800 | 0.2826 | 0.2339 |
| 0.0537 | 29.63 | 3200 | 0.2704 | 0.2309 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
suwani/BERT_NER_Ep6_PAD_50-finetuned-ner
|
suwani
| 2021-10-27T10:28:40Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERT_NER_Ep6_PAD_50-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_NER_Ep6_PAD_50-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3741
- Precision: 0.6510
- Recall: 0.7399
- F1: 0.6926
- Accuracy: 0.9020
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 288 | 0.3648 | 0.5949 | 0.5907 | 0.5928 | 0.8792 |
| 0.4815 | 2.0 | 576 | 0.3400 | 0.5860 | 0.7390 | 0.6536 | 0.8867 |
| 0.4815 | 3.0 | 864 | 0.3217 | 0.6404 | 0.7129 | 0.6747 | 0.8992 |
| 0.2206 | 4.0 | 1152 | 0.3430 | 0.6413 | 0.7321 | 0.6837 | 0.8995 |
| 0.2206 | 5.0 | 1440 | 0.3560 | 0.6464 | 0.7377 | 0.6890 | 0.9010 |
| 0.1487 | 6.0 | 1728 | 0.3741 | 0.6510 | 0.7399 | 0.6926 | 0.9020 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
peter2000/xlm-roberta-base-finetuned-ecoicop
|
peter2000
| 2021-10-27T09:02:06Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-ecoicop
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ecoicop
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1685
- Acc: 0.9659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4224 | 1.0 | 2577 | 0.3612 | 0.9132 |
| 0.2313 | 2.0 | 5154 | 0.2510 | 0.9441 |
| 0.1746 | 3.0 | 7731 | 0.1928 | 0.9569 |
| 0.1325 | 4.0 | 10308 | 0.1731 | 0.9640 |
| 0.0946 | 5.0 | 12885 | 0.1685 | 0.9659 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
VariableZee/DialoGPT-small-ivylia03
|
VariableZee
| 2021-10-27T08:50:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
tags:
- conversational
---
|
espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
|
espnet
| 2021-10-27T02:55:53Z | 3 | 11 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:wenetspeech",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: zh
datasets:
- wenetspeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char`
This model was trained by Pengcheng Guo using wenetspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 5c21f63e45e0961a5d817017c282b0cafd68a3aa
pip install -e .
cd egs2/wenetspeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 6 15:11:20 CST 2021`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_conformer_raw_zh_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|7176|67.1|32.9|0.0|0.1|33.0|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|16684|32.1|54.1|13.8|0.1|68.0|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|8599|13.4|84.6|2.0|0.1|86.7|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|25995|46.2|50.4|3.4|1.1|54.9|52.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|104765|96.3|3.6|0.1|0.2|3.9|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|333357|90.7|3.4|5.9|0.4|9.7|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|220614|84.6|5.0|10.4|0.5|15.9|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|416968|91.8|5.3|2.9|0.6|8.8|52.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_zh_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 44205
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 30000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char/train/speech_shape
- exp/asr_stats_raw_zh_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char/valid/speech_shape
- exp/asr_stats_raw_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_l/wav.scp
- speech
- sound
- - dump/raw/train_l/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0015
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- 的
- 我
- 是
- 你
- 了
- 一
- 不
- 这
- 个
- 有
- 就
- 们
- 在
- 他
- 人
- 么
- 来
- 说
- 那
- 要
- 好
- 啊
- 大
- 到
- 上
- 也
- 没
- 都
- 去
- 能
- 子
- 会
- 为
- 得
- 时
- 还
- 可
- 以
- 什
- 家
- 后
- 看
- 呢
- 对
- 事
- 天
- 下
- 过
- 想
- 多
- 小
- 出
- 自
- 儿
- 生
- 给
- 里
- 现
- 着
- 然
- 吧
- 样
- 道
- 吗
- 心
- 跟
- 中
- 很
- 点
- 年
- 和
- 地
- 怎
- 知
- 十
- 老
- 当
- 把
- 话
- 别
- 所
- 之
- 情
- 实
- 开
- 面
- 回
- 行
- 国
- 做
- 己
- 经
- 如
- 真
- 起
- 候
- 些
- 让
- 发
- 她
- 觉
- 但
- 成
- 定
- 意
- 二
- 长
- 最
- 方
- 三
- 前
- 因
- 用
- 呀
- 种
- 只
- 走
- 其
- 问
- 再
- 果
- 而
- 分
- 两
- 打
- 学
- 间
- 您
- 本
- 于
- 明
- 手
- 公
- 听
- 比
- 作
- 女
- 太
- 今
- 从
- 关
- 妈
- 同
- 法
- 动
- 已
- 见
- 才
- 孩
- 感
- 吃
- 常
- 次
- 它
- 进
- 先
- 找
- 身
- 全
- 理
- 又
- 力
- 正
- 主
- 应
- 高
- 被
- 钱
- 快
- 等
- 头
- 重
- 车
- 谢
- 日
- 东
- 放
- 无
- 工
- 咱
- 哪
- 五
- 者
- 像
- 西
- 该
- 干
- 相
- 信
- 机
- 百
- 特
- 业
- 活
- 师
- 边
- 爱
- 友
- 新
- 外
- 位
- 更
- 直
- 几
- 第
- 非
- 四
- 题
- 接
- 少
- 哥
- 死
- 完
- 刚
- 电
- 气
- 安
- 爸
- 白
- 告
- 美
- 解
- 叫
- 月
- 带
- 欢
- 谁
- 体
- 喜
- 部
- 场
- 姐
- 军
- 万
- 结
- 合
- 难
- 八
- 每
- 目
- 亲
- 朋
- 认
- 总
- 加
- 通
- 办
- 马
- 件
- 受
- 任
- 请
- 住
- 王
- 思
- 门
- 名
- 平
- 系
- 文
- 帮
- 路
- 变
- 记
- 水
- 九
- 算
- 将
- 口
- 男
- 度
- 报
- 六
- 张
- 管
- 够
- 性
- 表
- 提
- 何
- 讲
- 期
- 拿
- 保
- 嘛
- 司
- 原
- 始
- 此
- 诉
- 处
- 清
- 内
- 产
- 金
- 晚
- 早
- 交
- 离
- 眼
- 队
- 七
- 入
- 山
- 代
- 市
- 海
- 物
- 零
- 望
- 世
- 婚
- 命
- 越
- 收
- 向
- 花
- 房
- 错
- 节
- 父
- 反
- 战
- 买
- 量
- 或
- 员
- 号
- 千
- 怕
- 底
- 且
- 品
- 民
- 化
- 爷
- 并
- 与
- 服
- 需
- 资
- 求
- 教
- 娘
- 医
- 数
- 院
- 书
- 利
- 往
- 确
- 各
- 单
- 风
- 送
- 必
- 条
- 包
- 准
- 光
- 整
- 病
- 弟
- 嗯
- 计
- 照
- 强
- 务
- 影
- 城
- 夫
- 俩
- 决
- 声
- 连
- 乐
- 息
- 远
- 北
- 至
- 饭
- 留
- 宝
- 神
- 近
- 考
- 备
- 案
- 界
- 容
- 况
- 母
- 较
- 持
- 证
- 选
- 制
- 程
- 喝
- 害
- 字
- 失
- 立
- 台
- 玩
- 查
- 块
- 便
- 挺
- 段
- 周
- 由
- 句
- 紧
- 李
- 据
- 杀
- 南
- 商
- 识
- 网
- 式
- 愿
- 传
- 流
- 消
- 伤
- 根
- 演
- 希
- 故
- 坐
- 建
- 注
- 许
- 调
- 共
- 空
- 半
- 却
- 酒
- 联
- 微
- 言
- 肯
- 赶
- 跑
- 笑
- 区
- 岁
- 红
- 达
- 官
- 轻
- 易
- 火
- 线
- 拉
- 首
- 导
- 团
- 慢
- 指
- 写
- 深
- 论
- 片
- 改
- 啥
- 满
- 步
- 音
- 功
- 聊
- 客
- 未
- 格
- 基
- 睡
- 观
- 份
- 视
- 色
- 价
- 政
- 转
- 终
- 复
- 啦
- 呃
- 阿
- 倒
- 义
- 警
- 林
- 使
- 科
- 运
- 苦
- 待
- 费
- 随
- 救
- 试
- 班
- 敢
- 精
- 及
- 术
- 造
- 续
- 养
- 展
- 答
- 绝
- 众
- 站
- 妹
- 差
- 谈
- 卖
- 播
- 创
- 领
- 象
- 志
- 投
- 习
- 兄
- 元
- 皇
- 专
- 态
- 急
- 局
- 兴
- 楚
- 飞
- 护
- 装
- 热
- 奶
- 取
- 设
- 游
- 读
- 福
- 药
- 担
- 历
- 忙
- 规
- 掉
- 刘
- 切
- 断
- 尽
- 社
- 久
- 支
- 板
- 星
- 姑
- 曾
- 突
- 除
- 华
- 责
- 排
- 京
- 值
- 士
- 统
- 换
- 德
- 衣
- 组
- 示
- 脸
- 刻
- 黑
- 遇
- 虽
- 顾
- 戏
- 怪
- 懂
- 叔
- 夜
- 陈
- 亮
- 江
- 兵
- 负
- 布
- 青
- 落
- 推
- 假
- 类
- 令
- 技
- 英
- 质
- 黄
- 治
- 形
- 助
- 球
- 歌
- 参
- 广
- 继
- 简
- 画
- 奇
- 陪
- 阳
- 险
- 须
- 念
- 迎
- 幸
- 抓
- 破
- 另
- 争
- 竟
- 户
- 律
- 择
- 究
- 龙
- 足
- 店
- 脑
- 斯
- 党
- 权
- 约
- 疑
- 议
- 严
- 密
- 克
- 存
- 穿
- 承
- 校
- 击
- 际
- 标
- 云
- 营
- 察
- 超
- 食
- 集
- 级
- 礼
- 静
- 背
- 武
- 初
- 拍
- 梦
- 验
- 响
- 角
- 石
- 股
- 追
- 怀
- 婆
- 适
- 独
- 忘
- 血
- 醒
- 具
- 罪
- 享
- 毛
- 香
- 状
- 配
- 靠
- 语
- 仅
- 低
- 细
- 米
- 既
- 钟
- 极
- 停
- 味
- 则
- 油
- 器
- 楼
- 菜
- 研
- 互
- 压
- 贵
- 村
- 属
- 派
- 乎
- 坏
- 控
- 显
- 图
- 双
- 职
- 永
- 哈
- 鬼
- 依
- 料
- 按
- 府
- 坚
- 某
- 甚
- 居
- 练
- 顺
- 模
- 即
- 州
- 引
- 乱
- 速
- 庭
- 朝
- 室
- 似
- 付
- 划
- 尔
- 境
- 犯
- 烦
- 环
- 伙
- 巴
- 春
- 古
- 妇
- 势
- 款
- 增
- 财
- 河
- 守
- 虑
- 汉
- 枪
- 妻
- 爹
- 弄
- 委
- 企
- 冲
- 置
- 麻
- 育
- 项
- 防
- 胡
- 杨
- 致
- 辈
- 括
- 毕
- 卫
- 修
- 史
- 型
- 牌
- 嘴
- 苏
- 群
- 举
- 痛
- 座
- 概
- 搞
- 围
- 土
- 毒
- 唱
- 冷
- 累
- 玉
- 获
- 误
- 跳
- 脚
- 雨
- 剧
- 休
- 皮
- 止
- 济
- 肉
- 丽
- 借
- 铁
- 牛
- 哭
- 招
- 闹
- 银
- 优
- 温
- 狗
- 退
- 洗
- 拜
- 否
- 票
- 偷
- 抱
- 博
- 般
- 效
- 套
- 维
- 普
- 康
- 富
- 宫
- 索
- 罗
- 堂
- 智
- 省
- 介
- 孙
- 灵
- 评
- 藏
- 称
- 课
- 货
- 姨
- 艺
- 骗
- 雪
- 赛
- 景
- 昨
- 健
- 鱼
- 激
- 危
- 熟
- 圈
- 闻
- 监
- 替
- 君
- 恋
- 良
- 掌
- 草
- 松
- 供
- 努
- 例
- 短
- 帝
- 姓
- 率
- 族
- 亿
- 赵
- 蛋
- 判
- 预
- 频
- 卡
- 架
- 纪
- 弃
- 秀
- 兰
- 层
- 检
- 伴
- 抗
- 讨
- 源
- 夏
- 咋
- 惊
- 录
- 善
- 补
- 刀
- 充
- 升
- 章
- 午
- 若
- 私
- 吴
- 素
- 旅
- 临
- 挑
- 唐
- 露
- 树
- 斗
- 舞
- 左
- 叶
- 副
- 晓
- 厂
- 弹
- 印
- 秘
- 屋
- 田
- 木
- 困
- 园
- 封
- 逃
- 批
- 馆
- 疼
- 败
- 陆
- 敌
- 散
- 采
- 翻
- 缺
- 胜
- 免
- 销
- 鸡
- 降
- 波
- 测
- 限
- 释
- 忍
- 归
- 床
- 餐
- 茶
- 码
- 宁
- 乡
- 辛
- 彩
- 亚
- 浪
- 漂
- 庆
- 训
- 范
- 烧
- 词
- 吵
- 媳
- 探
- 余
- 恐
- 积
- 农
- 遍
- 舒
- 顶
- 构
- 呼
- 丝
- 执
- 雅
- 惯
- 右
- 脱
- 恩
- 野
- 折
- 趣
- 笔
- 谓
- 盘
- 贝
- 宣
- 绍
- 嘉
- 宋
- 抢
- 嫌
- 尊
- 碰
- 绪
- 丢
- 厉
- 沙
- 轮
- 施
- 织
- 托
- 县
- 策
- 杯
- 逼
- 傻
- 束
- 街
- 疗
- 益
- 骨
- 迷
- 姻
- 恶
- 默
- 寻
- 搜
- 哦
- 材
- 吸
- 劳
- 勇
- 占
- 暴
- 船
- 徐
- 虎
- 融
- 异
- 审
- 攻
- 雷
- 稳
- 呗
- 输
- 睛
- 臣
- 端
- 威
- 秋
- 欧
- 冰
- 韩
- 减
- <space>
- 操
- 混
- 汽
- 暗
- 隐
- 嫂
- 沉
- 烟
- 顿
- 凭
- 洋
- 嫁
- 购
- 粉
- 遗
- 杂
- 协
- 尝
- 键
- 亡
- 秦
- 纸
- 拥
- 革
- 猫
- 伯
- 祝
- 签
- 傅
- 牙
- 湖
- 莫
- 杰
- 旁
- 港
- 劲
- 宗
- 偏
- 触
- 唯
- 吓
- 辆
- 沈
- 列
- 梅
- 祖
- 舍
- 尤
- 赚
- 疫
- 腾
- 拼
- 奖
- 刺
- 齐
- 诚
- 媒
- 戴
- 账
- 炸
- 骂
- 避
- 麦
- 爆
- 域
- 烈
- 暖
- 季
- 猜
- 佳
- 净
- 腿
- 磨
- 曲
- 虚
- 阵
- 荣
- 访
- 核
- 鲜
- 阶
- 镇
- 灯
- 估
- 剩
- 硬
- 租
- 敬
- 损
- 惜
- 挂
- 董
- 巨
- 忆
- 登
- 丈
- 帅
- 童
- 耳
- 央
- 软
- 移
- 略
- 额
- 厅
- 挥
- 透
- 络
- 弱
- 珍
- 恨
- 巧
- 丁
- 谋
- 孤
- 豆
- 诗
- 冒
- 狼
- 渐
- 峰
- 售
- 凡
- 聚
- 洞
- 抽
- 劝
- 闭
- 摆
- 冬
- 凶
- 魔
- 灭
- 雄
- 挣
- 搬
- 龄
- 朱
- 编
- 航
- 席
- 驾
- 授
- 鼓
- 握
- 隔
- 猪
- 仙
- 颜
- 镜
- 胖
- 赢
- 仇
- 晨
- 欺
- 刑
- 谷
- 旦
- 亏
- 盖
- 症
- 喊
- 蓝
- 讯
- 殿
- 梁
- 躲
- 旧
- 针
- 箱
- 丰
- 洲
- 鞋
- 征
- 蒙
- 伟
- 袋
- 庄
- 患
- 怨
- 佛
- 稍
- 朵
- 纳
- 吉
- 川
- 典
- 迹
- 瑞
- 废
- 搭
- 涨
- 汤
- 启
- 桌
- 摸
- 赔
- 宜
- 纯
- 贴
- 聪
- 熊
- 延
- 瓶
- 版
- 缘
- 距
- 甜
- 析
- 盛
- 孕
- 彻
- 桥
- 尚
- 染
- 撞
- 途
- 沟
- 疯
- 敏
- 瞧
- 漫
- 胆
- 诺
- 刷
- 饿
- 仍
- 喂
- 辞
- 迟
- 淡
- 郑
- 歉
- 扰
- 宾
- 圆
- 赞
- 肚
- 慧
- 泪
- 吹
- 拖
- 遭
- 穷
- 罚
- 悔
- 绿
- 忽
- 唉
- 毫
- 绩
- 暂
- 射
- 岛
- 拾
- 珠
- 欠
- 忠
- 陷
- 阴
- 尼
- 悲
- 糊
- 撤
- 徒
- 剑
- 币
- 娜
- 违
- 泡
- 仗
- 粮
- 培
- 趟
- 菲
- 拒
- 棒
- 脾
- 赏
- 窗
- 宇
- 闲
- 附
- 踏
- 彼
- 涉
- 锁
- 撒
- 魂
- 羊
- 述
- 屈
- 库
- 滚
- 凉
- 颗
- 寒
- 呐
- 墙
- 娃
- 序
- 迪
- 丹
- 扬
- 瞎
- 递
- 凤
- 碗
- 屁
- 锅
- 奔
- 幅
- 债
- 糖
- 奋
- 汇
- 圣
- 订
- 偶
- 残
- 宽
- 狂
- 鼠
- 狠
- 幕
- 固
- 竞
- 蜜
- 吐
- 摄
- 骑
- 篇
- 毁
- 尾
- 摇
- 奥
- 厚
- 妖
- 禁
- 逐
- 均
- 尸
- 冠
- 阅
- 辑
- 捕
- 载
- 郭
- 俺
- 诊
- 欲
- 扎
- 鸟
- 柔
- 迫
- 豪
- 踪
- 扔
- 碎
- 末
- 娶
- 扫
- 朕
- 励
- 乔
- 闺
- 档
- 厨
- 倍
- 湾
- 郎
- 幼
- 纷
- 奴
- 阻
- 饮
- 怒
- 妙
- 琴
- 曹
- 脏
- 牵
- 瓜
- 滴
- 炮
- 缓
- 含
- 献
- 柜
- 仔
- 艾
- 潜
- 赌
- 震
- 础
- 添
- 兔
- 焦
- 躺
- 森
- 肥
- 洪
- 孝
- 偿
- 悉
- 撑
- 甘
- 桃
- 苹
- 魏
- 鲁
- 池
- 狱
- 厌
- 纠
- 朗
- 贷
- 铺
- 殊
- 坦
- 爬
- 擦
- 酸
- 钢
- 咖
- 瞒
- 蛮
- 谅
- 耐
- 申
- 夸
- 欣
- 诶
- 驶
- 屏
- 烂
- 凌
- 甲
- 胎
- 仪
- 貌
- 番
- 涂
- 抬
- 舅
- 扯
- 鹿
- 摩
- 诸
- 秒
- 泽
- 埋
- 蒋
- 隆
- 赖
- 奸
- 咬
- 恢
- 宿
- 乖
- 邀
- 抵
- 臭
- 闪
- 莉
- 熬
- 链
- 盯
- 侦
- 灾
- 堆
- 灰
- 卷
- 盾
- 障
- 截
- 恰
- 佩
- 戒
- 莲
- 裁
- 芬
- 戚
- 匪
- 滑
- 趁
- 询
- 绑
- 辣
- 挖
- 俗
- 祸
- 符
- 扣
- 插
- 仁
- 壁
- 腰
- 斤
- 燕
- 筑
- 柱
- 夺
- 援
- 映
- 壮
- 杜
- 摔
- 润
- 恭
- 乌
- 慰
- 啡
- 著
- 井
- 跌
- 牢
- 荐
- 拔
- 惹
- 侯
- 玲
- 炎
- 胸
- 旗
- 牲
- 喽
- 涛
- 衡
- 矛
- 伍
- 贤
- 惨
- 糟
- 慌
- 伏
- 醉
- 仓
- 拆
- 乘
- 疾
- 鼻
- 潮
- 予
- 奉
- 伦
- 劫
- 伊
- 怜
- 孟
- 肺
- 忧
- 倾
- 矩
- 荒
- 奏
- 塔
- 塞
- 迅
- 轨
- 瞬
- 丫
- 狐
- 叛
- 繁
- 眠
- 孔
- 谱
- 悄
- 泰
- 姜
- 侵
- 妃
- 冯
- 柳
- 洛
- 岸
- 凯
- 陛
- 幺
- 仿
- 氏
- 窝
- 曼
- 挡
- 浩
- 盟
- 轩
- 牺
- 贫
- 绕
- 谎
- 措
- 扶
- 梯
- 炼
- 勤
- 霸
- 横
- 罢
- 呆
- 税
- 桂
- 哎
- 慕
- 植
- 允
- 荡
- 洁
- 肖
- 耗
- 贼
- 艰
- 贺
- 幻
- 饱
- 胃
- 袭
- 廷
- 泥
- 丧
- 缩
- 砸
- 姥
- 拦
- 扮
- 糕
- 肤
- 猴
- 脆
- 炒
- 耀
- 盗
- 邓
- 扩
- 纵
- 振
- 敲
- 鹏
- 姆
- 湿
- 丑
- 召
- 苗
- 伸
- 惑
- 碍
- 萨
- 瘦
- 闯
- 迁
- 坑
- 弯
- 卑
- 尖
- 遥
- 侠
- 犹
- 押
- 冤
- 钻
- 汗
- 闷
- 邻
- 淘
- 抛
- 妆
- 贾
- 侧
- 傲
- 描
- 耍
- 猛
- 薇
- 裤
- 憾
- 督
- 贸
- 墨
- 勒
- 薄
- 嘞
- 渡
- 紫
- 悟
- 锦
- 溜
- 逆
- 惠
- 辉
- 贪
- 圾
- 垃
- 券
- 燃
- 虫
- 悠
- 伪
- 尿
- 懒
- 俊
- 寄
- 歇
- 盒
- 潘
- 储
- 愈
- 脉
- 粗
- 返
- 昌
- 泉
- 蔡
- 愧
- 赤
- 岳
- 婷
- 猎
- 饼
- 肩
- 勾
- 巡
- 竹
- 催
- 陌
- 踩
- 促
- 扭
- 堵
- 酷
- 芳
- 逛
- 陵
- 耽
- 凑
- 寿
- 缝
- 剪
- 郁
- 宅
- 抚
- 筹
- 沿
- 烤
- 奈
- 挨
- 晋
- 崩
- 浮
- 阁
- 彭
- 裂
- 崇
- 眉
- 桑
- 辩
- 漏
- 稀
- 液
- 汪
- 袁
- 掩
- 浑
- 坡
- 晕
- 缠
- 仰
- 挤
- 睁
- 羽
- 岗
- 捡
- 墓
- 综
- 矿
- 妥
- 厕
- 辱
- 惧
- 逗
- 帽
- 寸
- 搁
- 跨
- 渴
- 饰
- 璃
- 琳
- 爽
- 愤
- 饶
- 卧
- 誓
- 滋
- 鉴
- 腐
- 鸭
- 蛇
- 妮
- 莱
- 哟
- 钥
- 甄
- 肠
- 畅
- 慎
- 悬
- 逻
- 胁
- 辰
- 呈
- 棋
- 寨
- 萌
- 覆
- 姚
- 津
- 笨
- 轰
- 乏
- 匙
- 摊
- 陶
- 恼
- 昏
- 抑
- 姿
- 愁
- 誉
- 椅
- 羞
- 澡
- 踢
- 晶
- 萧
- 箭
- 罩
- 宠
- 羡
- 亦
- 祥
- 串
- 昆
- 煮
- 疏
- 纹
- 泄
- 痕
- 喷
- 册
- 跃
- 卢
- 岩
- 跪
- 兽
- 桶
- 飘
- 漠
- 堪
- 哄
- 寂
- 崔
- 腹
- 癌
- 拳
- 驻
- 霍
- 拨
- 诞
- 捐
- 御
- 榜
- 唤
- 荷
- 径
- 署
- 锋
- 玛
- 匆
- 恒
- 吕
- 邮
- 圳
- 黎
- 掏
- 莎
- 寞
- 佐
- 诈
- 牧
- 盐
- 叹
- 尬
- 匹
- 狸
- 膀
- 谨
- 尘
- 驱
- 乳
- 晒
- 宴
- 辜
- 哲
- 铜
- 薪
- 盆
- 割
- 忌
- 旋
- 翼
- 哀
- 咨
- 遵
- 夹
- 侣
- 译
- 胞
- 浅
- 邦
- 俄
- 弗
- 豫
- 甭
- 乃
- 扛
- 杭
- 瓦
- 槽
- 污
- 尴
- 琢
- 枝
- 详
- 柴
- 佑
- 盼
- 抖
- 惩
- 捷
- 葬
- 贡
- 艳
- 塑
- 茫
- 叨
- 浓
- 拐
- 捉
- 憋
- 稿
- 苍
- 葛
- 扑
- 娱
- 赋
- 杆
- 绘
- 聆
- 肌
- 婴
- 摘
- 岂
- 呵
- 冻
- 泳
- 揭
- 坤
- 盈
- 毅
- 撕
- 娇
- 唠
- 宏
- 吊
- 籍
- 楠
- 肃
- 抹
- 玄
- 湘
- 迈
- 酱
- 骄
- 咐
- 扇
- 幽
- 疲
- 邪
- 吞
- 趋
- 尺
- 玻
- 溃
- 诱
- 翠
- 兼
- 辅
- 岭
- 栏
- 柏
- 址
- 寺
- 逢
- 琪
- 慈
- 愣
- 契
- 渠
- 齿
- 薛
- 拟
- 填
- 坛
- 抄
- 痴
- 绳
- 役
- 擅
- 晃
- 斌
- 愉
- 届
- 悦
- 旨
- 砍
- 弥
- 挽
- 肝
- 鸣
- 庙
- 烫
- 聘
- 皆
- 婶
- 舌
- 枉
- 赫
- 蓉
- 瞅
- 阔
- 俱
- 循
- 鸿
- 彪
- 伺
- 堡
- 谦
- 剂
- 洒
- 赴
- 妨
- 磊
- 嘱
- 蝶
- 兆
- 豹
- 绣
- 篮
- 锻
- 陕
- 霉
- 涵
- 疆
- 丸
- 蠢
- 铃
- 浙
- 庞
- 萝
- 泛
- 芝
- 煤
- 甩
- 氛
- 页
- 逸
- 袖
- 携
- 躁
- 夕
- 匠
- 蹈
- 坊
- 雾
- 蹲
- 颠
- 脂
- 塌
- 棵
- 鹰
- 澳
- 哇
- 筋
- 纽
- 脖
- 棉
- 渣
- 寡
- 践
- 侄
- 披
- 魅
- 虹
- 肿
- 胶
- 霞
- 罐
- 晴
- 拓
- 卿
- 耻
- 砖
- 宪
- 歪
- 兜
- 衰
- 捧
- 歹
- 雕
- 穆
- 栋
- 瑶
- 毙
- 衷
- 膜
- 囊
- 莹
- 垫
- 吻
- 嘟
- 舰
- 虾
- 壳
- 穴
- 勉
- 裙
- 旺
- 柯
- 磕
- 贩
- 腻
- 蹦
- 卜
- 茹
- 驴
- 臂
- 删
- 菌
- 妾
- 蜂
- 祭
- 菊
- 咸
- 淑
- 笼
- 涯
- 碧
- 宙
- 骚
- 皓
- 赐
- 晰
- 腔
- 龟
- 泼
- 鹅
- 啪
- 巾
- 炉
- 沾
- 醋
- 澜
- 朴
- 棍
- 伞
- 雀
- 赠
- 妞
- 淋
- 刮
- 汁
- 椒
- 埃
- 嚷
- 盲
- 窃
- 辽
- 贱
- 滩
- 昭
- 贯
- 珊
- 涌
- 辨
- 捞
- 仲
- 拘
- 碑
- 侍
- 剿
- 搅
- 狮
- 藤
- 旭
- 翅
- 滨
- 禀
- 遮
- 瑟
- 斩
- 攒
- 犬
- 挫
- 僧
- 吩
- 渊
- 蒂
- 萍
- 庸
- 蓄
- 鼎
- 咪
- 姬
- 溪
- 郡
- 镖
- 怡
- 杉
- 畏
- 瓷
- 枚
- 煎
- 劣
- 饺
- 妄
- 卓
- 蔽
- 蒸
- 垂
- 嘲
- 慨
- 谊
- 蹭
- 逮
- 锐
- 钉
- 舟
- 沃
- 凝
- 翔
- 颈
- 靖
- 灌
- 膊
- 崖
- 娟
- 胳
- 铭
- 灿
- 亭
- 粒
- 卸
- 咕
- 坎
- 攀
- 婿
- 奢
- 茂
- 趴
- 耿
- 捏
- 怖
- 浴
- 婉
- 煌
- 霖
- 揍
- 昂
- 驰
- 壶
- 械
- 卦
- 粥
- 尹
- 瘾
- 雇
- 翰
- 肆
- 寇
- 曦
- 厢
- 杠
- 屠
- 芒
- 谣
- 沫
- 掘
- 酬
- 讼
- 乾
- 玫
- 瑰
- 逊
- 惦
- 儒
- 肾
- 粹
- 愚
- 渔
- 暑
- 伐
- 潇
- 喘
- 敦
- 翁
- 斥
- 帖
- 纱
- 梳
- 缴
- 茅
- 谭
- 氧
- 遣
- 履
- 刹
- 枕
- 婢
- 徽
- 轿
- 寓
- 咽
- 叉
- 嗓
- 捣
- 裹
- 览
- 拯
- 疚
- 蜀
- 丛
- 框
- 斑
- 宵
- 郝
- 蛙
- 熙
- 祁
- 哑
- 葱
- 唇
- 韦
- 媛
- 魄
- 锤
- 绵
- 炫
- 吨
- 稻
- 碌
- 刊
- 漆
- 搏
- 讶
- 痒
- 枫
- 妒
- 冥
- 郊
- 爵
- 逝
- 栽
- 叠
- 蚁
- 裕
- 帕
- 剥
- 谐
- 巫
- 颇
- 娥
- 廊
- 蕾
- 丘
- 丞
- 葡
- 坠
- 鸦
- 糗
- 虐
- 唬
- 屎
- 顽
- 巷
- 硅
- 罕
- 殖
- 嘿
- 韵
- 歧
- 垮
- 淮
- 馈
- 昊
- 宰
- 钦
- 霜
- 兑
- 萄
- 塘
- 胀
- 樱
- 枯
- 咳
- 窑
- 募
- 缸
- 昧
- 仑
- 恕
- 氓
- 叮
- 吼
- 坟
- 轴
- 贞
- 赎
- 帆
- 嫩
- 蚂
- 僵
- 颖
- 噜
- 咒
- 琐
- 勃
- 芯
- 绸
- 哼
- 仨
- 挪
- 狡
- 禅
- 粘
- 雯
- 扒
- 恳
- 蔬
- 匈
- 钓
- 桐
- 菇
- 哒
- 稚
- 膏
- 纲
- 狄
- 硕
- 廉
- 衙
- 艘
- 廖
- 腊
- 蟹
- 邱
- 缉
- 曝
- 桩
- 啤
- 嫉
- 棚
- 矮
- 汰
- 衍
- 拽
- 削
- 彤
- 斜
- 揉
- 樊
- 馨
- 钩
- 浦
- 肢
- 敷
- 喻
- 鞭
- 瞪
- 耕
- 掐
- 屡
- 榴
- 勋
- 泊
- 竭
- 鹤
- 溢
- 淳
- 倩
- 驳
- 抠
- 捅
- 筒
- 窄
- 鄙
- 嗦
- 袍
- 劈
- 炖
- 裸
- 贬
- 敞
- 嘎
- 淹
- 耶
- 秩
- 舱
- 厦
- 叙
- 孽
- 筷
- 浇
- 饥
- 噩
- 蚊
- 兮
- 皱
- 侃
- 辟
- 弊
- 袜
- 吾
- 俘
- 芸
- 夷
- 芦
- 囚
- 倡
- 琦
- 哨
- 巢
- 烛
- 帐
- 燥
- 讽
- 俞
- 馅
- 柿
- 墅
- 妍
- 瘤
- 沦
- 衬
- 瑜
- 蒜
- 蛛
- 窟
- 勿
- 沛
- 磁
- 狭
- 栈
- 懵
- 酿
- 戈
- 邵
- 龚
- 衫
- 勺
- 哗
- 叽
- 畜
- 爪
- 惫
- 颁
- 浸
- 摧
- 勘
- 惕
- 蔓
- 馒
- 挠
- 陀
- 豁
- 帘
- 淀
- 藩
- 蜡
- 凳
- 蘑
- 琼
- 棺
- 蝴
- 骆
- 掰
- 枣
- 遂
- 飙
- 咧
- 掀
- 梨
- 杏
- 嗑
- 棠
- 绽
- 捆
- 舆
- 肇
- 葩
- 呦
- 膝
- 鹊
- 揣
- 瓣
- 靓
- 卵
- 鲍
- 炭
- 戳
- 颤
- 禄
- 菩
- 崛
- 驸
- 佣
- 眨
- 聂
- 乙
- 嘻
- 拧
- 喵
- 佟
- 靳
- 阎
- 拢
- 厘
- 凰
- 疤
- 螺
- 淇
- 涩
- 拎
- 嗨
- 魁
- 薯
- 歼
- 沪
- 筛
- 谍
- 揪
- 刁
- 秃
- 谜
- 撇
- 肪
- 绊
- 逞
- 滥
- 寝
- 麟
- 奕
- 侮
- 喉
- 柄
- 荆
- 撼
- 窦
- 姗
- 乞
- 艇
- 竖
- 剖
- 嗽
- 捂
- 腕
- 鸽
- 刃
- 弓
- 辙
- 粤
- 泣
- 梗
- 茄
- 茜
- 驼
- 冈
- 倔
- 啃
- 蹄
- 唧
- 祈
- 腺
- 焰
- 睿
- 崽
- A
- 苛
- 窍
- 凿
- 倭
- 骤
- 槛
- 碳
- 诏
- 芽
- 浆
- 隶
- 搂
- 睦
- 彬
- 岔
- 诀
- 嚼
- 掺
- 殷
- 吁
- 啰
- 侈
- 亩
- 纤
- 倦
- 揽
- 媚
- 潭
- 莽
- 赃
- 睹
- 脊
- 逍
- 淼
- 沸
- 峡
- 仆
- 眷
- 屯
- 璐
- 雁
- 澄
- 渗
- 咔
- 啸
- 怂
- 娄
- 惶
- 恍
- 锡
- 秉
- 猾
- 挟
- 舔
- 弦
- 阱
- 俭
- 嚣
- 搓
- 懈
- 诡
- 隙
- 苟
- 倘
- 瘫
- 扁
- 鑫
- 撩
- 蓬
- 铲
- 峥
- 巅
- 葫
- 膳
- 狙
- 晏
- 祠
- 峻
- 尉
- 毯
- 沧
- 熏
- 咯
- 株
- 沐
- 奎
- 锣
- 霄
- 彦
- 叭
- 臻
- 昔
- 灶
- 傍
- 腥
- 屑
- 禾
- 彰
- 冉
- 矫
- 滞
- 瘩
- 匀
- 椎
- 槐
- 岚
- 跷
- 剔
- 倪
- 盏
- 泌
- 灸
- 隧
- 函
- 壤
- 剃
- 蹊
- 葵
- 拌
- 琅
- 炳
- 跋
- 瑾
- 哩
- 蔷
- 鳌
- 莺
- 诵
- 疙
- 吱
- 蓓
- 绎
- 匿
- 铮
- 怼
- 踹
- 嗅
- 焚
- 躯
- 蝇
- 橘
- 祟
- 辖
- 砂
- 韧
- 粪
- 诬
- 擒
- 黏
- 衔
- 溺
- 蜘
- 篷
- 贿
- 闫
- 焕
- 邢
- 兹
- 窖
- 旬
- 铸
- 咚
- 惭
- 佬
- 裴
- 裳
- 犀
- 弘
- 莓
- 钏
- 鄂
- 陋
- 伽
- 鞠
- 氪
- 垒
- 窜
- 橙
- 讳
- 甥
- 淫
- 拱
- 袱
- 坨
- 暧
- 渺
- 蕉
- 晗
- 茬
- 盔
- 妓
- 蚕
- 僻
- 朽
- 呛
- 挚
- 擎
- 绅
- 喇
- 鳄
- 巩
- 蜗
- 遛
- 俯
- 汹
- 猩
- 奠
- 钙
- 悍
- 躬
- 菱
- 翘
- 琉
- 虏
- 凄
- 稼
- 炕
- 皂
- 漱
- 斋
- 撂
- 敛
- 阮
- 芭
- 阀
- 缚
- 懦
- 亨
- 螃
- 侥
- 膨
- 筝
- 惟
- 黛
- 眯
- 茨
- 怠
- 辐
- 捎
- 殴
- 桓
- 瞄
- 冀
- 雍
- 霾
- 酵
- 檬
- 哺
- 裔
- 兢
- 麒
- 烹
- 绒
- 丐
- 娅
- 钞
- 垄
- 笛
- 赣
- 蕊
- 暮
- 噪
- 沮
- 肋
- 庇
- 橡
- 摁
- 痘
- 棘
- 拂
- 绷
- 刨
- 晾
- 蹬
- 鸥
- 璇
- 掠
- 瘟
- 俐
- 糙
- 骏
- 牡
- 撵
- 嘘
- 沥
- 庶
- 赁
- 喧
- 涡
- 瞳
- 迭
- 肘
- 颂
- 珑
- 觅
- 埔
- G
- 跤
- 朔
- 詹
- 梭
- 暇
- 惺
- 甸
- 怯
- 聋
- 赦
- 屉
- 闸
- 坝
- 吟
- 凸
- 拴
- 堤
- 矣
- 斧
- 呸
- 啼
- 韬
- 钧
- 坞
- 纺
- 氢
- 嵩
- 镯
- 髓
- 檐
- 涕
- 剁
- 稽
- 烨
- 钮
- 闽
- 仕
- 驯
- 吭
- 漓
- 眸
- 鞅
- 枢
- 煞
- 昕
- 畔
- 疹
- 矶
- 呱
- 熄
- 吏
- 泻
- 拙
- 蛤
- 禽
- 甫
- 厮
- 乍
- 蝉
- 撬
- 嘀
- 衅
- 鲨
- 萱
- 霹
- 旷
- 辫
- 坷
- 眶
- 蟆
- 呜
- 猬
- 嬷
- 萎
- 靶
- 雳
- 煲
- 溯
- 蚀
- 狈
- 滤
- 恙
- 瑛
- 栓
- 嫣
- 碟
- 祷
- 驿
- 犊
- 灼
- 哆
- 宛
- 榨
- 寥
- 翟
- 栗
- 滔
- 馋
- 杖
- 茉
- 饲
- 庐
- 隋
- 旱
- 崎
- 颅
- 焉
- 墩
- 篱
- 晟
- 扳
- 咎
- 竿
- 僚
- 溶
- 俏
- 霆
- 堕
- 冕
- 叩
- 绰
- 洽
- 襄
- 蛊
- 缅
- 侨
- 伶
- 蕴
- 酥
- 坂
- 拇
- 庚
- 卒
- 诛
- 禧
- 瓢
- 锯
- 扉
- 饷
- 诅
- 烘
- 浏
- 痰
- 榆
- 窥
- 鲸
- 捋
- 戎
- 笋
- 璋
- 诫
- 珈
- 癫
- 囤
- 厥
- 癖
- 翩
- 芹
- 匣
- 噬
- 栖
- 蝎
- 锄
- 玺
- 疮
- 缕
- 猥
- 槿
- 蔑
- 汝
- 珂
- 撮
- 坪
- 蒲
- 倚
- 嗷
- 撰
- 荧
- 芙
- 豚
- 筱
- 敖
- 孵
- 猝
- D
- 弈
- 徊
- 辗
- 赘
- 徘
- 烙
- 娲
- 嚎
- 迢
- 绥
- 羁
- 屌
- 铅
- 澎
- S
- 嬛
- 晦
- 煽
- 逾
- 饵
- 虞
- 筐
- 哧
- 抒
- 醇
- 祀
- 瑕
- 岐
- 潼
- 惚
- C
- 苑
- 靡
- 菠
- 赡
- 惰
- 梓
- 铛
- 澈
- 莞
- 呕
- 驭
- 邝
- 砰
- 轼
- 窒
- 慷
- 绞
- 絮
- 虔
- 惮
- 柬
- 嗡
- 拣
- 羲
- 蹋
- 隘
- 帜
- 卤
- 雌
- 唾
- 邹
- 俑
- 碾
- 婪
- 咏
- 粟
- 崭
- 钝
- 彝
- 陡
- 谛
- 秤
- 磅
- 淌
- 炊
- 鲤
- 羹
- 殉
- 曰
- 萤
- 阐
- 鬟
- 拭
- T
- 沁
- 滇
- 梧
- 烁
- 瞻
- 淤
- 凹
- 撸
- 棕
- 腌
- 缪
- 祺
- 痊
- 忑
- 柠
- 矜
- 忐
- 讹
- 瀚
- 尧
- 昼
- 芊
- 憨
- 鳞
- 匮
- 鸳
- 鸯
- 湃
- 屿
- 馍
- 沽
- 栾
- 蝠
- 窘
- 绛
- 巍
- 悯
- 焊
- 谴
- 浊
- 娴
- 畴
- 湛
- 螂
- 韭
- 哮
- 拷
- 攥
- 凛
- 颓
- 恺
- 蝙
- 襟
- 粑
- 洼
- 笃
- 渝
- 骁
- 殃
- 酌
- 乒
- 臊
- 疵
- 诧
- 谬
- 锈
- 袄
- 膛
- 瘸
- 嫖
- 梢
- 沼
- 棱
- 嚓
- 耸
- 喳
- 舵
- 橱
- 涮
- 檀
- 瞩
- 腑
- 岑
- 痪
- 墟
- 蔚
- 捍
- 徙
- 棣
- 猖
- 掷
- 恬
- 嫦
- 噔
- 饪
- 掂
- 恤
- 叱
- 芷
- 弩
- 楷
- 镶
- 茧
- 诠
- 咙
- 匡
- 擂
- 亵
- 杞
- 乓
- 渤
- 藉
- 憔
- 渭
- 禹
- 睐
- 趾
- 抉
- 悴
- 忒
- 茸
- 纬
- 懊
- 浚
- 溅
- 遏
- 琛
- 靴
- 戮
- 翎
- 谕
- 濒
- 锵
- 嬉
- 籽
- 殆
- 叼
- 苔
- 灏
- 嗖
- 俪
- 亢
- 冶
- 嗜
- 磋
- 汀
- 讪
- 萃
- 菁
- 镑
- 紊
- 脯
- 缆
- 哉
- 赂
- 婊
- B
- 蕃
- 迄
- 蜓
- 舜
- 嚏
- 昱
- 黔
- 犟
- 汐
- 昵
- 嗣
- 唆
- 蛾
- 黯
- 绯
- 瀑
- 憬
- 狩
- 掖
- 崴
- 褪
- 髦
- 酝
- 弧
- 咄
- 吝
- 馄
- 娩
- 窿
- 蜻
- 袒
- 玮
- 阙
- 篡
- 邯
- 朦
- 邑
- 喃
- 粽
- 捶
- 嫔
- 钗
- 穗
- 骼
- 胭
- 寐
- 噎
- M
- 碱
- 荤
- 笙
- 矢
- 芥
- 廓
- 扼
- 厄
- 毋
- 糯
- 惋
- 纶
- 碜
- 胧
- 懿
- 偃
- 沏
- 痹
- 慑
- 鹦
- 娠
- 铐
- 绢
- 傀
- 孜
- 饨
- 儡
- 孰
- 焱
- 峭
- 伎
- 幌
- 椰
- 譬
- 藕
- 坍
- 铝
- 鞍
- 蘸
- 貂
- 猿
- 炙
- 琊
- 峙
- 硝
- 幂
- 钰
- 眩
- 亥
- 簇
- 鹉
- 睫
- 斟
- 簧
- 颐
- 薰
- 癞
- 祛
- 燎
- 缎
- 簸
- 咣
- 绚
- 簿
- 邋
- 嵌
- 肮
- 稷
- 辍
- 闵
- 枸
- 撅
- 曙
- 苇
- K
- 悼
- 汶
- 匕
- 皖
- 腮
- 琶
- 汲
- 鼹
- 礁
- 颊
- 怔
- 汕
- 喀
- 砌
- 釜
- 畸
- 鹃
- 峨
- 奄
- 骡
- 斐
- 芈
- 莘
- 蟑
- 荔
- 缇
- 犒
- 宓
- 汾
- 沌
- 宦
- 憧
- 咤
- 吆
- 攘
- 漩
- 梵
- 阂
- 吒
- 芜
- 缔
- 秧
- 翊
- 晌
- 剐
- 蜕
- 芋
- 彷
- 牟
- 诲
- 臀
- 徨
- Q
- 杵
- 荫
- 榄
- 蹿
- 豌
- 迂
- 琵
- 拗
- 帷
- 楞
- 嘶
- 橄
- 胺
- 圭
- 砚
- 藻
- 凋
- 啄
- 褒
- 嗝
- 殡
- 嫡
- 恃
- 濡
- 缜
- 孺
- 泸
- 妊
- 衩
- 驹
- 榻
- 腆
- 鹂
- 箍
- 璧
- 熔
- 悚
- 遢
- 弛
- 诋
- 羚
- 鹭
- 嘚
- 骸
- 瘪
- 铠
- 瞿
- 屹
- 邸
- 痨
- 辘
- 浒
- 忏
- 钊
- 潦
- 怅
- 肴
- 蚯
- 胚
- 茵
- 蚓
- 戬
- 瘀
- 翡
- 恪
- 卉
- 蝌
- 雏
- 祯
- 谏
- 蚪
- 钵
- 馊
- 嗒
- 犁
- 寅
- V
- 锥
- 娼
- 晖
- 啬
- 纣
- 淆
- 丙
- 夯
- 竣
- 褚
- 褥
- 轧
- 氨
- 褂
- 钳
- 轲
- 竺
- 疡
- 淞
- 胤
- 摹
- 鳅
- 珀
- 偕
- 匾
- 觑
- 扈
- 傣
- 绫
- 枷
- 阑
- 柚
- 烊
- 怦
- 腼
- 珺
- 缀
- 裘
- 碉
- 峪
- 俸
- 羯
- 姊
- 疟
- 砺
- 盎
- 嘣
- 釉
- 溥
- 熠
- 垢
- 摞
- 哽
- 槟
- 囧
- 胰
- 遁
- 痞
- 熹
- 忡
- 稠
- 顷
- 瑚
- 卯
- 渎
- 炅
- 褶
- 烽
- 瞑
- 嘈
- 硫
- 壹
- 悖
- 酪
- 跺
- 阜
- 帛
- 漪
- 蝗
- 迦
- 蟒
- 咀
- 谤
- 睬
- 辕
- 绮
- 搀
- 裆
- 鳖
- 囡
- 羔
- 痣
- 滕
- 佘
- 樟
- 韶
- 霓
- 劾
- 赈
- 唏
- 闰
- 脐
- 沓
- 瓮
- 篓
- 笠
- 暄
- 涅
- 诽
- 洱
- 栅
- 蚱
- 囔
- 攸
- 酣
- 阪
- 榕
- 骇
- 婧
- 陨
- 憎
- 沂
- 磷
- 壕
- 醺
- 惬
- 璀
- 璨
- 喋
- P
- 炽
- 瘁
- 羿
- 褐
- 簪
- 冽
- 驮
- 芮
- 辄
- 咆
- 渍
- 觐
- 炷
- 蛰
- 驷
- 帚
- 蜷
- O
- X
- 邂
- 逅
- 缭
- 秽
- 琰
- 龌
- 龊
- 俨
- 涟
- 噼
- 掇
- 哔
- 炬
- 佯
- 粱
- 霁
- 鱿
- 夭
- 擀
- 陇
- 瞥
- 壑
- 盹
- 馁
- 蚌
- 焖
- 蛟
- 囱
- 蚝
- 抿
- 脓
- 蒿
- 飓
- 渲
- 宸
- 酗
- 荻
- 缥
- 弑
- 偎
- 宕
- 耘
- 瞌
- 瘴
- 溉
- 涝
- 咿
- 垛
- 垦
- 缈
- 苞
- 惆
- 汛
- 鹑
- 町
- 抡
- 慵
- 浣
- 耙
- 砥
- 噱
- 孬
- 札
- 弼
- 酋
- 镳
- 萦
- 泾
- 挞
- 钾
- 讷
- 圃
- 舶
- 穹
- 戾
- 汴
- 锂
- 昀
- 镀
- 眺
- 捺
- 猕
- 阚
- 骋
- 悸
- 蜚
- 咩
- 讥
- 篆
- 鸠
- 哐
- 锚
- 幢
- 翱
- 螳
- 徇
- 踞
- 蔗
- 蔼
- 漉
- 衲
- N
- 漳
- 枭
- 漾
- 歆
- 烬
- 曳
- 岌
- 孚
- 戛
- 呲
- 箫
- 娓
- 桨
- 涓
- 獭
- 芃
- 摒
- 戍
- 踝
- 轱
- 沱
- 锢
- 堰
- 抨
- 昙
- 鹌
- 蔻
- 迸
- 泯
- 龈
- 痔
- 骛
- 淄
- 泵
- 烯
- 蔫
- F
- 胥
- 忱
- 纫
- 搪
- 茎
- 暨
- 泞
- 踵
- 璞
- 佗
- 荃
- 鬓
- 蚣
- 罔
- 臆
- 贻
- 橇
- 麓
- 槌
- 琥
- I
- 纥
- 薅
- 樵
- 苓
- 熨
- 钨
- 骞
- 诣
- 涤
- 踊
- 醛
- 碴
- 蹴
- 缤
- 赊
- 岖
- 戊
- 禺
- 坯
- 戟
- 楂
- 隅
- 酶
- 邃
- 蛀
- 皎
- 炯
- 垣
- 锹
- 镰
- 夙
- 甬
- 叵
- 茁
- 珞
- 妲
- 涸
- 兀
- 嘤
- 谙
- 噗
- 榔
- 稣
- 剽
- 奚
- 啕
- 袅
- 讧
- 钠
- 怄
- 晤
- 肛
- 氰
- 迥
- 唰
- 诩
- 籁
- 砒
- 谩
- 诟
- 斓
- 泷
- 幡
- 爻
- 痫
- 眈
- 漕
- 惘
- 挎
- 噶
- 喱
- 氯
- U
- 跆
- 嗤
- 锏
- 睽
- 缮
- 蟋
- 蠕
- 扪
- 狞
- 飒
- 吮
- 弋
- 奘
- 蟠
- 梆
- 拈
- 帧
- 蟀
- 胯
- 掳
- 蝈
- 帼
- 瞰
- 嵇
- 阉
- 篝
- 笆
- 亘
- L
- 喔
- 愕
- 谚
- 轶
- 岱
- 丕
- 婕
- 羌
- 毡
- 呻
- 鼾
- 蜥
- 偌
- 庵
- 敝
- 蛐
- 麝
- 鞘
- 拮
- 涣
- 葆
- 雹
- 踌
- 蜈
- 馥
- 跻
- 狰
- 桀
- 毗
- 皿
- 缨
- 磐
- 啾
- 牒
- 缰
- 躇
- 踮
- 糠
- 嗲
- 刽
- 咫
- 殇
- 瀛
- 胱
- 炀
- 虱
- 砾
- 獒
- 涎
- 袤
- 鄱
- 瓯
- 锭
- 塾
- 蹉
- 珏
- 豺
- 锌
- 蜿
- 牦
- 瓒
- 莆
- 蜴
- 氮
- 跎
- 咛
- 骜
- 郸
- 搐
- 堑
- 涞
- 寰
- 跛
- 鸵
- 毂
- 妩
- 铤
- 薏
- 烩
- 遐
- 煦
- 仃
- 髅
- 酮
- 榷
- 腋
- 珩
- 臃
- 愫
- 蜒
- 荼
- 侬
- 淬
- 婵
- 偻
- 焯
- 骊
- 恻
- 濮
- 泱
- 庖
- 惴
- 鲫
- 硌
- 肓
- 芪
- 礴
- 磺
- 腱
- 冢
- 谪
- 骷
- 哏
- 腩
- 蓦
- 焙
- 桢
- 阖
- 睾
- 疱
- 郴
- 铿
- 铡
- 祉
- 跄
- 桦
- 椭
- 拄
- 皙
- 膈
- 裱
- 髋
- 伢
- 罹
- 鳍
- 赝
- 嬴
- 痤
- 藿
- 镐
- 铎
- 瘠
- 簌
- 杳
- 铢
- 阡
- 忤
- 舀
- 悻
- 媲
- 茗
- 湍
- 舫
- 瘙
- 瞟
- 擞
- 荀
- 刍
- J
- 潍
- 莴
- 斛
- 郦
- 栩
- 绾
- 蕙
- 黜
- 湄
- 藓
- 躏
- 锱
- 捻
- 佼
- 砝
- E
- 罡
- 忻
- 鹜
- 滟
- 傥
- 蛳
- W
- 铀
- 魇
- 觎
- 蹂
- 佞
- 诃
- 灞
- 镣
- 痱
- 侏
- 峦
- 榛
- 饽
- 龋
- 嗔
- 芍
- 椿
- 璎
- 渥
- 蟾
- 骰
- 吠
- 挛
- 倜
- 鳝
- 糜
- 噢
- 黝
- 藐
- 绡
- 掣
- 鳗
- 璜
- 犷
- 痉
- 膺
- 罄
- 阄
- 纨
- 纭
- 彗
- 嵘
- 埠
- 潢
- 桔
- 耷
- 逵
- 诓
- 怵
- 蚤
- 苯
- 邈
- 谑
- 颌
- 珐
- 踱
- 髻
- 倏
- 啷
- 篑
- 冗
- 蹶
- 荥
- 涧
- 镂
- 踉
- 呷
- 衢
- 荟
- 箴
- 桧
- 恿
- 坳
- 瑙
- 珅
- 莅
- 膘
- 宥
- 氟
- 秆
- 诙
- 蹑
- 茴
- 翳
- 渚
- H
- 唁
- 诿
- 窈
- 窕
- 膻
- 荨
- 蛔
- 筵
- 钛
- 獾
- 琏
- 箩
- 栀
- 隼
- 煸
- 罂
- 蛎
- 咂
- 谗
- 颦
- 佝
- 苣
- 搡
- 仄
- 垠
- 濂
- 泗
- 亟
- 蔺
- 蛆
- 霏
- 榈
- 裟
- 瑁
- 酚
- 蝼
- 怆
- 犄
- 沣
- 揖
- 斡
- 刎
- 鲟
- 峒
- 瞭
- 晁
- 袈
- 蓟
- 镁
- 骥
- 掸
- 玳
- 娑
- 馀
- 跚
- 槃
- 缄
- 猢
- 粕
- 隍
- 佃
- 獗
- 唢
- 菏
- 酰
- 腚
- 笈
- 哙
- 孢
- 飕
- 嘹
- 茱
- 蹒
- 殓
- 柩
- 谀
- 姣
- 戌
- 柑
- 粼
- 淅
- 啧
- 盅
- 鼬
- 啜
- 绉
- 咻
- 锲
- 铆
- Y
- 螨
- 茯
- 憩
- 臼
- 谄
- 讴
- 濠
- 雎
- 噻
- 淦
- 懋
- 尕
- 氦
- 褛
- 颉
- 喆
- 铬
- 褴
- 燮
- 銮
- 侗
- 蹙
- 煜
- 邺
- 锃
- 麋
- 矗
- 娆
- 匐
- 噌
- 潸
- 碘
- 浔
- 檄
- 皈
- 铂
- 遨
- 炜
- 曜
- 饴
- 舷
- 胫
- 叟
- 祎
- 沅
- 潺
- 楣
- 埂
- 瞠
- 幔
- 稞
- 抻
- 匝
- 幄
- 殒
- 瑭
- 袂
- 囫
- 瓴
- 攫
- 鲈
- 箔
- 哝
- 馗
- 蜍
- 痧
- 脘
- 姘
- 苒
- 缢
- 觞
- 蛹
- 饬
- 胄
- 筏
- 鸾
- 儆
- 痿
- 矬
- 酊
- 纾
- 铖
- 荏
- 掬
- 膑
- 贮
- 觊
- 囵
- 泓
- 搔
- 汞
- 蚩
- 婀
- 谧
- 恣
- 霎
- 饕
- 赅
- 鲶
- 梏
- 獠
- 俶
- 龛
- 桅
- 鹄
- 旌
- 鲲
- 姒
- 蠡
- 繇
- 祜
- 诨
- 汩
- 觥
- 孀
- R
- 谥
- 蕨
- 祐
- 榭
- 皑
- 纂
- 獐
- 覃
- 痂
- 孑
- 砧
- 圩
- 桎
- 啵
- 葚
- 嗫
- 浃
- 荠
- 阈
- 遴
- 枇
- 狒
- 秸
- 筠
- 硒
- 卞
- 玷
- 杈
- 狲
- 忿
- 俎
- 拚
- 颍
- 睢
- 颧
- 滦
- 霭
- 雉
- 毽
- 蓑
- 歙
- 鳃
- 鹬
- 墉
- 楔
- 舐
- 绔
- 弭
- 馏
- 挝
- 奂
- 嘭
- 忪
- 箕
- 诌
- 谒
- 颚
- 滂
- 醍
- 洵
- 鹫
- 虢
- 苋
- 玥
- 臾
- 蹩
- Z
- 杷
- 痍
- 酉
- 疸
- 鄢
- 垩
- 烷
- 湮
- 钎
- 樽
- 旮
- 葭
- 邬
- 缱
- 糍
- 亳
- 咦
- 苷
- 伉
- 隽
- 伫
- 聒
- 匍
- 飚
- 桠
- 睑
- 脍
- 焘
- 谶
- 赳
- 萸
- 讣
- 疽
- 臧
- 巽
- 毓
- 鸢
- 纰
- 啐
- 噙
- 舛
- 敕
- 醐
- 痢
- 嚯
- 婺
- 勖
- 岷
- 溧
- 骅
- 犸
- 麾
- 嗟
- 诘
- 懑
- 貔
- 貅
- 啉
- 崂
- 鸩
- 镭
- 绻
- 逑
- 煨
- 褓
- 姝
- 藜
- 溟
- 儋
- 谡
- 欸
- 郢
- 荚
- 疝
- 遽
- 陂
- 饯
- 孪
- 巳
- 荞
- 泔
- 岿
- 谆
- 镍
- 洙
- 佻
- 盂
- 睨
- 铄
- 餮
- 酯
- 癣
- 浜
- 酩
- 焗
- 挲
- 鬃
- 鲠
- 仞
- 诰
- 谔
- 胛
- 萼
- 涿
- 莠
- 珲
- 旯
- 蜢
- 黍
- 肽
- 涪
- 髡
- 氙
- 陉
- 鬶
- 侩
- 糅
- 氤
- 芾
- 砷
- 鳕
- 钣
- 锒
- 闱
- 铵
- 镊
- 玑
- 砀
- 癜
- 颔
- 楹
- 螈
- 醚
- 琮
- 铩
- 笄
- 瓤
- 裨
- 潋
- 悌
- 聿
- 祢
- 郜
- 汨
- 棂
- 氲
- 嶙
- 聩
- 菅
- 腧
- 妯
- 龇
- 谲
- 耄
- 耋
- 囿
- 黢
- 揄
- 鲇
- 仝
- 個
- 忖
- 峋
- 揶
- 迩
- 诳
- 踽
- 骐
- 趸
- 颞
- 撺
- 辇
- 猷
- 铉
- 羸
- 徜
- 徉
- 襁
- 镌
- 孱
- 钒
- 铣
- 呤
- 遑
- 俾
- 皋
- 笕
- 笺
- 趔
- 趄
- 辋
- 鄞
- 殚
- 岫
- 跬
- 嘌
- 苻
- 绶
- 郅
- 瑄
- 萋
- 蘼
- 湎
- 砣
- 钜
- 捭
- 喹
- 恹
- 娌
- 螯
- 锰
- 祚
- 阆
- 矾
- 厩
- 龅
- 炝
- 黠
- 妁
- 濑
- 鞑
- 柒
- 滁
- 淖
- 鸬
- 鬣
- 晔
- 恸
- 赓
- 侉
- 溏
- 還
- 珮
- 鸨
- 嚅
- 笤
- 靥
- 啮
- 滓
- 俚
- 唳
- 苜
- 蓿
- 鹚
- 耦
- 莜
- 麸
- 粳
- 綦
- 盱
- 噤
- 遒
- 玟
- 魍
- 魉
- 旖
- 栉
- 锷
- 醴
- 泮
- 恁
- 甾
- 琬
- 丶
- 擤
- 桉
- 踟
- 誊
- 谟
- 澧
- 玖
- 畿
- 顼
- 兖
- 贰
- 茏
- 愎
- 豇
- 旎
- 蹰
- 蜃
- 屐
- 芡
- 鎏
- 癸
- 卅
- 枥
- 陟
- 琨
- 粝
- 掮
- 妪
- 姹
- 鏖
- 捯
- 钴
- 竽
- 恽
- 佰
- 胗
- 崧
- 磴
- 绺
- 鳏
- 槁
- 啖
- 矍
- 徕
- 忾
- 烃
- 喏
- 囹
- 圄
- 砭
- 邕
- 犍
- 鸮
- 剜
- 琚
- 瘢
- 魑
- 眦
- 锉
- 柘
- 痦
- 苕
- 牯
- 湟
- 厝
- 濛
- 赭
- 馐
- 蜇
- 嶂
- 贲
- 靼
- 臬
- 陲
- 潞
- 芩
- 腓
- 锨
- 寮
- 於
- 洇
- 愠
- 疖
- 鹧
- 鸪
- 茕
- 戕
- 壬
- 庾
- 莒
- 鹈
- 鹕
- 蠹
- 勐
- 疥
- 辎
- 耒
- 嗬
- 沔
- 睥
- 邙
- 篾
- 揩
- 肱
- 胍
- 磬
- 菟
- 豢
- 垓
- 唑
- 剌
- 阗
- 汜
- 佤
- 璟
- 麽
- 鬻
- 怏
- 蕤
- 茭
- 睚
- 淙
- 牍
- 榫
- 濯
- 稹
- 媾
- 悱
- 骶
- 蛭
- 鞣
- 椁
- 槊
- 擢
- 滢
- 佚
- 菡
- 沭
- 扦
- 镆
- 闾
- 缛
- 窠
- 疣
- 骠
- 俅
- 喙
- 蹼
- 硼
- 黩
- 腴
- 醮
- 邛
- 漯
- 豉
- 昶
- 刿
- 凇
- 鲅
- 舸
- 邳
- 俟
- 铰
- 翌
- 鳟
- 葳
- 寤
- 碣
- 秭
- 揠
- 熵
- 燧
- 靛
- 嵊
- 窨
- 鹗
- 芎
- 颢
- 佶
- 骢
- 圜
- 岘
- 燊
- 壅
- 畲
- 萘
- 煊
- 粲
- 倌
- 嗳
- 橹
- 椽
- 夔
- 鲑
- 赧
- 殄
- 沆
- 瀣
- 廪
- 舢
- 狍
- 挈
- 鹳
- 蚜
- 彧
- 羟
- 盥
- 镛
- 痈
- 蜊
- 皲
- 篦
- 喑
- 鲢
- 邡
- 蕲
- 僳
- 秣
- 蛉
- 讫
- 祗
- 鹩
- 撷
- 狎
- 郓
- 镕
- 榉
- 鲷
- 娣
- 淝
- 桷
- 镉
- 郫
- 髌
- 醪
- 僭
- 伧
- 嵬
- 苁
- 鹘
- 徭
- 歃
- 阕
- 鸱
- 貉
- 闳
- 坻
- 缙
- 媪
- 莨
- 菪
- 绦
- 恫
- 崆
- 喟
- 葺
- 逶
- 迤
- 骈
- 馔
- 苎
- 溘
- 垭
- 樯
- 诤
- 魃
- 搽
- 绀
- 蚴
- 澶
- 蒺
- 罘
- 眙
- 怍
- 來
- 荪
- 贶
- 亓
- 唻
- 畈
- 谌
- 芨
- 鲀
- 窸
- 窣
- 荜
- 楫
- 衮
- 趵
- 勰
- 髯
- 椴
- 缶
- 荸
- 秫
- 菖
- 甙
- 翦
- 椟
- 峤
- 掼
- 謇
- 洄
- 鄯
- 妗
- 浐
- 颀
- 箸
- 畦
- 痼
- 橛
- 鲛
- 蝾
- 愍
- 蒹
- 嘁
- 韪
- 劭
- 垅
- 暹
- 僮
- 稗
- 筚
- 煅
- 嬅
- 蜉
- 骝
- 碚
- 冼
- 吶
- 洹
- 郧
- 炴
- 绌
- 泠
- 呓
- 簋
- 溴
- 篁
- 仟
- 锟
- 羧
- 鹞
- 嘬
- 渌
- 笸
- 霰
- 稔
- 钡
- 齁
- 胪
- 衾
- 尻
- 洮
- 蘅
- 鲳
- 殂
- 腭
- 涔
- 蝣
- 孳
- 澍
- 钼
- 蒡
- 枳
- 渑
- 茼
- 馕
- 埙
- 珣
- 菘
- 邰
- 樾
- 铱
- 鳐
- 唔
- 篙
- 箜
- 篌
- 耆
- 啫
- 枞
- 杼
- 嵋
- 舂
- 娉
- 铨
- 崃
- 笳
- 邗
- 逡
- 僖
- 泫
- 疴
- 捱
- 醅
- 堇
- 肄
- 荇
- 虬
- 谯
- 酞
- 桡
- 艮
- 膦
- 艹
- 啻
- 滏
- 茆
- 圪
- 磡
- 麼
- 闼
- 郯
- 仡
- 氐
- 贽
- 俦
- 蓖
- 跹
- 帏
- 氅
- 趿
- 暝
- 缟
- 棹
- 滹
- 毖
- 蝰
- 虻
- 缫
- 诮
- 闩
- ○
- 潴
- 樨
- 瘘
- 襦
- 妤
- 郾
- 衿
- 鸷
- 旰
- 镢
- 傈
- 倨
- 笏
- 蒽
- 醌
- 驽
- 浠
- 涠
- 蓁
- 柞
- 钺
- 蜮
- 诂
- 徵
- 锆
- 椋
- 叻
- 廿
- 藁
- 乜
- 摈
- 這
- 茌
- 辊
- 岬
- 郇
- 杓
- 轳
- 酎
- 蟥
- 時
- 镒
- 蚬
- 澹
- 赟
- 後
- 怿
- 箐
- 囍
- 揆
- 蹁
- 鬄
- 苫
- 蕖
- 卺
- 辔
- 偈
- 俳
- 吲
- 哚
- 瘆
- 蕞
- 笞
- 氩
- 嫘
- 墁
- 帔
- 褡
- 裢
- 乩
- 褊
- 颏
- 喒
- 錾
- 皌
- 戗
- 唪
- 啭
- 伥
- 茔
- 斫
- 齉
- 仵
- 赉
- 吡
- 啶
- 蹇
- 螅
- 汊
- 湓
- 凫
- 珙
- 腈
- 洌
- Ω
- 憷
- 跶
- 抔
- 濞
- 崤
- 殍
- 浥
- 铳
- 酽
- 馑
- 髂
- 隗
- 韫
- 晷
- 诒
- 埭
- 鹪
- 蕻
- 昃
- 瓠
- 萁
- 癔
- 怩
- 疳
- 跖
- 疔
- 簟
- 汆
- 疠
- 卟
- 墒
- 穰
- 铍
- 珥
- 钤
- 隻
- 樓
- 墎
- 鳜
- 沒
- 岀
- 杪
- 単
- 鲧
- 呋
- 彀
- 祇
- 豸
- 胴
- 唷
- 丨
- 燚
- 麴
- 觇
- 缑
- 橐
- 蚡
- 朊
- 俣
- 垡
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
use_preprocessor_valid: false
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_utt_prefix: null
rir_apply_prob: 1.0
noise_scp: null
noise_utt_prefix: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: true
```
</details>
## LM config
<details><summary>expand</summary>
```
NONE
```
</details>
|
Tuana/eigenfaces-sklearn-lfw
|
Tuana
| 2021-10-27T01:53:23Z | 0 | 1 | null |
[
"joblib",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Model to Recognize Faces using eigenfaces and scikit-learn
A simple model trained on a preprocessed excerpt of the “Labeled Faces in the Wild” dataset, aka [LFW](http://vis-www.cs.umass.edu/lfw/).
This demo was taken from [Scikit-learn](https://scikit-learn.org/stable/auto_examples/applications/plot_face_recognition.html)
The dataset includes 7 classes (individuals):

|
pritoms/gpt2-finetuned-python2
|
pritoms
| 2021-10-26T23:15:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-python2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-python2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9454
## Model description
More information needed
## Intended uses & limitations
More information needed
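No usage example is provided in the card; a minimal hedged sketch with the standard `transformers` text-generation pipeline (the repository id is taken from the header above, and the prompt is illustrative only) would be:
```python
from transformers import pipeline

# Hedged usage sketch; prompt and decoding settings are illustrative only.
generator = pipeline("text-generation", model="pritoms/gpt2-finetuned-python2")
print(generator("def fibonacci(n):", max_length=64, num_return_sequences=1)[0]["generated_text"])
```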
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 25 | 2.0135 |
| No log | 2.0 | 50 | 1.9618 |
| No log | 3.0 | 75 | 1.9454 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
asapp/sew-small-100k
|
asapp
| 2021-10-26T19:39:52Z | 25 | 0 |
transformers
|
[
"transformers",
"pytorch",
"sew",
"feature-extraction",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2109.06870",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# SEW-small
[SEW by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWForCTC`.
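For plain feature extraction (this repository's pipeline tag), a minimal hedged sketch is shown below; it assumes a `transformers` release that already includes the SEW architecture (≥ 4.12).
```python
import torch
from transformers import AutoFeatureExtractor, SEWModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-small-100k")
model = SEWModel.from_pretrained("asapp/sew-small-100k")

# One second of silent dummy audio; replace with real speech sampled at 16 kHz.
waveform = torch.zeros(16000)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```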
|
huggingartists/arctic-monkeys
|
huggingartists
| 2021-10-26T17:28:49Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/arctic-monkeys",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/arctic-monkeys
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/12c27f4fbb06ef32dc1c1e432098f447.570x570x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Arctic Monkeys</div>
<a href="https://genius.com/artists/arctic-monkeys">
<div style="text-align: center; font-size: 14px;">@arctic-monkeys</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Arctic Monkeys.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/arctic-monkeys).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/arctic-monkeys")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1x4ii6qz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Arctic Monkeys's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/bmnqvn53) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/bmnqvn53/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/arctic-monkeys')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/arctic-monkeys")
model = AutoModelWithLMHead.from_pretrained("huggingartists/arctic-monkeys")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
chandank/bart-base-finetuned-kaggglenews
|
chandank
| 2021-10-26T16:04:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-base-finetuned-kaggglenews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6240
- Rouge1: 28.3618
- Rouge2: 15.9828
- Rougel: 24.078
- Rougelsum: 25.565
- Gen Len: 20.0
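The card gives no usage snippet; a minimal hedged sketch of summarization with this checkpoint (treating it like any other BART summarizer) is:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="chandank/bart-base-finetuned-kaggglenews")
article = "Replace this placeholder with the news article you want to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```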
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| 1.9433 | 1.0 | 989 | 1.6240 | 28.3618 | 15.9828 | 24.078 | 25.565 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu102
- Datasets 1.14.0
- Tokenizers 0.10.3
|
mujerry/bert-base-uncased-finetuned-QnA-v1
|
mujerry
| 2021-10-26T09:19:02Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-QnA-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-QnA-v1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7610
## Model description
More information needed
## Intended uses & limitations
More information needed
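No usage example is given; since the pipeline tag of this repository is fill-mask, a minimal hedged sketch (the example sentence is illustrative only) is:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="mujerry/bert-base-uncased-finetuned-QnA-v1")
for prediction in unmasker("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```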
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 39 | 3.3668 |
| No log | 2.0 | 78 | 3.2134 |
| No log | 3.0 | 117 | 3.1685 |
| No log | 4.0 | 156 | 3.1042 |
| No log | 5.0 | 195 | 3.1136 |
| No log | 6.0 | 234 | 2.9051 |
| No log | 7.0 | 273 | 2.9077 |
| No log | 8.0 | 312 | 2.9774 |
| No log | 9.0 | 351 | 2.9321 |
| No log | 10.0 | 390 | 2.9501 |
| No log | 11.0 | 429 | 2.8544 |
| No log | 12.0 | 468 | 2.8761 |
| 3.0255 | 13.0 | 507 | 2.8152 |
| 3.0255 | 14.0 | 546 | 2.8046 |
| 3.0255 | 15.0 | 585 | 2.6979 |
| 3.0255 | 16.0 | 624 | 2.6379 |
| 3.0255 | 17.0 | 663 | 2.7091 |
| 3.0255 | 18.0 | 702 | 2.6914 |
| 3.0255 | 19.0 | 741 | 2.7403 |
| 3.0255 | 20.0 | 780 | 2.7479 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
owen99630/catexp2
|
owen99630
| 2021-10-26T04:58:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
The model classifies text into one of the following 25 categories (label index to category name):
{0: 'Anorexia',
1: 'Anxiety',
2: 'Bullying',
3: 'Care',
4: 'Creativity',
5: 'Culture',
6: 'Depression',
7: 'Friends',
8: 'Getting help',
9: 'Happiness',
10: 'Helping others',
11: 'Helping yourself',
12: 'Hope',
13: 'Learning',
14: 'Life Issues',
15: 'Mental Health',
16: 'Mental Health Matters',
17: 'Mental health awareness',
18: 'PTSD',
19: 'Positivity',
20: 'Resilience',
21: 'Self-care',
22: 'Sharing',
23: 'Support',
24: 'University'}
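A hedged usage sketch with the text-classification pipeline (this repository's pipeline tag) could look like the following; the example sentence is illustrative only.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="owen99630/catexp2", return_all_scores=True)
scores = classifier("Talking to my friends really helped me feel less anxious.")[0]
# Print the three highest-scoring categories from the 25-label mapping above.
print(sorted(scores, key=lambda s: s["score"], reverse=True)[:3])
```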
|
huggingtweets/theonion
|
huggingtweets
| 2021-10-26T04:42:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://www.huggingtweets.com/theonion/1635223358201/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/875392068125769732/yrN-1k0Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Onion</div>
<div style="text-align: center; font-size: 14px;">@theonion</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Onion.
| Data | The Onion |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 2 |
| Short tweets | 10 |
| Tweets kept | 3238 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tl5cqc3f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theonion's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y8p1w9v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y8p1w9v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theonion')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
espnet/siddhana_slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
|
espnet
| 2021-10-25T23:23:39Z | 1 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slurp",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- slurp
license: cc-by-4.0
---
## ESPnet2 SLU pretrained model
### `siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
♻️ Imported from https://zenodo.org/record/5590384
This model was trained by siddhana using the slurp/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
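Until the official demo above is published, one plausible but unverified approach, given that the SLURP recipe builds on ESPnet2's `asr1` pipeline, is the generic `Speech2Text` interface:
```python
# Unverified sketch: assumes espnet_model_zoo can resolve this tag and that the
# SLU model decodes intent/slot tokens with the standard ASR inference class.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/siddhana_slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best"
)
speech, rate = soundfile.read("utterance.wav")
text, *_ = speech2text(speech)[0]
print(text)  # expected to contain the intent/slot annotation plus the transcript
```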
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kwang2049/TSDAE-scidocs
|
kwang2049
| 2021-10-25T16:19:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# kwang2049/TSDAE-scidocs
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on scidocs in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on scidocs with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
kwang2049/TSDAE-twitterpara
|
kwang2049
| 2021-10-25T16:18:44Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# kwang2049/TSDAE-twitterpara
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
kwang2049/TSDAE-askubuntu
|
kwang2049
| 2021-10-25T16:17:47Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# kwang2049/TSDAE-askubuntu
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|