repo_id (stringlengths 4-110) | author (stringlengths 2-27, ⌀) | model_type (stringlengths 2-29, ⌀) | files_per_repo (int64 2-15.4k) | downloads_30d (int64 0-19.9M) | library (stringlengths 2-37, ⌀) | likes (int64 0-4.34k) | pipeline (stringlengths 5-30, ⌀) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (stringlengths 2-30) | languages (stringlengths 4-1.63k, ⌀) | datasets (stringlengths 2-2.58k, ⌀) | co2 (stringclasses, 29 values) | prs_count (int64 0-125) | prs_open (int64 0-120) | prs_merged (int64 0-15) | prs_closed (int64 0-28) | discussions_count (int64 0-218) | discussions_open (int64 0-148) | discussions_closed (int64 0-70) | tags (stringlengths 2-513) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64 401-598k) | is_nc (bool, 1 class) | readme (stringlengths 0-598k) | hash (stringlengths 32-32) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
alkiskoudounas/sd-butterflies-64px | alkiskoudounas | null | 6 | 2 | diffusers | 0 | unconditional-image-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class'] | false | true | true | 529 | false |
# Model Card for Stable Diffusion - Butterflies, 64px
Model developed for the Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class).
This model is a diffusion model for unconditional image generation of cute butterflies 🦋.
It was trained for 30 epochs on a very small collection of 1,000 pictures.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alkiskoudounas/sd-butterflies-64px')
image = pipeline().images[0]
image
```
| c0312552c82b562e5dd697ecc7ded993 |
google/multiberts-seed_4-step_1900k | google | bert | 8 | 14 | transformers | 0 | null | true | true | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['multiberts', 'multiberts-seed_4', 'multiberts-seed_4-step_1900k'] | false | true | true | 3,527 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 1900k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 1900k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_1900k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_1900k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_1900k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
| 810bb0c75c89a845ba973490474a252e |
sentence-transformers/facebook-dpr-question_encoder-single-nq-base | sentence-transformers | bert | 13 | 906 | sentence-transformers | 0 | sentence-similarity | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] | false | true | true | 2,745 | false |
# sentence-transformers/facebook-dpr-question_encoder-single-nq-base
This is a port of the [DPR Model](https://github.com/facebookresearch/DPR) to a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/facebook-dpr-question_encoder-single-nq-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/facebook-dpr-question_encoder-single-nq-base')
model = AutoModel.from_pretrained('sentence-transformers/facebook-dpr-question_encoder-single-nq-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/facebook-dpr-question_encoder-single-nq-base)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 509, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
Have a look at: [DPR Model](https://github.com/facebookresearch/DPR) | dc15a7ef4120d408b6763e6e87cf232c |
techiaith/wav2vec2-xlsr-ft-cy | techiaith | wav2vec2 | 13 | 2,968 | transformers | 3 | automatic-speech-recognition | true | false | false | apache-2.0 | ['cy'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'ken-lm', 'robust-speech-event', 'speech'] | true | true | true | 1,871 | false |
# wav2vec2-xlsr-ft-cy
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Welsh Common Voice version 11 dataset](https://commonvoice.mozilla.org/cy/datasets).
Source code and scripts for training the acoustic and KenLM language models, as well as examples of inference for transcription and of a self-hosted API service, can be found at [https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy](https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy).
## Usage
The wav2vec2-xlsr-ft-cy (acoustic) model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("techiaith/wav2vec2-xlsr-ft-cy")
model = Wav2Vec2ForCTC.from_pretrained("techiaith/wav2vec2-xlsr-ft-cy")
audio_file = "speech.wav"  # placeholder: path to your own recording
audio, rate = librosa.load(audio_file, sr=16000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
# greedy decoding
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```
## Using the Language Model
See https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/releases/tag/22.10 for more details and examples of using KenLM with the Parlance PyTorch CTC decode bindings library: [https://github.com/parlance/ctcdecode](https://github.com/parlance/ctcdecode)
## Evaluation
On the Welsh Common Voice version 11 test set, the WER of techiaith/wav2vec2-xlsr-ft-cy on its own is **6.04%**.
When assisted by the KenLM language model, the same test set produces a WER of **4.05%**.
See: https://github.com/techiaith/docker-wav2vec2-xlsr-ft-cy/blob/main/train/python/evaluate.py
| dc8e390468afe068a5b2577551aa1969 |
Maltehb/danish-bert-botxo | Maltehb | bert | 13 | 5,694 | transformers | 3 | fill-mask | true | true | true | cc-by-4.0 | ['da'] | ['common_crawl', 'wikipedia', 'dindebat.dk', 'hestenettet.dk', 'danishOpenSubtitles'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['danish', 'bert', 'masked-lm', 'Certainly'] | false | true | true | 1,055 | false |
# Danish BERT (version 2, uncased) by [Certainly](https://certainly.io/) (previously known as BotXO).
All credit goes to [Certainly](https://certainly.io/) (previously known as BotXO), who developed Danish BERT. For data and training details see their [GitHub repository](https://github.com/certainlyio/nordic_bert) or [this article](https://www.certainly.io/blog/danish-bert-model/). You can also visit their [organization page](https://huggingface.co/Certainly) on Hugging Face.
It is available in both TensorFlow and PyTorch formats.
The original TensorFlow version can be downloaded using [this link](https://www.dropbox.com/s/19cjaoqvv2jicq9/danish_bert_uncased_v2.zip?dl=1).
Here is an example of how to load Danish BERT in PyTorch using the [🤗Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("Maltehb/danish-bert-botxo")
model = AutoModelForPreTraining.from_pretrained("Maltehb/danish-bert-botxo")
```
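As a quick sanity check, the model can also be used through the fill-mask pipeline. This is our own sketch, not from the original card; it assumes the standard `[MASK]` token, and the Danish example sentence is illustrative:
```python
from transformers import pipeline

# Load the Danish BERT checkpoint behind a fill-mask pipeline
fill_mask = pipeline("fill-mask", model="Maltehb/danish-bert-botxo")

# "Copenhagen is [MASK] in Denmark."
print(fill_mask("København er [MASK] i Danmark."))
```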
| 2ea93cff1bc963c29707ec2333c6f9ab |
sberbank-ai/mGPT | sberbank-ai | gpt2 | 7 | 8,519 | transformers | 98 | text-generation | true | false | false | apache-2.0 | ['en', 'az', 'sw', 'af', 'ar', 'ba', 'be', 'bxr', 'bg', 'bn', 'cv', 'hy', 'da', 'de', 'el', 'es', 'eu', 'fa', 'fi', 'fr', 'he', 'hi', 'hu', 'kk', 'id', 'it', 'ja', 'ka', 'ky', 'ko', 'lt', 'lv', 'mn', 'ml', 'os', 'mr', 'ms', 'my', 'nl', 'ro', 'pl', 'pt', 'sah', 'ru', 'tg', 'sv', 'ta', 'te', 'tk', 'th', 'tr', 'tl', 'tt', 'tyv', 'uk', 'en', 'ur', 'vi', 'uz', 'yo', 'zh', 'xal'] | ['mc4', 'wikipedia'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['multilingual', 'PyTorch', 'Transformers', 'gpt3', 'gpt2', 'Deepspeed', 'Megatron'] | false | true | true | 3,172 | false |
# Multilingual GPT model
We introduce a family of autoregressive GPT-like models with 1.3 billion parameters trained on 60 languages from 25 language families using Wikipedia and Colossal Clean Crawled Corpus.
We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allow us to parallelize training and inference effectively. The resulting models perform on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages.
## Code
The source code for the mGPT XL model is available on [GitHub](https://github.com/sberbank-ai/mgpt).
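The checkpoint can also be loaded directly with 🤗 Transformers. Below is a minimal generation sketch (our own example, not from the original release; the prompt and sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sberbank-ai/mGPT")
model = AutoModelForCausalLM.from_pretrained("sberbank-ai/mGPT")

# Prompts can be written in any of the 60 supported languages.
inputs = tokenizer("The history of the Silk Road begins", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```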
## Paper
mGPT: Few-Shot Learners Go Multilingual
[Abstract](https://arxiv.org/abs/2204.07580) [PDF](https://arxiv.org/pdf/2204.07580.pdf)

```
@misc{https://doi.org/10.48550/arxiv.2204.07580,
doi = {10.48550/ARXIV.2204.07580},
url = {https://arxiv.org/abs/2204.07580},
author = {Shliazhko, Oleh and Fenogenova, Alena and Tikhonova, Maria and Mikhailov, Vladislav and Kozlova, Anastasia and Shavrina, Tatiana},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2; I.2.7, 68-06, 68-04, 68T50, 68T01},
title = {mGPT: Few-Shot Learners Go Multilingual},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## Languages
Model supports 60 languages:
ISO codes:
```az, sw, af, ar, ba, be, bxr, bg, bn, cv, hy, da, de, el, es, eu, fa, fi, fr, he, hi, hu, kk, id, it, ja, ka, ky, ko, lt, lv, mn, ml, os, mr, ms, my, nl, ro, pl, pt, sah, ru, tg, sv, ta, te, tk, th, tr, tl, tt, tyv, uk, en, ur, vi, uz, yo, zh, xal```
Languages:
```Afrikaans, Azerbaijani, Belarusian, Bengali, Chuvash, German, English, Basque, Finnish, Hebrew (modern), Hungarian, Indonesian, Japanese, Kazakh, Kirghiz, Kyrgyz, Latvian, Mongolian, Malay, Dutch, Polish, Romanian, Moldavan, Yakut, Swahili, Telugu, Thai, Turkish, Tuvinian, Urdu, Vietnamese, Yoruba, Arabic, Bashkir, Bulgarian, Buriat, Danish, Greek, Modern, Spanish; Castilian, Persian, French, Hindi, Armenian, Italian, Georgian, Korean, Lithuanian, Malayalam, Marathi, Burmese, Ossetian, Ossetic, Portuguese, Russian, Swedish, Tamil, Tajik, Turkmen, Tatar, Ukrainian, Uzbek, Kalmyk, Chinese```
## Training Data Statistics
- Size: 488 Billion UTF characters
<img style="text-align:center; display:block;" src="https://huggingface.co/sberbank-ai/mGPT/resolve/main/stats.png">
"General training corpus statistics"
## Details
The model was trained with sequence length 512 using Megatron and Deepspeed libs by [SberDevices](https://sberdevices.ru/) team on a dataset of 600 GB of texts in 60 languages. The model has seen 440 billion BPE tokens in total.
Total training time was around 12 days on 256 Nvidia V100 GPUs.
| 3ab3caf4740255c734a61b02c3468f84 |
Helsinki-NLP/opus-mt-af-fr | Helsinki-NLP | marian | 10 | 55 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 768 | false |
### opus-mt-af-fr
* source languages: af
* target languages: fr
* OPUS readme: [af-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.eval.txt)
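As a usage sketch (not part of the original OPUS-MT release notes), the checkpoint can be loaded with the Marian classes in 🤗 Transformers; the Afrikaans example sentence is our own:
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-af-fr")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-af-fr")

# Translate Afrikaans to French
batch = tokenizer(["Ek is lief vir tale."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```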
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.fr | 35.3 | 0.543 |
| 9b17b204cd64abb2b02d01493163455d |
kadirnar/bsrgan | kadirnar | null | 3 | 0 | null | 0 | null | false | false | false | apache-2.0 | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['Super-Resolution', 'computer-vision', 'bsrgan', 'gan'] | false | true | true | 933 | false |
### Model Description
[BSRGAN: Designing a Practical Degradation Model for Deep Blind Image Super-Resolution](https://arxiv.org/abs/2103.14006)
[BSRGAN-Pip: Packaged version of the BSRGAN repository](https://github.com/kadirnar/bsrgan-pip/)
[Paper Repo: Implementation of paper - BSRGAN](https://github.com/cszn/BSRGAN)
### Installation
```
pip install bsrgan
```
### BSRGAN Usage
```python
from bsrgan import BSRGAN
model = BSRGAN(weights='kadirnar/bsrgan', device='cuda:0', hf_model=True)
model.save = True
pred = model.predict(img_path='data/image/test.png')
```
### BibTeX Entry and Citation Info
```
@inproceedings{zhang2021designing,
title={Designing a Practical Degradation Model for Deep Blind Image Super-Resolution},
author={Zhang, Kai and Liang, Jingyun and Van Gool, Luc and Timofte, Radu},
booktitle={IEEE International Conference on Computer Vision},
pages={4791--4800},
year={2021}
}
``` | 48f6eb8dc9acbea1d527d205fbf82085 |
s87204/distilbert-base-uncased-finetuned-cola | s87204 | distilbert | 12 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,572 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8505
- Matthews Correlation: 0.5365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
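For reference, a hedged sketch of how the settings above map onto 🤗 `TrainingArguments` (the output directory is a placeholder, and dataset loading plus the `Trainer` call are omitted):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```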
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5201 | 1.0 | 535 | 0.5345 | 0.4153 |
| 0.3469 | 2.0 | 1070 | 0.5033 | 0.5109 |
| 0.2367 | 3.0 | 1605 | 0.6589 | 0.5209 |
| 0.1705 | 4.0 | 2140 | 0.7778 | 0.5354 |
| 0.125 | 5.0 | 2675 | 0.8505 | 0.5365 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| f77dbb161c6a1dc76985145f2215c060 |
microsoft/tapex-base | microsoft | bart | 8 | 809 | transformers | 10 | table-question-answering | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['tapex', 'table-question-answering'] | false | true | true | 2,701 | false |
# TAPEX (base-sized model)
TAPEX was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found [here](https://github.com/microsoft/Table-Pretraining).
## Model description
TAPEX (**Ta**ble **P**re-training via **Ex**ecution) is a conceptually simple and empirically powerful pre-training approach to empower existing models with *table reasoning* skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries.
TAPEX is based on the BART architecture, the transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
## Intended Uses
You can use the raw model for simulating neural SQL execution, i.e., employ TAPEX to execute a SQL query on a given table. However, the model is mostly meant to be fine-tuned on a supervised dataset. Currently TAPEX can be fine-tuned to tackle table question answering tasks and table fact verification tasks. See the [model hub](https://huggingface.co/models?search=tapex) to look for fine-tuned versions on a task that interests you.
### How to Use
Here is how to use this model in transformers:
```python
from transformers import TapexTokenizer, BartForConditionalGeneration
import pandas as pd
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-base")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-base")
data = {
"year": [1896, 1900, 1904, 2004, 2008, 2012],
"city": ["athens", "paris", "st. louis", "athens", "beijing", "london"]
}
table = pd.DataFrame.from_dict(data)
# tapex accepts uncased input since it is pre-trained on the uncased corpus
query = "select year where city = beijing"
encoding = tokenizer(table=table, query=query, return_tensors="pt")
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['2008']
```
### How to Fine-tune
Please find the fine-tuning script [here](https://github.com/SivilTaram/transformers/tree/add_tapex_bis/examples/research_projects/tapex).
### BibTeX entry and citation info
```bibtex
@inproceedings{
liu2022tapex,
title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=O50443AsCP}
}
``` | b93f1d56b445abae8be6da0b55d702df |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_sst2_128 | gokuls | mobilebert | 17 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,861 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_sst2_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4833
- Accuracy: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5233 | 1.0 | 8748 | 0.5730 | 0.8291 |
| 0.3614 | 2.0 | 17496 | 0.5357 | 0.8394 |
| 0.3019 | 3.0 | 26244 | 0.5166 | 0.8509 |
| 0.268 | 4.0 | 34992 | 0.5172 | 0.8509 |
| 0.2465 | 5.0 | 43740 | 0.4833 | 0.8544 |
| 0.2313 | 6.0 | 52488 | 0.5422 | 0.8463 |
| 0.2201 | 7.0 | 61236 | 0.5778 | 0.8303 |
| 0.2113 | 8.0 | 69984 | 0.5364 | 0.8417 |
| 0.204 | 9.0 | 78732 | 0.5428 | 0.8314 |
| 0.198 | 10.0 | 87480 | 0.5442 | 0.8337 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 99b8057fa20dec1e4e65370dd058c0fa |
dxiao/bert-finetuned-ner-100percent | dxiao | bert | 12 | 18 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,526 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-100percent
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5711
- Precision: 0.8227
- Recall: 0.8498
- F1: 0.8360
- Accuracy: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.5329 | 0.8228 | 0.8438 | 0.8332 | 0.9277 |
| No log | 2.0 | 150 | 0.5674 | 0.8110 | 0.8438 | 0.8271 | 0.9242 |
| No log | 3.0 | 225 | 0.5711 | 0.8227 | 0.8498 | 0.8360 | 0.9254 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
| 9ef5227855bdc1c02f7adbba1e0205af |
SiriRRR/test-model | SiriRRR | bart | 4 | 4 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 828 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test-model
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
| cf57aeae7a8deb82c94f8a9feda3d2e3 |
anas-awadalla/t5-small-few-shot-k-32-finetuned-squad-seed-4 | anas-awadalla | t5 | 15 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 963 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| 2fd2e72adc4ab9d2032e8327a6e66727 |
hisaoka/bart-large-cnn_radiology-ai-cardiothoracic-0.9 | hisaoka | bart | 11 | 5 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,052 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn_radiology-ai-cardiothoracic-0.9
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
| a276a3795f8c06049c98e46778d1ff8f |
Yidhar/dream | Yidhar | null | 9 | 0 | null | 0 | null | false | false | false | openrail | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 893 | false |




| caef81d2034bd95472df2e8ab7c30482 |
db0/microworlds | db0 | null | 4 | 0 | null | 6 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 1,088 | false |
This is a fork of the Microworlds model provided in [Public Prompts](https://publicprompts.art/microworlds-dreambooth-model/) for easier download and integration into services, as allowed by the CreativeML OpenRAIL-M license
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 95c3cce9b857bc33a5a6173a6c4838b3 |
johntang/finetuning-sentiment-model-3000-samples | johntang | distilbert | 10 | 9 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,049 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3426
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| bb6fe34d1f38eafa42b7de98741623bd |
vendorabc/modeltest | vendorabc | null | 5 | 0 | sklearn | 0 | tabular-classification | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['sklearn', 'skops', 'tabular-classification'] | false | true | true | 19,191 | false |
# Model description
This is a HistGradientBoostingClassifier model trained on the breast cancer dataset. It is trained with Halving Grid Search Cross-Validation, with parameter grids on max_leaf_nodes and max_depth.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|---------------------------------|----------------------------------------------------------|
| aggressive_elimination | False |
| cv | 5 |
| error_score | nan |
| estimator__categorical_features | |
| estimator__early_stopping | auto |
| estimator__l2_regularization | 0.0 |
| estimator__learning_rate | 0.1 |
| estimator__loss | auto |
| estimator__max_bins | 255 |
| estimator__max_depth | |
| estimator__max_iter | 100 |
| estimator__max_leaf_nodes | 31 |
| estimator__min_samples_leaf | 20 |
| estimator__monotonic_cst | |
| estimator__n_iter_no_change | 10 |
| estimator__random_state | |
| estimator__scoring | loss |
| estimator__tol | 1e-07 |
| estimator__validation_fraction | 0.1 |
| estimator__verbose | 0 |
| estimator__warm_start | False |
| estimator | HistGradientBoostingClassifier() |
| factor | 3 |
| max_resources | auto |
| min_resources | exhaust |
| n_jobs | -1 |
| param_grid | {'max_leaf_nodes': [5, 10, 15], 'max_depth': [2, 5, 10]} |
| random_state | 42 |
| refit | True |
| resource | n_samples |
| return_train_score | True |
| scoring | |
| verbose | 0 |
</details>
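For reference, here is a sketch of how such a search could be set up on the scikit-learn breast cancer dataset (our own reconstruction from the table above, not the original training script):
```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.experimental import enable_halving_search_cv  # noqa: F401, enables HalvingGridSearchCV
from sklearn.model_selection import HalvingGridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

search = HalvingGridSearchCV(
    estimator=HistGradientBoostingClassifier(),
    param_grid={"max_leaf_nodes": [5, 10, 15], "max_depth": [2, 5, 10]},
    random_state=42,
    n_jobs=-1,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```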
### Model Plot
The interactive scikit-learn HTML diagram is not reproduced here; the fitted estimator is:
HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1, param_grid={'max_depth': [2, 5, 10], 'max_leaf_nodes': [5, 10, 15]}, random_state=42)
## Evaluation Results
You can find the details about evaluation process and the evaluation results.
| Metric | Value |
|----------|----------|
| accuracy | 0.959064 |
| f1 score | 0.959064 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle

# 'pkl_filename' is a placeholder: set it to the path of the downloaded model pickle.
with open(pkl_filename, 'rb') as file:
    clf = pickle.load(file)
```
</details>
# Model Card Authors
This model card is written by following authors:
skops_user
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```
# Additional Content
## Confusion matrix

## Hyperparameter search results
<details>
<summary> Click to expand </summary>
| iter | n_resources | mean_fit_time | std_fit_time | mean_score_time | std_score_time | param_max_depth | param_max_leaf_nodes | params | split0_test_score | split1_test_score | split2_test_score | split3_test_score | split4_test_score | mean_test_score | std_test_score | rank_test_score | split0_train_score | split1_train_score | split2_train_score | split3_train_score | split4_train_score | mean_train_score | std_train_score |
|--------|---------------|-----------------|----------------|-------------------|------------------|-------------------|------------------------|-----------------------------------------|---------------------|---------------------|---------------------|---------------------|---------------------|-------------------|------------------|-------------------|----------------------|----------------------|----------------------|----------------------|----------------------|--------------------|-------------------|
| 0 | 44 | 0.0498069 | 0.0107112 | 0.0121156 | 0.0061838 | 2 | 5 | {'max_depth': 2, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0492636 | 0.0187271 | 0.00738611 | 0.00245441 | 2 | 10 | {'max_depth': 2, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0572055 | 0.0153176 | 0.0111395 | 0.0010297 | 2 | 15 | {'max_depth': 2, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0498482 | 0.0177091 | 0.00857358 | 0.00415935 | 5 | 5 | {'max_depth': 5, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0500658 | 0.00992094 | 0.00998321 | 0.00527031 | 5 | 10 | {'max_depth': 5, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0525903 | 0.0151616 | 0.00874681 | 0.00462998 | 5 | 15 | {'max_depth': 5, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0512018 | 0.0130152 | 0.00881834 | 0.00500514 | 10 | 5 | {'max_depth': 10, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0566921 | 0.0186051 | 0.00513492 | 0.000498488 | 10 | 10 | {'max_depth': 10, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.060587 | 0.04041 | 0.00987453 | 0.00529624 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 1 | 132 | 0.232459 | 0.0479878 | 0.0145514 | 0.00856422 | 10 | 5 | {'max_depth': 10, 'max_leaf_nodes': 5} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1 | 132 | 0.272297 | 0.0228833 | 0.011561 | 0.0068272 | 10 | 10 | {'max_depth': 10, 'max_leaf_nodes': 10} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1 | 132 | 0.239161 | 0.0330412 | 0.0116591 | 0.003554 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 2 | 396 | 0.920334 | 0.18198 | 0.0166654 | 0.00776263 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.962025 | 0.911392 | 0.987342 | 0.974359 | 0.935897 | 0.954203 | 0.0273257 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
</details>
## Classification report
<details>
<summary> Click to expand </summary>
| index | precision | recall | f1-score | support |
|--------------|-------------|----------|------------|-----------|
| malignant | 0.951613 | 0.936508 | 0.944 | 63 |
| benign | 0.963303 | 0.972222 | 0.967742 | 108 |
| macro avg | 0.957458 | 0.954365 | 0.955871 | 171 |
| weighted avg | 0.958996 | 0.959064 | 0.958995 | 171 |
</details> | 79c2152cc812a97d8963da8b6bcbb71c |
MaCoCu/XLMR-BERTovski | MaCoCu | xlm-roberta | 10 | 21 | transformers | 0 | feature-extraction | true | true | true | cc0-1.0 | ['bg', 'mk', 'multilingual'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['BERTovski', 'MaCoCu'] | false | true | true | 5,510 | false |
# Model description
**XLMR-BERTovski** is a large pre-trained language model trained on Bulgarian and Macedonian texts. It was created by continuing training from the [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) model. It was developed as part of the [MaCoCu](https://macocu.eu/) project. The main developer is [Rik van Noord](https://www.rikvannoord.nl/) from the University of Groningen.
XLMR-BERTovski was trained on 74GB of Bulgarian and Macedonian text, which is equal to just over 7 billion tokens. It was trained for 67,500 steps with a batch size of 1,024, which was approximately 2.5 epochs. It uses the same vocabulary as the original XLMR-large model. The model is trained on the same data as [BERTovski](https://huggingface.co/RVN/BERTovski), but this model was trained from scratch using the RoBERTa architecture.
The training and fine-tuning procedures are described in detail on our [Github repo](https://github.com/macocu/LanguageModels).
# How to use
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel
tokenizer = AutoTokenizer.from_pretrained("RVN/XLMR-BERTovski")
model = AutoModel.from_pretrained("RVN/XLMR-BERTovski") # PyTorch
model = TFAutoModel.from_pretrained("RVN/XLMR-BERTovski") # Tensorflow
```
# Data
For training, we used all Bulgarian and Macedonian data that was present in the [MaCoCu](https://macocu.eu/), Oscar, mc4 and Wikipedia corpora. In a manual analysis we found that for Oscar and mc4, if the data did not come from the corresponding domain (.bg or .mk), it was often (badly) machine translated. Therefore, we opted to only use data that originally came from a .bg or .mk domain.
After de-duplicating the data, we were left with a total of 54.5 GB of Bulgarian and 9 GB of Macedonian text. Since there was quite a bit more Bulgarian data, we simply doubled the Macedonian data during training.
# Benchmark performance
We tested the performance of XLMR-BERTovski on XPOS, UPOS and NER benchmarks. For Bulgarian, we used the data from the [Universal Dependencies](https://universaldependencies.org/) project. For Macedonian, we used the data sets created in the [babushka-bench](https://github.com/clarinsi/babushka-bench/) project. We also tested on a Google (Bulgarian) and human (Macedonian) translated version of the COPA data set (for details see our [Github repo](https://github.com/RikVN/COPA)). We compare performance to [BERTovski](https://huggingface.co/RVN/BERTovski) and the strong multi-lingual models XLMR-base and XLMR-large. For details regarding the fine-tuning procedure you can check out our [Github](https://github.com/macocu/LanguageModels).
Scores are averages of three runs, except for COPA, for which we use 10 runs. We use the same hyperparameter settings for all models for UPOS/XPOS/NER; for COPA we optimized the learning rate on the dev set.
## Bulgarian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 99.2 | 99.4 | 98.0 | 98.3 | 93.2 | 92.9 | 56.9 |
| **XLM-R-large** | 99.3 | 99.4 | 97.4 | 97.7 | 93.7 | 93.5 | 53.1 |
| **BERTovski** | 98.8 | 99.1 | 97.6 | 97.8 | 93.5 | 93.3 | 51.7 |
| **XLMR-BERTovski** | 99.3 | 99.5 | 98.5 | 98.8 | 94.4 | 94.3 | 54.6 |
## Macedonian
| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **NER** | **NER** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:-------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 98.3 | 98.6 | 97.3 | 97.1 | 92.8 | 94.8 | 55.3 |
| **XLM-R-large** | 98.3 | 98.7 | 97.7 | 97.5 | 93.3 | 95.1 | 52.5 |
| **BERTovski** | 97.8 | 98.1 | 96.4 | 96.0 | 92.8 | 94.6 | 51.8 |
| **XLMR-BERTovski** | 98.6 | 98.8 | 98.0 | 97.7 | 94.4 | 96.3 | 55.6|
# Acknowledgements
Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). The authors received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341 (MaCoCu).
# Citation
If you use this model, please cite the following paper:
```bibtex
@inproceedings{non-etal-2022-macocu,
title = "{M}a{C}o{C}u: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages",
author = "Ba{\~n}{\'o}n, Marta and
Espl{\`a}-Gomis, Miquel and
Forcada, Mikel L. and
Garc{\'\i}a-Romero, Cristian and
Kuzman, Taja and
Ljube{\v{s}}i{\'c}, Nikola and
van Noord, Rik and
Sempere, Leopoldo Pla and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Rupnik, Peter and
Suchomel, V{\'\i}t and
Toral, Antonio and
van der Werff, Tobias and
Zaragoza, Jaume",
booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
month = jun,
year = "2022",
address = "Ghent, Belgium",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2022.eamt-1.41",
pages = "303--304"
}
``` | b0c981cd52746401d75b2331c5d45c96 |
anas-awadalla/roberta-base-few-shot-k-64-finetuned-squad-seed-0 | anas-awadalla | roberta | 17 | 5 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 985 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-64-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 1df1ba0eff9b1f234e22b5a7089cfdb3 |
coldfir3/xlm-roberta-base-finetuned-panx-en | coldfir3 | xlm-roberta | 9 | 12 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3925
- F1: 0.7075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1493 | 1.0 | 50 | 0.5884 | 0.4748 |
| 0.5135 | 2.0 | 100 | 0.4088 | 0.6623 |
| 0.3558 | 3.0 | 150 | 0.3925 | 0.7075 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 0bdd233591d24353cfa2274b4aa0ac6e |
Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa | Intel | bert | 8 | 10 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | ['en'] | ['wikipedia', 'bookcorpus'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['fill-mask'] | false | true | true | 456 | false |
# 80% 1x4 Block Sparse BERT-Base (uncased) Prune OFA
This model was created using the Prune OFA method described in [Prune Once for All: Sparse Pre-Trained Language Models](https://arxiv.org/abs/2111.05754), presented at the ENLSP NeurIPS Workshop 2021.
For further details on the model and its results, see our paper and our implementation available [here](https://github.com/IntelLabs/Model-Compression-Research-Package/tree/main/research/prune-once-for-all).
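The card does not include a usage snippet; a minimal way to load the checkpoint with 🤗 Transformers (our own sketch) is:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa")
model = AutoModelForMaskedLM.from_pretrained("Intel/bert-base-uncased-sparse-80-1x4-block-pruneofa")
```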
| 2d7a5c2037c582eed264d9c5f9c6804c |
scasutt/Prototype_training_large_model | scasutt | wav2vec2 | 7 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,380 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Prototype_training_large_model
This model is a fine-tuned version of [scasutt/Prototype_training_large_model](https://huggingface.co/scasutt/Prototype_training_large_model) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2585
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.0545 | 1.47 | 100 | 3.2604 | 1.0 |
| 3.0413 | 2.93 | 200 | 3.2585 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| 012d84d8dcf15feb701ef3d364103d48 |
muhtasham/bert-small-finetuned-cuad-longer | muhtasham | bert | 12 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['cuad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,661 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finetuned-cuad-longer
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the cuad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
## Model description
More information needed
## Intended uses & limitations
More information needed
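A minimal extractive-QA sketch (not from the original card), assuming the checkpoint keeps a standard span-prediction head; the contract clause below is invented rather than taken from CUAD.
```python
from transformers import pipeline

# Hedged sketch: question and context are illustrative contract text.
qa = pipeline("question-answering", model="muhtasham/bert-small-finetuned-cuad-longer")
result = qa(
    question="Which state's law governs this agreement?",
    context="This Agreement shall be governed by and construed in accordance with the laws of the State of Delaware.",
)
print(result["answer"], result["score"])
```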
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0601 | 1.0 | 8702 | 0.0404 |
| 0.0551 | 2.0 | 17404 | 0.0394 |
| 0.0428 | 3.0 | 26106 | 0.0481 |
| 0.0375 | 4.0 | 34808 | 0.0425 |
| 0.0403 | 5.0 | 43510 | 0.0551 |
| 0.0246 | 6.0 | 52212 | 0.0588 |
| 0.0284 | 7.0 | 60914 | 0.0557 |
| 0.0303 | 8.0 | 69616 | 0.0543 |
| 0.0239 | 9.0 | 78318 | 0.0634 |
| 0.0207 | 10.0 | 87020 | 0.0617 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 923f09efefab314107b467ad24ec3f55 |
kadirnar/yolox_tiny-v0.1.1 | kadirnar | null | 3 | 0 | null | 0 | object-detection | false | false | false | apache-2.0 | null | ['detection-datasets/coco'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['object-detection', 'computer-vision', 'yolox', 'yolov3', 'yolov5'] | false | true | true | 1,203 | false |
### Model Description
[YOLOX](https://arxiv.org/abs/2107.08430) is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported.
[YOLOXDetect-Pip](https://github.com/kadirnar/yolox-pip/): This repo is a packaged version of the [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) for easy installation and use.
[Paper Repo]: Implementation of paper - [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX)
### Installation
```
pip install yoloxdetect
```
### Yolox Inference
```python
from yoloxdetect import YoloxDetector
from yolox.data.datasets import COCO_CLASSES
model = YoloxDetector(
model_path = "kadirnar/yolox_tiny-v0.1.1",
config_path = "configs.yolox_tiny",
device = "cuda:0",
hf_model=True
)
model.classes = COCO_CLASSES
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(image='data/images', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{yolox2021,
title={YOLOX: Exceeding YOLO Series in 2021},
author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
journal={arXiv preprint arXiv:2107.08430},
year={2021}
}
``` | ba885c8242a9d0107ddb4ff5d74b8545 |
Toshifumi/distilbert-base-multilingual-cased-finetuned-emotion | Toshifumi | distilbert | 12 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,378 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3702
- Accuracy: 0.8885
- F1: 0.8888
## Model description
More information needed
## Intended uses & limitations
More information needed
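A minimal usage sketch (not from the original card), assuming the fine-tuned head exposes the emotion label names through its config; the example sentence is illustrative.
```python
from transformers import pipeline

# Hedged sketch: label names come from the model's own config.
classifier = pipeline(
    "text-classification",
    model="Toshifumi/distilbert-base-multilingual-cased-finetuned-emotion",
)
print(classifier("I am thrilled with how this turned out!"))
```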
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1646 | 1.0 | 250 | 0.6190 | 0.8085 | 0.7992 |
| 0.4536 | 2.0 | 500 | 0.3702 | 0.8885 | 0.8888 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| 0a5674aae2c3defb84e429e550b95a25 |
anuragshas/wav2vec2-large-xls-r-300m-pa-in | anuragshas | wav2vec2 | 19 | 9 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pa'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard'] | true | true | true | 2,939 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Punjabi
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2548
- Wer: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 120
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.4804 | 16.65 | 400 | 1.8461 | 1.0 |
| 0.474 | 33.33 | 800 | 1.1018 | 0.6624 |
| 0.1389 | 49.98 | 1200 | 1.1918 | 0.6103 |
| 0.0919 | 66.65 | 1600 | 1.1889 | 0.6058 |
| 0.0657 | 83.33 | 2000 | 1.2266 | 0.5931 |
| 0.0479 | 99.98 | 2400 | 1.2512 | 0.5902 |
| 0.0355 | 116.65 | 2800 | 1.2548 | 0.5677 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
```bash
python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-pa-in --dataset mozilla-foundation/common_voice_7_0 --config pa-IN --split test
```
### Inference With LM
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "anuragshas/wav2vec2-large-xls-r-300m-pa-in"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_7_0", "pa-IN", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
# => "ਉਨ੍ਹਾਂ ਨੇ ਸਾਰੇ ਤੇਅਰਵੇ ਵੱਖਰੀ ਕਿਸਮ ਦੇ ਕੀਤੇ ਹਨ"
```
### Eval results on Common Voice 7 "test" (WER):
| Without LM | With LM (run `./eval.py`) |
|---|---|
| 51.968 | 45.611 |
| 16362b1b287711ffb5ae45959853a9a7 |
gopalkalpande/t5-small-finetuned-bbc-news-summarization | gopalkalpande | t5 | 18 | 1 | transformers | 0 | text2text-generation | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,548 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gopalkalpande/t5-small-finetuned-bbc-news-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7637
- Validation Loss: 0.3528
- Train Rouge1: 19.4783
- Train Rouge2: 13.2994
- Train Rougel: 17.4791
- Train Rougelsum: 17.6204
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
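A minimal TensorFlow sketch (not from the original card). It assumes the usual T5 `summarize:` prefix was used during fine-tuning and that the weights are available in TensorFlow format; the input string is a placeholder.
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Hedged sketch: replace the placeholder string with a real BBC news article.
model_id = "gopalkalpande/t5-small-finetuned-bbc-news-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: " + "Your BBC news article text goes here."
inputs = tokenizer(text, return_tensors="tf", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```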
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 4e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 0.7637 | 0.3528 | 19.4783 | 13.2994 | 17.4791 | 17.6204 | 19.0 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| c15be7ca14597c641bc03cf21c567ba9 |
ahsanjavid/convnext-tiny-finetuned-cifar10 | ahsanjavid | convnext | 5 | 28 | transformers | 1 | image-classification | true | false | false | apache-2.0 | null | ['cifar10'] | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | ['vision', 'image-classification'] | false | true | true | 525 | false |
# ConvNext-tiny-finetuned-cifar10 (tiny-sized model)
ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
ConvNeXT-tiny fine-tuned on the CIFAR-10 dataset, which has ten classes.
Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.
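A minimal classification sketch (added here, not by the original authors), assuming the checkpoint ships an image-processor config and the CIFAR-10 label mapping; the COCO photo URL is only a stand-in test image.
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

model_id = "ahsanjavid/convnext-tiny-finetuned-cifar10"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
```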
| fa329d7ce402ae006ef9c0cfc638e896 |
transformerCommunity/rojtburd | transformerCommunity | null | 11 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,222 | false | ### rojtburd on Stable Diffusion
This is the `<Rojtburd>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






| 3a70dfcd031ee4b96c7fb11242483204 |
Ivor22/distilbert-base-uncased-finetuned-sst2 | Ivor22 | distilbert | 25 | 115 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,742 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5776
- Precision: 0.9038
- Recall: 0.9099
- Accuracy: 0.9048
- F1: 0.9068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.0237 | 1.0 | 4210 | 0.6639 | 0.8685 | 0.9369 | 0.8956 | 0.9014 |
| 0.0247 | 2.0 | 8420 | 0.5776 | 0.9038 | 0.9099 | 0.9048 | 0.9068 |
| 0.0304 | 3.0 | 12630 | 0.6533 | 0.8839 | 0.9257 | 0.9002 | 0.9043 |
| 0.0281 | 4.0 | 16840 | 0.6654 | 0.8877 | 0.9257 | 0.9025 | 0.9063 |
| 0.0095 | 5.0 | 21050 | 0.7832 | 0.8710 | 0.9279 | 0.8933 | 0.8986 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| ebb9c76af2957697a98a9e3312b4a296 |
Jersonm89/Avatar | Jersonm89 | null | 23 | 473 | diffusers | 15 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 2 | 0 | 0 | 2 | 2 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 824 | false |
Model trained with images from James Cameron's Avatar movie. It draws Avatar-style characters with the facial features of the person indicated in the prompt.
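A minimal generation sketch (not part of the original card), assuming the repository holds standard `diffusers` weights and a CUDA GPU is available; the prompt reuses one of the sample prompts below.
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: half precision and CUDA are assumptions, adjust to your hardware.
pipe = StableDiffusionPipeline.from_pretrained("Jersonm89/Avatar", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "portrait Will Smith male, avatar style"
image = pipe(prompt).images[0]
image.save("avatar_portrait.png")
```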
### Sample images Will Smith: prompt= portrait Will Smith male, avatar style



### Sample images Johnny Depp: prompt= portrait Johnny Depp male, avatar style


 | 02e7d14882355576248b48337423066b |
emmyapi/distilbart-podimo-data-eval-2-2e | emmyapi | bart | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,590 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-podimo-data-eval-2-2e
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7374
- Rouge1: 32.9705
- Rouge2: 6.9494
- Rougel: 17.922
- Rougelsum: 29.4629
- Gen Len: 137.5363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 4.1649 | 0.98 | 44 | 3.8138 | 32.12 | 6.544 | 17.5999 | 28.8314 | 136.4553 |
| 3.6772 | 1.98 | 88 | 3.7374 | 32.9705 | 6.9494 | 17.922 | 29.4629 | 137.5363 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
| b269a0d5f34f0e744dc108246c310abe |
cnut1648/biolinkbert-mednli | cnut1648 | bert | 13 | 131 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,570 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biolinkbert-mednli
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on [MedNLI](https://physionet.org/content/mednli/1.0.0/).
It achieves the following results on the evaluation set:
```
{
"eval_accuracy": 0.8788530230522156,
"eval_loss": 0.7843484878540039,
"eval_runtime": 39.7009,
"eval_samples": 1395,
"eval_samples_per_second": 35.138,
"eval_steps_per_second": 1.108
}
```
The accuracy for the test set is
```
{
"eval_accuracy": 0.8607594966888428,
"eval_loss": 0.879707932472229,
"eval_runtime": 27.4404,
"eval_samples": 1395,
"eval_samples_per_second": 51.821,
"eval_steps_per_second": 1.64
}
```
The labels are
```
"id2label": {
"0": "entailment",
"1": "neutral",
"2": "contradiction"
},
```
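A minimal inference sketch (not from the original card), assuming the usual (premise, hypothesis) sentence-pair convention for MedNLI; the clinical sentences are invented.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cnut1648/biolinkbert-mednli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "The patient was started on intravenous antibiotics for pneumonia."
hypothesis = "The patient has an infection."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```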
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.22.2
- Pytorch 1.13.0+cu117
- Datasets 2.4.0
- Tokenizers 0.12.1
| acc477875c495cc2265e50d60a888e78 |
paola-md/recipe-reg | paola-md | roberta | 8 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,832 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-reg
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3717
- Rmse: 1.8362
- Mse: 3.3717
- Mae: 1.6145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|
| 3.3247 | 1.0 | 12809 | 3.3717 | 1.8362 | 3.3717 | 1.6145 |
| 3.3238 | 2.0 | 25618 | 3.3722 | 1.8363 | 3.3722 | 1.6145 |
| 3.3217 | 3.0 | 38427 | 3.3718 | 1.8362 | 3.3718 | 1.6145 |
| 3.3215 | 4.0 | 51236 | 3.3754 | 1.8372 | 3.3754 | 1.6145 |
| 3.3203 | 5.0 | 64045 | 3.3721 | 1.8363 | 3.3721 | 1.6145 |
| 3.3199 | 6.0 | 76854 | 3.3731 | 1.8366 | 3.3731 | 1.6145 |
| 3.319 | 7.0 | 89663 | 3.3731 | 1.8366 | 3.3731 | 1.6145 |
| 3.3188 | 8.0 | 102472 | 3.3717 | 1.8362 | 3.3717 | 1.6145 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
| be0c84904487aeab43a5f40c418b60c6 |
anas-awadalla/t5-base-few-shot-k-256-finetuned-squad-infilling-seed-0 | anas-awadalla | t5 | 17 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 965 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-256-finetuned-squad-infilling-seed-0
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
| a0f9437c97e00da0d73e5195f9e30ffe |
google/mobilenet_v1_0.75_192 | google | mobilenet_v1 | 5 | 45,426 | transformers | 0 | image-classification | true | false | false | other | null | ['imagenet-1k'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'image-classification'] | false | true | true | 2,363 | false |
# MobileNet V1
MobileNet V1 model pre-trained on ImageNet-1k at resolution 192x192. It was introduced in [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Howard et al, and first released in [this repository](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).
Disclaimer: The team releasing MobileNet V1 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
From the [original README](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md):
> MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings and segmentation similar to how other popular large scale models, such as Inception, are used. MobileNets can be run efficiently on mobile devices [...] MobileNets trade off between latency, size and accuracy while comparing favorably with popular models from the literature.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v1) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
preprocessor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_0.75_192")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v1_0.75_192")
inputs = preprocessor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Note: This model actually predicts 1001 classes, the 1000 classes from ImageNet plus an extra “background” class (index 0).
Currently, both the feature extractor and model support PyTorch.
| 447ab49cd16a8f176cc4b70c47e9e669 |
sayakpaul/glpn-nyu-finetuned-diode-221116-054332 | sayakpaul | glpn | 9 | 1 | transformers | 0 | depth-estimation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['vision', 'depth-estimation', 'generated_from_trainer'] | true | true | true | 1,786 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-nyu-finetuned-diode-221116-054332
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6028
- Rmse: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
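A minimal depth-estimation sketch (not from the original card). It assumes a recent `transformers` release that provides `GLPNImageProcessor` and that the fine-tuned checkpoint keeps the standard GLPN depth head; the COCO photo is only a stand-in input.
```python
import torch
import requests
from PIL import Image
from transformers import GLPNImageProcessor, GLPNForDepthEstimation

model_id = "sayakpaul/glpn-nyu-finetuned-diode-221116-054332"
processor = GLPNImageProcessor.from_pretrained(model_id)
model = GLPNForDepthEstimation.from_pretrained(model_id)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # shape: (batch, height, width)
print(predicted_depth.shape)
```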
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:----:|
| 1.6801 | 1.0 | 72 | 1.3466 | nan |
| 1.2071 | 2.0 | 144 | 0.8388 | nan |
| 0.7548 | 3.0 | 216 | 0.6771 | nan |
| 0.6401 | 4.0 | 288 | 0.6443 | nan |
| 0.629 | 5.0 | 360 | 0.6452 | nan |
| 0.5892 | 6.0 | 432 | 0.6202 | nan |
| 0.5773 | 7.0 | 504 | 0.6262 | nan |
| 0.5532 | 8.0 | 576 | 0.6151 | nan |
| 0.5268 | 9.0 | 648 | 0.5995 | nan |
| 0.5416 | 10.0 | 720 | 0.6028 | nan |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Tokenizers 0.13.2
| 19354e87764c494a5c434e42235cbb49 |
MadhuG/vit-base-patch16-224-in21k-lung_cancer | MadhuG | vit | 7 | 3 | transformers | 0 | image-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,709 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MadhuG/vit-base-patch16-224-in21k-lung_cancer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1061
- Train Accuracy: 0.1041
- Validation Loss: 1.1028
- Validation Accuracy: 0.1394
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 600, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.1061 | 0.1041 | 1.1028 | 0.1394 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.10.0
- Datasets 2.7.0
- Tokenizers 0.13.2
| c381da0bbc01b8671bf5416d85502412 |
Hemlok/DateMix | Hemlok | null | 10 | 0 | null | 36 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 2,925 | false |
# 『Date Mix & RDt Mix』


- "Date Mix" is a model created by hierarchical merging based on "anything-v4.5"([andite/anything-v4.0 · Hugging Face](https://huggingface.co/andite/anything-v4.0)).
----


- "RDtMix" is a merged realistic model based on "Date Mix".
----
# ◆Discord
[Join Discord Server](https://discord.gg/eN6aSWRddT)
- The merged model community of Hemlok.
----
# ◆About
- This model was created to improve composition and color tone.
- If you like realistic illustrations, please use "RDtModel".
- Sampler: DDIM or DPM++ SDE Karras
- Steps: 50~
- Clipskip: 2
- CFG Scale: 5-12
- Denoise strength: 0.5-0.7(As you like)
- Negative prompts should be as few as possible.
- vae: As you like. (Any will do; if not used, colors may become lighter.)
----
# ◆How to use
- Please download the file by yourself and use it with WebUI(AUTOMATIC1111) etc.
- Use the f16 version for Colab(T4) or a PC with low RAM.
----
# ◆Colab Note
[](https://colab.research.google.com/drive/1Bsm7p_Db5u3IiHr3YjTeKu4jOllfTyqZ?usp=sharing)
- (I have not checked the operation but it probably works.)
----
# ◆Comparison
<img src="https://i.imgur.com/fW2cPsY.jpg" width="1700" height="">
<img src="https://i.imgur.com/0k48okp.jpg" width="1700" height="">
```
kawaii, 1girl, looking at viewer, smile
```
----
<img src="https://i.imgur.com/d44iIvs.jpg" width="1700" height="">
```
(morning), (school), 1girl, solo, looking at viewer, cowboy shot, (school uniform), smile, black hair, stockings
```
----
# ◆Sampler & CFG Scale
<img src="https://i.imgur.com/vzUBzmw.jpg" width="1700" height="">
```
(morning), (school), 1girl, solo, looking at viewer, cowboy shot, (school uniform), smile, black hair, stockings
```
----
# Disclaimer
- The creation of SFW and NSFW images is at the discretion of the individual creator.
- This model is not a model created to publish NSFW content in public places, etc.
----
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
(Full text of the license: https://huggingface.co/spaces/CompVis/stable-diffusion-license) | 3de325aadbf228d77c047bef7acbc78d |
Helsinki-NLP/opus-mt-es-pon | Helsinki-NLP | marian | 10 | 11 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-es-pon
* source languages: es
* target languages: pon
* OPUS readme: [es-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-pon/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-pon/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.es.pon | 21.6 | 0.448 |
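A minimal translation sketch (not part of the original card), assuming the converted OPUS-MT weights work with the standard Marian classes in `transformers`; the Spanish input sentence is illustrative.
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Helsinki-NLP/opus-mt-es-pon"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["¿Cómo estás hoy?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```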
| 84c3590b4c75ebb3adc4502078528c33 |
Elytum/bert-finetuned-ner | Elytum | bert | 10 | 8 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,541 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [gaunernst/bert-small-uncased](https://huggingface.co/gaunernst/bert-small-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0186
- Precision: 0.9941
- Recall: 0.9952
- F1: 0.9946
- Accuracy: 0.9963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0277 | 1.0 | 2500 | 0.0190 | 0.9929 | 0.9939 | 0.9934 | 0.9956 |
| 0.0137 | 2.0 | 5000 | 0.0180 | 0.9935 | 0.9951 | 0.9943 | 0.9960 |
| 0.0095 | 3.0 | 7500 | 0.0186 | 0.9941 | 0.9952 | 0.9946 | 0.9963 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
| d79839588f534d6e4dfc0d1780141bb8 |
Allenbv/kevin-xd | Allenbv | null | 26 | 7 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 2,177 | false | ### Kevin-xd Dreambooth model trained by Allenbv with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
Kevekek
.png)
,_kintsugi,_modern_fine_art,_fractal,_intricate,_elegant,_highly_detailed,_digital_photogr.png)
.png)
,_digital_painting,_artstation,_concept_art,_donato_giancola,_Joseph_Christian_Leyendecker,_WLOP,_Bor.png)
_as_a_mermaid_,_realistic_shaded,_fine_details._Night_setting._Very_anime_style._Realistic_shaded_lighting.png)
_as_a_mermaid_,_realistic_shaded,_fine_details._Night_setting._Very_anime_style._Realistic_shaded_lighting.png)
_with_colorful_band_aids,_rave_concert_poster,_retro,_conrad_roset,_greg_rutkows.png)
_wearing_cat_ears_with_big_sack,_anime_key_visual,_intricate,_stunning,_highly_detailed,_digital_painting,_art.png)
| 9b63d6de907cbee57d0c035444adfacb |
BlackKakapo/t5-small-grammar-ro-v2 | BlackKakapo | t5 | 8 | 34 | transformers | 1 | text2text-generation | true | false | false | ['apache-2.0'] | ['ro'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,264 | false | # Romanian grammar

Fine-tuned [t5-small-grammar-ro](https://huggingface.co/datasets/BlackKakapo/grammar-ro-v2) model for grammar correction. Since there was no Romanian dataset for grammar correction, I had to create my own [dataset v2](https://huggingface.co/datasets/BlackKakapo/grammar-ro-v2). The dataset contains ~1600k examples.
### How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("BlackKakapo/t5-small-grammar-ro-v2")
model = AutoModelForSeq2SeqLM.from_pretrained("BlackKakapo/t5-small-grammar-ro-v2")
```
### Or
```cmd
pip install happytransformer
```
```python
from happytransformer import TTSettings, HappyTextToText
happy_tt_save = HappyTextToText('T5',r"BlackKakapo/t5-small-grammar-ro-v2")
beam_settings = TTSettings(num_beams=10, min_length=10, max_length=100)
```
### Generate
```python
sent = "Interesant cym lucreaza modelu dat, carr a fost antrenat cu un dataset foarre mare."
prefix = "grammar: "
example = prefix + sent
result = happy_tt_save.generate_text(example, args=beam_settings)
print(result.text)
```
### Output
```out
Interesant cum lucrează modelul dat, care a fost antrenat cu un dataset foarte mare.
``` | ab95b692502041e854c6429f6b20fa2c |
fpianz/roberta-english-book-reviews-sentiment | fpianz | roberta | 4 | 7 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 713 | false |
Three-class sentiment analysis (positive, negative, neutral)
Based on https://huggingface.co/j-hartmann/sentiment-roberta-large-english-3-classes
Fine-tuned using:
- annotated sentences from book reviews in English https://www.gti.uvigo.es/index.php/en/book-reviews-annotated-dataset-for-aspect-based-sentiment-analysis
- annotated paragraphs from amateur writers' stories https://arxiv.org/abs/1910.11769
Performance for book reviews:

Performance for stories:

| e4d326bca185376a18c05d50f64e5b2f |
nlpconnect/dpr-question_encoder_bert_uncased_L-2_H-128_A-2 | nlpconnect | bert | 9 | 148 | transformers | 0 | feature-extraction | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback', 'dpr'] | true | true | true | 4,355 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpr-question_encoder_bert_uncased_L-2_H-128_A-2
This model (google/bert_uncased_L-2_H-128_A-2) was trained from scratch on the training data data.retriever.nq-adv-hn-train (facebookresearch/DPR).
It achieves the results shown below on the evaluation sets.
## Evaluation data
Evaluation dataset: facebook-dpr-dev-dataset from the official DPR GitHub.
|model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
|---|---|---|---|---|---|---|---|---|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-dev dataset|6445|199795|60.53%|68.28%|76.07%|80.98%|91.45%|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-dev dataset|6445|199795|65.43%|71.99%|79.03%|83.24%|92.11%|
|*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-dev dataset|6445|199795|40.94%|49.27%|59.05%|66.00%|82.00%|
Evaluation dataset: UKPLab/beir test data, but we used only the first 200k passages.
|model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
|---|---|---|---|---|---|---|---|---|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-test dataset|3452|200001|49.68%|59.06%|69.40%|75.75%|89.28%|
|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-test dataset|3452|200001|51.62%|61.09%|70.10%|76.07%|88.70%|
|*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-test dataset|3452|200001|32.93%|43.74%|56.95%|66.30%|83.92%|
Note: * means we evaluated on the same eval dataset.
### Usage (HuggingFace Transformers)
```python
import numpy as np
from transformers import AutoTokenizer, TFAutoModel

passage_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2")
query_encoder = TFAutoModel.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2")
p_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2")
q_tokenizer = AutoTokenizer.from_pretrained("nlpconnect/dpr-question_encoder_bert_uncased_L-12_H-128_A-2")

# `passage_dicts` (list of {"title": ..., "text": ...}), `queries` (list of strings) and
# `model_config` (with `passage_max_seq_len` / `query_max_seq_len`) are assumed to be provided by the user.

def get_title_text_combined(passage_dicts):
    res = []
    for p in passage_dicts:
        res.append(tuple((p['title'], p['text'])))
    return res

processed_passages = get_title_text_combined(passage_dicts)

def extracted_passage_embeddings(processed_passages, model_config):
    # Tokenize passages with the passage (context) tokenizer.
    passage_inputs = p_tokenizer.batch_encode_plus(
        processed_passages,
        add_special_tokens=True,
        truncation=True,
        padding="max_length",
        max_length=model_config.passage_max_seq_len,
        return_token_type_ids=True
    )
    passage_embeddings = passage_encoder.predict(
        [np.array(passage_inputs['input_ids']),
         np.array(passage_inputs['attention_mask']),
         np.array(passage_inputs['token_type_ids'])],
        batch_size=512,
        verbose=1)
    return passage_embeddings

passage_embeddings = extracted_passage_embeddings(processed_passages, model_config)

def extracted_query_embeddings(queries, model_config):
    # Tokenize queries with the question tokenizer.
    query_inputs = q_tokenizer.batch_encode_plus(
        queries,
        add_special_tokens=True,
        truncation=True,
        padding="max_length",
        max_length=model_config.query_max_seq_len,
        return_token_type_ids=True
    )
    query_embeddings = query_encoder.predict(
        [np.array(query_inputs['input_ids']),
         np.array(query_inputs['attention_mask']),
         np.array(query_inputs['token_type_ids'])],
        batch_size=512,
        verbose=1)
    return query_embeddings

query_embeddings = extracted_query_embeddings(queries, model_config)
```
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Tokenizers 0.10.3 | 3e1aa0bbc0da61433fd24bddccf3a58f |
Danessely/distilroberta-base-finetuned-dna | Danessely | roberta | 6 | 4 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,266 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-dna
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1473
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1615 | 1.0 | 8014 | 1.1578 |
| 1.1559 | 2.0 | 16028 | 1.1561 |
| 1.1503 | 3.0 | 24042 | 1.1475 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| e18a10f53a13288c4f07178a46cab521 |
Vandita/Bert-finetuned-Sarcoji_1 | Vandita | bert | 22 | 13 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,328 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert-finetuned-Sarcoji_1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3051
- Accuracy: 0.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3476 | 1.0 | 735 | 0.2745 | 0.8764 |
| 0.2515 | 2.0 | 1470 | 0.2667 | 0.8896 |
| 0.1341 | 3.0 | 2205 | 0.3051 | 0.8933 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| e33f10a21c9a72feef455484f439f58c |
pere/whisper-NST-TPU-test2 | pere | whisper | 28 | 3 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,292 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-NST-TPU-test2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9943
- Wer: 100.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| 5.535 | 1.0 | 2 | 4.9943 | 100.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
| 85a5076ce3c50eb97d5b9000489ea6ac |
lmqg/bart-large-squad-qag | lmqg | bart | 14 | 38 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['en'] | ['lmqg/qag_squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['questions and answers generation'] | true | true | true | 3,838 | false |
# Model Card of `lmqg/bart-large-squad-qag`
This model is fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
- **Language:** en
- **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/bart-large-squad-qag")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-large-squad-qag")
output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 92.16 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedF1Score (MoverScore) | 63.79 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (BERTScore) | 93.21 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (MoverScore) | 66.71 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (BERTScore) | 91.17 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (MoverScore) | 61.32 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_squad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: facebook/bart-large
- max_length: 512
- max_length_output: 256
- epoch: 14
- batch: 8
- lr: 1e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-squad-qag/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| 513d72364a66f5273131469d83e477e3 |
gokuls/distilbert_sa_GLUE_Experiment_data_aug_rte_192 | gokuls | distilbert | 17 | 0 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,625 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_rte_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8449
- Accuracy: 0.5199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2938 | 1.0 | 568 | 1.8449 | 0.5199 |
| 0.0556 | 2.0 | 1136 | 2.3009 | 0.4946 |
| 0.0295 | 3.0 | 1704 | 2.6341 | 0.5018 |
| 0.0183 | 4.0 | 2272 | 3.0449 | 0.5235 |
| 0.0119 | 5.0 | 2840 | 3.1349 | 0.5162 |
| 0.0091 | 6.0 | 3408 | 3.4876 | 0.5343 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| ffdb84304b172de04c14b522a88d63af |
Atallahw/mycatPishi | Atallahw | null | 17 | 4 | diffusers | 0 | text-to-image | true | false | false | creativeml-openrail-m | null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal'] | false | true | true | 747 | false |
# DreamBooth model for the pishicat concept trained by Atallahw on the Atallahw/pishi2 dataset.
This is a Stable Diffusion model fine-tuned on the pishicat concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of pishicat cat**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `cat` images for the animal theme. It makes images of my cat.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Atallahw/mycatPishi')
image = pipeline().images[0]
image
```
| c957a3f8885ad22e8822decec72f62d9 |
paarthmadan/Sentiment-Analysis-API | paarthmadan | null | 14 | 0 | transformers | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 490 | false | ## Sentiment Analysis API
This is the deployment part of the project.
### Training:
Run the Google Colab notebook (Runtime = "GPU") (https://colab.research.google.com/drive/1EuF5FDl1X8VnuOO5RxzmM0c9TbtQrVm9?usp=sharing)
### Fine Tuning
1) Increasing the number of epochs
2) Increasing BATCH_SIZE to 32
3) Changing the Adam optimizer learning rate
### Usage
1) Clone the repository
2) Set up the conda environment with requirements.txt
3) In the terminal, run the command: uvicorn sentiment_analyzer.api:app --reload
geevegeorge/ddpm-hkuoct-dr-256-200ep | geevegeorge | null | 28 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['imagefolder'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,215 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-hkuoct-dr-256-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (added, not from the original authors), assuming the repository
# contains a standard unconditional DDPMPipeline checkpoint produced by the training script.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("geevegeorge/ddpm-hkuoct-dr-256-200ep")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-dr-256-200ep/tensorboard?#scalars)
| 27917508ba587335a481a204a3194a23 |
furusu/fate-diffusion | furusu | null | 19 | 101 | diffusers | 6 | text-to-image | false | false | false | openrail++ | null | null | null | 7 | 1 | 5 | 1 | 0 | 0 | 0 | ['stable-diffusion', 'text-to-image'] | false | true | true | 2,386 | false |
# fate-diffusion
This model was resumed from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) and trained for 10 epochs on 53k images from the Fate series filtered by [aesthetic score](https://github.com/waifu-diffusion/aesthetic) >= 0.9, and then for another 10 epochs on the dataset extended with 8k face close-up images.
Captions are Danbooru-style tags inferred by the [wd1-4 tagger](https://github.com/toriato/stable-diffusion-webui-wd14-tagger).
# Usage
Usage is almost the same as the original model, except model_id = "furusu/fate-diffusion".
```bash
pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy
```
```python
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
model_id = "furusu/fate-diffusion"
# Use the Euler scheduler here instead
scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "artoria pendragon,fate,standing,armor,gauntlets,hair ribbon,green eyes,puffy sleeves,skirt"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
image = pipe(prompt, height=768, width=768,negative_prompt=negative_prompt).images[0]
image.save("image.png")
```
You can also use it on webui by putting [fate-diffusion.ckpt](fate-diffusion.ckpt) and [fate-diffusion.yaml](fate-diffusion.yaml) into models/Stable-Diffusion. See [this link](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20).
# Generation examples
with negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry
artoria pendragon,fate,solo,standing,armor,gauntlets,hair ribbon,green eyes,puffy sleeves,skirt,blush

nero claudius,bride,fate,sitting,bridal veil,((blush)),zipper,cleavage,white gloves,leotard,thighhighs,flower,belt,green eyes

1girl,highres,solo,looking at viewer,blonde hair,twintails,maid,maid headdress,skirt,thighhighs,red eyes,blush

| 2859be6bb024a7e4637a01fa29529c43 |
jonas/roberta-base-finetuned-sdg | jonas | roberta | 11 | 66 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,637 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-sdg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4993
- Acc: 0.9024
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4679 | 1.0 | 254 | 0.3660 | 0.8963 |
| 0.3578 | 2.0 | 508 | 0.3689 | 0.9019 |
| 0.2739 | 3.0 | 762 | 0.3284 | 0.9035 |
| 0.1841 | 4.0 | 1016 | 0.3763 | 0.9019 |
| 0.1127 | 5.0 | 1270 | 0.4174 | 0.9024 |
| 0.0822 | 6.0 | 1524 | 0.4523 | 0.9013 |
| 0.0329 | 7.0 | 1778 | 0.4829 | 0.9030 |
| 0.0157 | 8.0 | 2032 | 0.4993 | 0.9024 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0a0+8a1a93a
- Datasets 2.4.0
- Tokenizers 0.12.1
| fb6b4647abff47d297878cfae257acae |
jonatasgrosman/exp_w2v2t_th_unispeech-sat_s515 | jonatasgrosman | unispeech-sat | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['th'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'th'] | false | true | true | 463 | false | # exp_w2v2t_th_unispeech-sat_s515
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
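
A usage sketch with the HuggingSound library mentioned above (the audio paths are placeholders; input should be sampled at 16kHz):

```python
from huggingsound import SpeechRecognitionModel

# load the fine-tuned checkpoint
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_th_unispeech-sat_s515")

audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholders
transcriptions = model.transcribe(audio_paths)
```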
| b7ac06ee7e03aace16bc66333a2a0fbc |
anas-awadalla/roberta-base-few-shot-k-16-finetuned-squad-seed-8 | anas-awadalla | roberta | 17 | 5 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 985 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| 4521666aed61de928f4c189dd9ce0009 |
GyuBeen/distilbert-base-uncased-finetuned-squad | GyuBeen | distilbert | 14 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,278 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2119 | 1.0 | 5533 | 1.1532 |
| 0.9427 | 2.0 | 11066 | 1.1100 |
| 0.7477 | 3.0 | 16599 | 1.1543 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| fe65f9fc40b7ad23a6c692ab4d1a4725 |
nandysoham/Warsaw_Pact-clustered | nandysoham | distilbert | 8 | 0 | transformers | 0 | question-answering | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,861 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/Warsaw_Pact-clustered
This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0792
- Train End Logits Accuracy: 0.9722
- Train Start Logits Accuracy: 0.9861
- Validation Loss: 1.0233
- Validation End Logits Accuracy: 1.0
- Validation Start Logits Accuracy: 0.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 0.0792 | 0.9722 | 0.9861 | 1.0233 | 1.0 | 0.0 | 0 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| c7e0d1e5c77063d44498645a4d5fdd71 |
takizawa/distilbert-base-uncased-finetuned-emotion | takizawa | distilbert | 12 | 4 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2251
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8481 | 1.0 | 250 | 0.3248 | 0.907 | 0.9028 |
| 0.2595 | 2.0 | 500 | 0.2251 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| f47518141123d78b9a0431bda88170ed |
jonatasgrosman/exp_w2v2t_th_vp-it_s334 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['th'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'th'] | false | true | true | 469 | false | # exp_w2v2t_th_vp-it_s334
Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (th)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
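
A usage sketch with the 🤗 Transformers ASR pipeline (the audio path is a placeholder; input must be sampled at 16kHz):

```python
from transformers import pipeline

# load the fine-tuned checkpoint as a speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_th_vp-it_s334")
print(asr("/path/to/16khz_audio.wav"))  # placeholder path
```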
| 4ac71945090ba1f5b01e86c6176ce628 |
rti-international/rota | rti-international | roberta | 19 | 10 | transformers | 2 | text-classification | true | false | true | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 12,988 | false |
# ROTA
## Rapid Offense Text Autocoder
[](https://huggingface.co/rti-international/rota)
[](https://huggingface.co/spaces/rti-international/rota-app)
[](https://github.com/RTIInternational/rota)
[](https://doi.org/10.5281/zenodo.4770492)
ROTA Application hosted on Hugging Face Spaces: https://huggingface.co/spaces/rti-international/rota-app
Criminal justice research often requires conversion of free-text offense descriptions into overall charge categories to aid analysis. For example, the free-text offense of "eluding a police vehicle" would be coded to a charge category of "Obstruction - Law Enforcement". Since free-text offense descriptions aren't standardized and often need to be categorized in large volumes, this can result in a manual and time intensive process for researchers. ROTA is a machine learning model for converting offense text into offense codes.
Currently ROTA predicts the *Charge Category* of a given offense text. A *charge category* is one of the headings for offense codes in the [2009 NCRP Codebook: Appendix F](https://www.icpsr.umich.edu/web/NACJD/studies/30799/datadocumentation#).
The model was trained on [publicly available data](https://web.archive.org/web/20201021001250/https://www.icpsr.umich.edu/web/pages/NACJD/guides/ncrp.html) from a crosswalk containing offenses from all 50 states combined with three additional hand-labeled offense text datasets.
<details>
<summary>Charge Category Example</summary>
<img src="https://i.ibb.co/xLsrzmV/charge-category-example.png" width="500">
</details>
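
A minimal sketch of querying the model with the 🤗 Transformers `text-classification` pipeline (this assumes the checkpoint's label names map directly to NCRP charge categories; verify against the repo's config):

```python
from transformers import pipeline

# load the fine-tuned RoBERTa charge-category classifier
classifier = pipeline("text-classification", model="rti-international/rota")

# offense text should be standardized as described in the preprocessing section below
print(classifier("eluding a police vehicle"))
# illustrative output shape: [{'label': '<charge category>', 'score': 0.99}]
```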
### Data Preprocessing
The input text is standardized through a series of preprocessing steps. The text is first passed through a sequence of 500+ case-insensitive regular expressions that identify common misspellings and abbreviations and expand the text into fuller, correct English. Some data-specific prefixes and suffixes are then removed from the text -- e.g. some states included a statute as part of the text. Finally, punctuation (excluding dollar signs) is removed from the input, multiple spaces between words are collapsed, and the text is lowercased.
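
A rough illustration of these steps (the expansion rules below are hypothetical placeholders, not the project's actual 500+ expressions):

```python
import re

# hypothetical stand-ins for the project's misspelling/abbreviation expansions
EXPANSIONS = {
    r"\bposs\b": "possession",
    r"\bveh\b": "vehicle",
}

def standardize(text: str) -> str:
    # expand common abbreviations (case-insensitive)
    for pattern, replacement in EXPANSIONS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    # drop punctuation except dollar signs
    text = re.sub(r"[^\w\s$]", " ", text)
    # collapse repeated whitespace and lowercase
    return re.sub(r"\s+", " ", text).strip().lower()

print(standardize("POSS. of stolen veh > $500"))  # -> "possession of stolen vehicle $500"
```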
## Cross-Validation Performance
This model was evaluated using 3-fold cross validation. Except where noted, numbers presented below are the mean value across the 3 folds.
The model in this repository is trained on all available data. Because of this, you can typically expect production performance to be (unknowably) better than the numbers presented below.
### Overall Metrics
| Metric | Value |
| -------- | ----- |
| Accuracy | 0.934 |
| MCC | 0.931 |
| Metric | precision | recall | f1-score |
| --------- | --------- | ------ | -------- |
| macro avg | 0.811 | 0.786 | 0.794 |
*Note*: These are the average of the values *per fold*, so *macro avg* is the average of the macro average of all categories per fold.
### Per-Category Metrics
| Category | precision | recall | f1-score | support |
| ------------------------------------------------------ | --------- | ------ | -------- | ------- |
| AGGRAVATED ASSAULT | 0.954 | 0.954 | 0.954 | 4085 |
| ARMED ROBBERY | 0.961 | 0.955 | 0.958 | 1021 |
| ARSON | 0.946 | 0.954 | 0.95 | 344 |
| ASSAULTING PUBLIC OFFICER | 0.914 | 0.905 | 0.909 | 588 |
| AUTO THEFT | 0.962 | 0.962 | 0.962 | 1660 |
| BLACKMAIL/EXTORTION/INTIMIDATION | 0.872 | 0.871 | 0.872 | 627 |
| BRIBERY AND CONFLICT OF INTEREST | 0.784 | 0.796 | 0.79 | 216 |
| BURGLARY | 0.979 | 0.981 | 0.98 | 2214 |
| CHILD ABUSE | 0.805 | 0.78 | 0.792 | 139 |
| COCAINE OR CRACK VIOLATION OFFENSE UNSPECIFIED | 0.827 | 0.815 | 0.821 | 47 |
| COMMERCIALIZED VICE | 0.818 | 0.788 | 0.802 | 666 |
| CONTEMPT OF COURT | 0.982 | 0.987 | 0.984 | 2952 |
| CONTRIBUTING TO DELINQUENCY OF A MINOR | 0.544 | 0.333 | 0.392 | 50 |
| CONTROLLED SUBSTANCE - OFFENSE UNSPECIFIED | 0.864 | 0.791 | 0.826 | 280 |
| COUNTERFEITING (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| DESTRUCTION OF PROPERTY | 0.97 | 0.968 | 0.969 | 2560 |
| DRIVING UNDER INFLUENCE - DRUGS | 0.567 | 0.603 | 0.581 | 34 |
| DRIVING UNDER THE INFLUENCE | 0.951 | 0.946 | 0.949 | 2195 |
| DRIVING WHILE INTOXICATED | 0.986 | 0.981 | 0.984 | 2391 |
| DRUG OFFENSES - VIOLATION/DRUG UNSPECIFIED | 0.903 | 0.911 | 0.907 | 3100 |
| DRUNKENNESS/VAGRANCY/DISORDERLY CONDUCT | 0.856 | 0.861 | 0.858 | 380 |
| EMBEZZLEMENT | 0.865 | 0.759 | 0.809 | 100 |
| EMBEZZLEMENT (FEDERAL ONLY) | 0 | 0 | 0 | 1 |
| ESCAPE FROM CUSTODY | 0.988 | 0.991 | 0.989 | 4035 |
| FAMILY RELATED OFFENSES | 0.739 | 0.773 | 0.755 | 442 |
| FELONY - UNSPECIFIED | 0.692 | 0.735 | 0.712 | 122 |
| FLIGHT TO AVOID PROSECUTION | 0.46 | 0.407 | 0.425 | 38 |
| FORCIBLE SODOMY | 0.82 | 0.8 | 0.809 | 76 |
| FORGERY (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| FORGERY/FRAUD | 0.911 | 0.928 | 0.919 | 4687 |
| FRAUD (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| GRAND LARCENY - THEFT OVER $200 | 0.957 | 0.973 | 0.965 | 2412 |
| HABITUAL OFFENDER | 0.742 | 0.627 | 0.679 | 53 |
| HEROIN VIOLATION - OFFENSE UNSPECIFIED | 0.879 | 0.811 | 0.843 | 24 |
| HIT AND RUN DRIVING | 0.922 | 0.94 | 0.931 | 303 |
| HIT/RUN DRIVING - PROPERTY DAMAGE | 0.929 | 0.918 | 0.923 | 362 |
| IMMIGRATION VIOLATIONS | 0.84 | 0.609 | 0.697 | 19 |
| INVASION OF PRIVACY | 0.927 | 0.923 | 0.925 | 1235 |
| JUVENILE OFFENSES | 0.928 | 0.866 | 0.895 | 144 |
| KIDNAPPING | 0.937 | 0.93 | 0.933 | 553 |
| LARCENY/THEFT - VALUE UNKNOWN | 0.955 | 0.945 | 0.95 | 3175 |
| LEWD ACT WITH CHILDREN | 0.775 | 0.85 | 0.811 | 596 |
| LIQUOR LAW VIOLATIONS | 0.741 | 0.768 | 0.755 | 214 |
| MANSLAUGHTER - NON-VEHICULAR | 0.626 | 0.802 | 0.701 | 139 |
| MANSLAUGHTER - VEHICULAR | 0.79 | 0.853 | 0.819 | 117 |
| MARIJUANA/HASHISH VIOLATION - OFFENSE UNSPECIFIED | 0.741 | 0.662 | 0.699 | 62 |
| MISDEMEANOR UNSPECIFIED | 0.63 | 0.243 | 0.347 | 57 |
| MORALS/DECENCY - OFFENSE | 0.774 | 0.764 | 0.769 | 412 |
| MURDER | 0.965 | 0.915 | 0.939 | 621 |
| OBSTRUCTION - LAW ENFORCEMENT | 0.939 | 0.947 | 0.943 | 4220 |
| OFFENSES AGAINST COURTS, LEGISLATURES, AND COMMISSIONS | 0.881 | 0.895 | 0.888 | 1965 |
| PAROLE VIOLATION | 0.97 | 0.953 | 0.962 | 946 |
| PETTY LARCENY - THEFT UNDER $200 | 0.965 | 0.761 | 0.85 | 139 |
| POSSESSION/USE - COCAINE OR CRACK | 0.893 | 0.928 | 0.908 | 68 |
| POSSESSION/USE - DRUG UNSPECIFIED | 0.624 | 0.535 | 0.572 | 189 |
| POSSESSION/USE - HEROIN | 0.884 | 0.852 | 0.866 | 25 |
| POSSESSION/USE - MARIJUANA/HASHISH | 0.977 | 0.97 | 0.973 | 556 |
| POSSESSION/USE - OTHER CONTROLLED SUBSTANCES | 0.975 | 0.965 | 0.97 | 3271 |
| PROBATION VIOLATION | 0.963 | 0.953 | 0.958 | 1158 |
| PROPERTY OFFENSES - OTHER | 0.901 | 0.87 | 0.885 | 446 |
| PUBLIC ORDER OFFENSES - OTHER | 0.7 | 0.721 | 0.71 | 1871 |
| RACKETEERING/EXTORTION (FEDERAL ONLY) | 0 | 0 | 0 | 2 |
| RAPE - FORCE | 0.842 | 0.873 | 0.857 | 641 |
| RAPE - STATUTORY - NO FORCE | 0.707 | 0.55 | 0.611 | 140 |
| REGULATORY OFFENSES (FEDERAL ONLY) | 0.847 | 0.567 | 0.674 | 70 |
| RIOTING | 0.784 | 0.605 | 0.68 | 119 |
| SEXUAL ASSAULT - OTHER | 0.836 | 0.836 | 0.836 | 971 |
| SIMPLE ASSAULT | 0.976 | 0.967 | 0.972 | 4577 |
| STOLEN PROPERTY - RECEIVING | 0.959 | 0.957 | 0.958 | 1193 |
| STOLEN PROPERTY - TRAFFICKING | 0.902 | 0.888 | 0.895 | 491 |
| TAX LAW (FEDERAL ONLY) | 0.373 | 0.233 | 0.286 | 30 |
| TRAFFIC OFFENSES - MINOR | 0.974 | 0.977 | 0.976 | 8699 |
| TRAFFICKING - COCAINE OR CRACK | 0.896 | 0.951 | 0.922 | 185 |
| TRAFFICKING - DRUG UNSPECIFIED | 0.709 | 0.795 | 0.749 | 516 |
| TRAFFICKING - HEROIN | 0.871 | 0.92 | 0.894 | 54 |
| TRAFFICKING - OTHER CONTROLLED SUBSTANCES | 0.963 | 0.954 | 0.959 | 2832 |
| TRAFFICKING MARIJUANA/HASHISH | 0.921 | 0.943 | 0.932 | 255 |
| TRESPASSING | 0.974 | 0.98 | 0.977 | 1916 |
| UNARMED ROBBERY | 0.941 | 0.939 | 0.94 | 377 |
| UNAUTHORIZED USE OF VEHICLE | 0.94 | 0.908 | 0.924 | 304 |
| UNSPECIFIED HOMICIDE | 0.61 | 0.554 | 0.577 | 60 |
| VIOLENT OFFENSES - OTHER | 0.827 | 0.817 | 0.822 | 606 |
| VOLUNTARY/NONNEGLIGENT MANSLAUGHTER | 0.619 | 0.513 | 0.542 | 54 |
| WEAPON OFFENSE | 0.943 | 0.949 | 0.946 | 2466 |
*Note: `support` is the average number of observations predicted on per fold, so the total number of observations per class is roughly 3x `support`.*
### Using Confidence Scores
If we interpret the classification probability as a confidence score, we can use it to filter out predictions that the model isn't as confident about. We applied this process in 3-fold cross validation. The numbers presented below indicate how much of the prediction data is retained given a confidence score cutoff of `p`. We present the overall accuracy and MCC metrics as if the model was only evaluated on this subset of confident predictions.
| | cutoff | percent retained | mcc | acc |
| --- | ------ | ---------------- | ----- | ----- |
| 0 | 0.85 | 0.952 | 0.96 | 0.961 |
| 1 | 0.9 | 0.943 | 0.964 | 0.965 |
| 2 | 0.95 | 0.928 | 0.97 | 0.971 |
| 3 | 0.975 | 0.912 | 0.975 | 0.976 |
| 4 | 0.99 | 0.886 | 0.982 | 0.983 |
| 5 | 0.999 | 0.733 | 0.995 | 0.996 |
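
A sketch of applying such a cutoff at inference time (a hypothetical helper, not part of the released code; it assumes pipeline-style `{"label": ..., "score": ...}` outputs):

```python
def filter_confident(predictions, cutoff=0.95):
    """Keep only predictions whose top-class probability meets the cutoff.

    Returns the retained predictions and the fraction of inputs retained.
    """
    kept = [p for p in predictions if p["score"] >= cutoff]
    retained = len(kept) / len(predictions) if predictions else 0.0
    return kept, retained
```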
| d5f8acff019043ff25eb3b03395213b0 |
Yanis/biomedical-ner-all-finetuned-ner | Yanis | distilbert | 18 | 19 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 10,463 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomedical-ner-all-finetuned-ner
This model is a fine-tuned version of [d4data/biomedical-ner-all](https://huggingface.co/d4data/biomedical-ner-all) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0020
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 4 | 0.1905 | 0.7660 | 0.8480 | 0.8049 | 0.9447 |
| No log | 2.0 | 8 | 0.1689 | 0.7619 | 0.8565 | 0.8065 | 0.9482 |
| No log | 3.0 | 12 | 0.1535 | 0.7950 | 0.8801 | 0.8354 | 0.9552 |
| No log | 4.0 | 16 | 0.1337 | 0.816 | 0.8737 | 0.8438 | 0.9598 |
| No log | 5.0 | 20 | 0.1206 | 0.8191 | 0.9015 | 0.8583 | 0.9636 |
| No log | 6.0 | 24 | 0.1047 | 0.8285 | 0.9101 | 0.8673 | 0.9689 |
| No log | 7.0 | 28 | 0.0943 | 0.8509 | 0.9165 | 0.8825 | 0.9703 |
| No log | 8.0 | 32 | 0.0848 | 0.8495 | 0.9186 | 0.8827 | 0.9734 |
| No log | 9.0 | 36 | 0.0756 | 0.8732 | 0.9293 | 0.9004 | 0.9780 |
| No log | 10.0 | 40 | 0.0671 | 0.8884 | 0.9379 | 0.9125 | 0.9825 |
| No log | 11.0 | 44 | 0.0625 | 0.8912 | 0.9293 | 0.9099 | 0.9836 |
| No log | 12.0 | 48 | 0.0548 | 0.9048 | 0.9358 | 0.92 | 0.9857 |
| No log | 13.0 | 52 | 0.0492 | 0.9129 | 0.9422 | 0.9273 | 0.9881 |
| No log | 14.0 | 56 | 0.0466 | 0.9187 | 0.9443 | 0.9314 | 0.9885 |
| No log | 15.0 | 60 | 0.0400 | 0.9308 | 0.9507 | 0.9407 | 0.9902 |
| No log | 16.0 | 64 | 0.0369 | 0.9414 | 0.9636 | 0.9524 | 0.9920 |
| No log | 17.0 | 68 | 0.0342 | 0.9356 | 0.9636 | 0.9494 | 0.9920 |
| No log | 18.0 | 72 | 0.0300 | 0.9514 | 0.9636 | 0.9574 | 0.9937 |
| No log | 19.0 | 76 | 0.0284 | 0.9558 | 0.9722 | 0.9639 | 0.9941 |
| No log | 20.0 | 80 | 0.0270 | 0.9580 | 0.9764 | 0.9671 | 0.9944 |
| No log | 21.0 | 84 | 0.0234 | 0.9621 | 0.9786 | 0.9703 | 0.9948 |
| No log | 22.0 | 88 | 0.0215 | 0.9662 | 0.9807 | 0.9734 | 0.9955 |
| No log | 23.0 | 92 | 0.0199 | 0.9662 | 0.9807 | 0.9734 | 0.9955 |
| No log | 24.0 | 96 | 0.0179 | 0.9725 | 0.9829 | 0.9776 | 0.9955 |
| No log | 25.0 | 100 | 0.0164 | 0.9851 | 0.9893 | 0.9872 | 0.9965 |
| No log | 26.0 | 104 | 0.0158 | 0.9851 | 0.9914 | 0.9883 | 0.9969 |
| No log | 27.0 | 108 | 0.0145 | 0.9809 | 0.9914 | 0.9862 | 0.9979 |
| No log | 28.0 | 112 | 0.0130 | 0.9809 | 0.9914 | 0.9862 | 0.9983 |
| No log | 29.0 | 116 | 0.0122 | 0.9809 | 0.9914 | 0.9862 | 0.9983 |
| No log | 30.0 | 120 | 0.0114 | 0.9872 | 0.9936 | 0.9904 | 0.9986 |
| No log | 31.0 | 124 | 0.0104 | 0.9872 | 0.9936 | 0.9904 | 0.9990 |
| No log | 32.0 | 128 | 0.0098 | 0.9830 | 0.9914 | 0.9872 | 0.9986 |
| No log | 33.0 | 132 | 0.0094 | 0.9830 | 0.9914 | 0.9872 | 0.9986 |
| No log | 34.0 | 136 | 0.0091 | 0.9915 | 0.9957 | 0.9936 | 0.9993 |
| No log | 35.0 | 140 | 0.0085 | 0.9915 | 0.9957 | 0.9936 | 0.9990 |
| No log | 36.0 | 144 | 0.0080 | 0.9915 | 0.9957 | 0.9936 | 0.9990 |
| No log | 37.0 | 148 | 0.0073 | 0.9915 | 0.9957 | 0.9936 | 0.9993 |
| No log | 38.0 | 152 | 0.0069 | 0.9915 | 0.9957 | 0.9936 | 0.9993 |
| No log | 39.0 | 156 | 0.0067 | 0.9915 | 0.9957 | 0.9936 | 0.9993 |
| No log | 40.0 | 160 | 0.0063 | 0.9915 | 0.9957 | 0.9936 | 0.9993 |
| No log | 41.0 | 164 | 0.0059 | 0.9915 | 0.9957 | 0.9936 | 0.9993 |
| No log | 42.0 | 168 | 0.0056 | 0.9957 | 0.9979 | 0.9968 | 0.9997 |
| No log | 43.0 | 172 | 0.0053 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 44.0 | 176 | 0.0050 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 45.0 | 180 | 0.0049 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 46.0 | 184 | 0.0049 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 47.0 | 188 | 0.0046 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 48.0 | 192 | 0.0043 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 49.0 | 196 | 0.0041 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 50.0 | 200 | 0.0041 | 0.9957 | 0.9979 | 0.9968 | 0.9997 |
| No log | 51.0 | 204 | 0.0041 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 52.0 | 208 | 0.0038 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 53.0 | 212 | 0.0036 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 54.0 | 216 | 0.0034 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 55.0 | 220 | 0.0033 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 56.0 | 224 | 0.0033 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 57.0 | 228 | 0.0033 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 58.0 | 232 | 0.0032 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 59.0 | 236 | 0.0031 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 60.0 | 240 | 0.0030 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 61.0 | 244 | 0.0029 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 62.0 | 248 | 0.0029 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 63.0 | 252 | 0.0028 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 64.0 | 256 | 0.0028 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 65.0 | 260 | 0.0027 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 66.0 | 264 | 0.0027 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 67.0 | 268 | 0.0026 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 68.0 | 272 | 0.0025 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 69.0 | 276 | 0.0025 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 70.0 | 280 | 0.0024 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 71.0 | 284 | 0.0024 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 72.0 | 288 | 0.0024 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 73.0 | 292 | 0.0024 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 74.0 | 296 | 0.0023 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 75.0 | 300 | 0.0023 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 76.0 | 304 | 0.0023 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 77.0 | 308 | 0.0023 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 78.0 | 312 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 79.0 | 316 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 80.0 | 320 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 81.0 | 324 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 82.0 | 328 | 0.0022 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 83.0 | 332 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 84.0 | 336 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 85.0 | 340 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 86.0 | 344 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 87.0 | 348 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 88.0 | 352 | 0.0021 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 89.0 | 356 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 90.0 | 360 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 91.0 | 364 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 92.0 | 368 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 93.0 | 372 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 94.0 | 376 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 95.0 | 380 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 96.0 | 384 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 97.0 | 388 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 98.0 | 392 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 99.0 | 396 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
| No log | 100.0 | 400 | 0.0020 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 3b5d9e51c124f9291c20e9836d1b9522 |
vumichien/AnimeGANv2_Shinkai | vumichien | null | 3 | 0 | null | 0 | null | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['AnimeGanv2'] | false | true | true | 682 | false | ## Model Description
Transforming photos of real-world scenes into anime style images is a meaningful and challenging task in terms of computer vision and artistic style transfer.
AnimeGANv2_Shinkai was made by Asher Chan.
The official code is available [here](https://github.com/TachibanaYoshino/AnimeGANv2).
## License
This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications.
Permission is granted to use AnimeGAN provided that you agree to the license terms.
For commercial use, please contact us via email to obtain an authorization letter.
| 83fa3f820c591b796f793f3e44ae3000 |
MultiBertGunjanPatrick/multiberts-seed-3-120k | MultiBertGunjanPatrick | bert | 7 | 4 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | ['bookcorpus', 'wikipedia'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert', 'multiberts', 'multiberts-seed-3'] | false | true | true | 6,483 | false | # MultiBERTs Seed 3 Checkpoint 120k (uncased)
Seed 3 intermediate checkpoint 120k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-3](https://hf.co/multiberts-seed-3). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-120k")  # full Hub id so the checkpoint resolves
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-3-120k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a small illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
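
A loose illustration of that 15% / 80-10-10 rule (a sketch only, not the original preprocessing code):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply a BERT-style masking rule to a list of wordpiece tokens."""
    out = list(tokens)
    for i in range(len(out)):
        if random.random() < mask_prob:   # 15% of tokens are selected
            r = random.random()
            if r < 0.8:                   # 80% of those: replace with [MASK]
                out[i] = "[MASK]"
            elif r < 0.9:                 # 10%: replace with a random token
                out[i] = random.choice(vocab)
            # remaining 10%: keep the original token
    return out
```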
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| 6b0be0896205bf993b3e015693e76432 |
agnesluhtaru/whisper-small-et-ERR2020 | agnesluhtaru | whisper | 30 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,856 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-et-ERR2020
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5321
- Wer: 22.8462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.3387 | 0.1 | 1000 | 0.5383 | 33.8216 |
| 0.1393 | 0.2 | 2000 | 0.4897 | 27.7546 |
| 0.0982 | 0.3 | 3000 | 0.5477 | 26.7815 |
| 0.0912 | 1.02 | 4000 | 0.5195 | 24.8816 |
| 0.0811 | 1.12 | 5000 | 0.5373 | 25.9282 |
| 0.0649 | 1.22 | 6000 | 0.5422 | 23.7285 |
| 0.0618 | 1.32 | 7000 | 0.5504 | 23.5179 |
| 0.0558 | 2.03 | 8000 | 0.5321 | 22.8462 |
| 0.0452 | 2.13 | 9000 | 0.5543 | 23.5813 |
| 0.0462 | 2.23 | 10000 | 0.5424 | 22.8830 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+rocm5.1.1
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| fa5a3dafa9322d7e27e3eb2b1a75fb04 |
spacy/hr_core_news_lg | spacy | null | 32 | 1 | spacy | 0 | token-classification | false | false | false | cc-by-sa-4.0 | ['hr'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['spacy', 'token-classification'] | false | true | true | 42,190 | false | ### Details: https://spacy.io/models/hr#hr_core_news_lg
Croatian pipeline optimized for CPU. Components: tok2vec, tagger, morphologizer, parser, lemmatizer (trainable_lemmatizer), senter, ner.
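
A minimal usage sketch (assuming the package has been installed, e.g. via `python -m spacy download hr_core_news_lg`):

```python
import spacy

# load the installed Croatian pipeline
nlp = spacy.load("hr_core_news_lg")

doc = nlp("Zagreb je glavni grad Hrvatske.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)
for ent in doc.ents:
    print(ent.text, ent.label_)
```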
| Feature | Description |
| --- | --- |
| **Name** | `hr_core_news_lg` |
| **Version** | `3.5.0` |
| **spaCy** | `>=3.5.0,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `attribute_ruler`, `ner` |
| **Components** | `tok2vec`, `tagger`, `morphologizer`, `parser`, `lemmatizer`, `senter`, `attribute_ruler`, `ner` |
| **Vectors** | floret (200000, 300) |
| **Sources** | [Training corpus hr500k 1.0](http://hdl.handle.net/11356/1183) (Ljubešić, Nikola ; Agić, Željko ; Klubička, Filip ; Batanović, Vuk and Erjavec, Tomaž)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (1518 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `Agcfpay`, `Agcfpdy`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmpny`, `Agcmsany`, `Agcmsay`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnpay`, `Agcnpdy`, `Agcnpgy`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsin`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpfsvy`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmsan`, `Agpmsann`, `Agpmsany`, `Agpmsay`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsln`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpmsvy`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsln`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpdy`, `Agsfpgy`, `Agsfpiy`, `Agsfply`, `Agsfpny`, `Agsfsay`, `Agsfsdy`, `Agsfsgy`, `Agsfsiy`, `Agsfsly`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmpiy`, `Agsmply`, `Agsmpny`, `Agsmsany`, `Agsmsayn`, `Agsmsayy`, `Agsmsdy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnply`, `Agsnpny`, `Agsnsay`, `Agsnsdy`, `Agsnsgy`, `Agsnsiy`, `Agsnsly`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpdy`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsany`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgn`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpdy`, `Appnpgy`, `Appnpiy`, `Appnply`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfpiy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsly`, `Aspfsny`, `Aspmpay`, `Aspmpgy`, `Aspmply`, `Aspmpny`, `Aspmsayn`, `Aspmsayy`, `Aspmsdn`, `Aspmsdy`, `Aspmsgn`, `Aspmsgy`, `Aspmsiy`, `Aspmsln`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnpgy`, `Aspnpny`, `Aspnsay`, `Aspnsgn`, `Aspnsgy`, `Aspnsln`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--g`, `Mlc--i`, `Mlc--l`, `Mlcf-a`, `Mlcf-d`, `Mlcf-g`, `Mlcf-n`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-g`, `Mlcm-l`, `Mlcm-n`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsg`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsd`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonpn`, `Mlonsa`, `Mlonsd`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-g`, `Mlsm-l`, `Mlsm-n`, `Mlsmpn`, `Mlsn-n`, `Mrc`, `Mro`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfpv`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncfsv`, `Ncmpa`, `Ncmpd`, `Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmpv`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, `Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Ncnsv`, `Npfpa`, `Npfpg`, `Npfpl`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpl`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npmsv`, `Npnpg`, `Npnpn`, 
`Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, `Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpi`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, `Pd-npa`, `Pd-npg`, `Pd-npi`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, `Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npi`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsd`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-i`, `Pi3m-n`, `Pi3n-a`, `Pi3n-d`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-sg`, `Pp1-si`, `Pp1-sl`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pl`, `Pp2-pn`, `Pp2-sa`, `Pp2-sd`, `Pp2-sg`, `Pp2-sl`, `Pp2-sn`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsi`, `Pp3nsn`, `Pq-fpa`, `Pq-fpn`, `Pq-fsa`, `Pq-fsi`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3m-d`, `Pq3m-n`, `Pq3n-a`, `Pq3n-l`, `Pq3n-n`, `Ps1fpa`, `Ps1fpg`, `Ps1fpl`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsi`, `Ps1fsl`, `Ps1fsn`, _(truncated: full list in pipeline meta)_ |
| **`morphologizer`** | `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Ins\|POS=ADP`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Degree=Pos\|POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=PART`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=CCONJ`, `Case=Gen\|POS=ADP`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PART\|Polarity=Neg`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|POS=ADP`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `NumType=Ord\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `NumType=Card\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Degree=Pos\|POS=ADV\|PronType=Int,Rel`, `Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, 
`Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=X`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=NOUN`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, 
`Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|POS=ADP`, `Degree=Sup\|POS=ADV`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `POS=ADV\|Tense=Pres\|VerbForm=Conv`, `Case=Ins\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, 
`Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `NumType=Mult\|POS=NUM`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=ADV\|Tense=Past\|VerbForm=Conv`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Degree=Pos\|POS=ADV\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Gen\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, 
`Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Mult\|POS=SYM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, 
`Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl:pv`, `fixed`, `flat`, `flat:foreign`, `goeswith`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`ner`** | `DERIV_PER`, `LOC`, `MISC`, `ORG`, `PER` |
</details>
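A minimal usage sketch, assuming a standard spaCy install; the package name below is a placeholder, so substitute the pipeline name given at the top of this card:

```python
import spacy

# Placeholder: replace with the actual pipeline package named at the top of this card.
nlp = spacy.load("xx_pipeline_placeholder")

doc = nlp("Sample text in the pipeline's language.")
for token in doc:
    # tagger / morphologizer / parser outputs use the label scheme listed above
    print(token.text, token.pos_, token.morph, token.dep_)

# ner outputs use the DERIV_PER / LOC / MISC / ORG / PER scheme
print([(ent.text, ent.label_) for ent in doc.ents])
```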
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.89 |
| `TOKEN_P` | 97.28 |
| `TOKEN_R` | 98.71 |
| `TOKEN_F` | 97.99 |
| `TAG_ACC` | 92.17 |
| `POS_ACC` | 97.59 |
| `MORPH_ACC` | 92.68 |
| `MORPH_MICRO_P` | 96.20 |
| `MORPH_MICRO_R` | 95.88 |
| `MORPH_MICRO_F` | 96.04 |
| `SENTS_P` | 95.59 |
| `SENTS_R` | 93.53 |
| `SENTS_F` | 94.55 |
| `DEP_UAS` | 86.62 |
| `DEP_LAS` | 80.16 |
| `LEMMA_ACC` | 92.88 |
| `ENTS_P` | 83.09 |
| `ENTS_R` | 82.95 |
| `ENTS_F` | 83.02 | | de5277e30a8951a68b754ab4a207e567 |
madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1 | madlag | bert | 93 | 8 | transformers | 0 | question-answering | true | true | false | mit | ['en'] | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering'] | false | true | true | 3,809 | false |
## BERT-base uncased model fine-tuned on SQuAD v1
This model was created using the [nn_pruning](https://github.com/huggingface/nn_pruning) python library: the **linear layers contain 8.0%** of the original weights.
The model contains **28.0%** of the original weights **overall** (the embeddings account for a significant part of the model, and they are not pruned by this method).
With a simple resizing of the linear matrices it ran **1.16x as fast as bert-base-uncased** on the evaluation.
This is possible because the pruning method leads to structured matrices: to visualize them, hover below on the plot to see the non-zero/zero parts of each matrix.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1/raw/main/model_card/density_info.js" id="c60d09ec-81ff-4d6f-b616-c3ef09b2175d"></script></div>
In terms of accuracy, its **F1 is 88.11**, compared with 88.5 for bert-base-uncased, an **F1 drop of 0.39**.
## Fine-Pruning details
This model was fine-tuned from the HuggingFace [model](https://huggingface.co/bert-base-uncased) checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer), and distilled from the model [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad)
This model is case-insensitive: it does not make a difference between english and English.
A side-effect of the block pruning is that some of the attention heads are completely removed: 22 heads were removed out of a total of 144 (15.3%).
Here is a detailed view on how the remaining heads are distributed in the network after pruning.
<div class="graph"><script src="/madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1/raw/main/model_card/pruning_info.js" id="55528c8b-d5f5-46a5-a35a-dad93725f7e5"></script></div>
## Details of the SQuAD1.1 dataset
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 90.6K |
| SQuAD1.1 | eval | 11.1k |
### Fine-tuning
- Python: `3.8.5`
- Machine specs:
```CPU: Intel(R) Core(TM) i7-6700K CPU
Memory: 64 GiB
GPUs: 1 GeForce GTX 3090, with 24GiB memory
GPU driver: 455.23.05, CUDA: 11.1
```
### Results
**Pytorch model file size**: `398MB` (original BERT: `420MB`)
| Metric | # Value | # Original ([Table 2](https://www.aclweb.org/anthology/N19-1423.pdf))| Variation |
| ------ | --------- | --------- | --------- |
| **EM** | **80.94** | **80.8** | **+0.14**|
| **F1** | **88.11** | **88.5** | **-0.39**|
## Example Usage
Install nn_pruning: it contains the optimization script, which simply packs the linear layers into smaller ones by removing empty rows/columns.
`pip install nn_pruning`
Then you can use the `transformers` library almost as usual: you just have to call `optimize_model` when the pipeline has loaded.
```python
from transformers import pipeline
from nn_pruning.inference_model_patcher import optimize_model
qa_pipeline = pipeline(
"question-answering",
model="madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1",
tokenizer="madlag/bert-base-uncased-squadv1-x1.16-f88.1-d8-unstruct-v1"
)
print("bert-base-uncased parameters: 152.0M")
print(f"Parameters count (includes only head pruning, not feed forward pruning)={int(qa_pipeline.model.num_parameters() / 1E6)}M")
qa_pipeline.model = optimize_model(qa_pipeline.model, "dense")
print(f"Parameters count after complete optimization={int(qa_pipeline.model.num_parameters() / 1E6)}M")
predictions = qa_pipeline({
'context': "Frédéric François Chopin, born Fryderyk Franciszek Chopin (1 March 1810 – 17 October 1849), was a Polish composer and virtuoso pianist of the Romantic era who wrote primarily for solo piano.",
'question': "Who is Frederic Chopin?",
})
print("Predictions", predictions)
``` | fbfa305913d05390e9d355722f4b00eb |
harveyagraphcore/roberta-base-finetuned-squad | harveyagraphcore | roberta | 12 | 3 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,103 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
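A minimal usage sketch, assuming the standard `transformers` question-answering pipeline works for this checkpoint:

```python
from transformers import pipeline

# Sketch: load the fine-tuned checkpoint with the generic QA pipeline.
qa = pipeline("question-answering", model="harveyagraphcore/roberta-base-finetuned-squad")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of roberta-base on the SQuAD dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```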
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 3
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
| e3b84a4e2bd5bf0191ebb290f9699898 |
dhlanm/distilbert-base-uncased-finetune | dhlanm | distilbert | 14 | 11 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,519 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1315
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9715
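A minimal usage sketch, assuming the standard `transformers` token-classification pipeline; note that the label set is not documented above:

```python
from transformers import pipeline

# Sketch: tag a sentence with the fine-tuned token classifier.
ner = pipeline(
    "token-classification",
    model="dhlanm/distilbert-base-uncased-finetune",
    aggregation_strategy="simple",
)
print(ner("An example sentence to tag."))
```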
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 48 | 0.1349 | 0.0 | 0.0 | 0.0 | 0.9715 |
| No log | 2.0 | 96 | 0.1318 | 0.0 | 0.0 | 0.0 | 0.9715 |
| No log | 3.0 | 144 | 0.1315 | 0.0 | 0.0 | 0.0 | 0.9715 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1d98f8808752ff699e31ec0f5650b921 |
ML-Projects-Kiel/tweetyface | ML-Projects-Kiel | gpt2 | 9 | 3 | transformers | 2 | text-generation | true | false | false | apache-2.0 | ['en'] | ['ML-Projects-Kiel/tweetyface'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['gpt2'] | false | true | true | 2,807 | false |
# Tweety Face
Finetuned language model based on [GPT-2](https://huggingface.co/gpt2) to generate Tweets in a user's style.
## Model description
Tweety Face is a transformer model finetuned using GPT-2 and Tweets from various Twitter users. It was created to
generate a Twitter Tweet for a given user in their specific writing style. It accepts a prompt for a user
and completes the text.
This finetuned model uses the **smallest** version of GPT-2, with 124M parameters.
## Intended uses & limitations
This model was created to experiment with prompt inputs and is not intended to create real Tweets. The generated text
is not a real representation of the given user's opinion, political affiliation, behaviour, etc. Do not use this model
to impersonate a user.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='ML-Projects-Kiel/tweetyface')
>>> set_seed(42)
>>> generator("User: elonmusk\nTweet: Twitter is", max_length=30, num_return_sequences=5)
[{'generated_text': 'User: elonmusk\nTweet: Twitter is more active than ever. Even though you can’t see your entire phone list, your'},
{'generated_text': 'User: elonmusk\nTweet: Twitter is just in a few hours until an announcement which has been approved by President. This should be a'},
{'generated_text': 'User: elonmusk\nTweet: Twitter is currently down to a minimum of 13 tweets per day, a decline that was significantly worse than Twitter'},
{'generated_text': 'User: elonmusk\nTweet: Twitter is a great investment to us. Will go above his legal fees to join Twitter in many countries,'},
{'generated_text': 'User: elonmusk\nTweet: Twitter is not doing something like this – they are not using Twitter to give out their content – other than'}]
```
## Training data
The training data used for this model has been released as a dataset one can browse [here](https://huggingface.co/ML-Projects-Kiel/tweetyface).
The raw data can be found in our [Github Repository](https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers), in two versions: all data on the develop branch is used in a [debugging dataset](https://huggingface.co/datasets/ML-Projects-Kiel/tweetyface_debug), and all data in the qa branch is used in the final dataset.
## Training procedure
### Preprocessing
For training, all retweets (RT) were first removed. Next, the newline characters "\n" were replaced by white
spaces and all URLs were replaced with the word URL.
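A rough sketch of that cleanup; the exact rules and regular expressions are not given here, so the patterns below are assumptions:

```python
import re

def clean_tweet(text: str) -> str:
    # Assumed retweet check on the conventional "RT" prefix; such tweets are dropped.
    if text.startswith("RT"):
        return ""
    # Replace newlines with white spaces and URLs with the literal word "URL".
    text = text.replace("\n", " ")
    text = re.sub(r"https?://\S+", "URL", text)
    return text

print(clean_tweet("Check this out\nhttps://example.com"))  # "Check this out URL"
```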
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters).
| bc60ad0d1a49dd24865a33a56608ad42 |
kpriyanshu256/whisper-small-as-500-64-1e-05-bn | kpriyanshu256 | whisper | 15 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['as'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,442 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-small-Assamese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3386
- Wer: 61.7578
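A minimal usage sketch, assuming the standard `transformers` speech-recognition pipeline; the audio file path is a placeholder:

```python
from transformers import pipeline

# Sketch: transcribe an Assamese audio file with the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="kpriyanshu256/whisper-small-as-500-64-1e-05-bn",
    chunk_length_s=30,
)
print(asr("sample_assamese.wav")["text"])  # placeholder file path
```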
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3132 | 0.3 | 150 | 1.4029 | 161.4149 |
| 0.1888 | 1.08 | 300 | 1.3000 | 61.7217 |
| 0.1358 | 1.38 | 450 | 1.3386 | 61.7578 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 19232b2057c8ccf82c2a89f7e924bde9 |
allenai/aspire-contextualsentence-multim-biomed | allenai | bert | 7 | 5 | transformers | 0 | feature-extraction | true | false | false | apache-2.0 | ['en'] | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 5,925 | false |
## Overview
Model included in a paper for modeling fine grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `tsAspire` and represents the paper's proposed multi-vector model for fine-grained scientific document similarity.
## Model Card
### Model description
This model is a BERT-based multi-vector model trained for fine-grained similarity of biomedical papers. The model takes the title and abstract of a paper as input and represents the paper with contextual sentence vectors obtained by averaging the token representations of individual sentences - the whole title and abstract are encoded with cross-attention in the encoder block before obtaining the sentence embeddings. The model is trained by minimizing a Wasserstein/Earth Mover's Distance between the sentence vectors of a pair of documents - in the process also learning a sparse alignment between sentences in both documents. At test time, documents are ranked by the Wasserstein Distance between all sentences of the documents, or between a set of query sentences and a candidate document's sentences.
### Training data
The model is trained on pairs of co-cited papers, with their sentences aligned by the co-citation context, in a contrastive learning setup. The model is trained on 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers. For example, the papers cited in parentheses below are all co-cited, and each pair of them would be used as a training pair:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam Optimizer and a learning rate of 2e-5 with 1000 warm-up steps followed by linear decay of the learning rate. The model training convergence is checked with the loss on a held out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for fine-grained document similarity tasks in **biomedical** scientific text using multiple vectors per document. The model allows _multiple_ fine-grained sentence-to-sentence similarities between documents. The model is well suited to an aspect-conditional task formulation where a query might consist of sentence_s_ in a query document and candidates must be retrieved along the specified sentences. Here, the documents are the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as document or sentence level classification. Since the training data comes primarily from the biomedical domain, performance on other domains may be poorer.
### How to use
This model can be used via the `transformers` library, and some additional code to compute contextual sentence vectors and to make multiple matches using optimal transport.
View example usage and sample document matches in the model github repo: [`examples/demo-contextualsentence-multim.ipynb`](https://github.com/allenai/aspire/blob/main/examples/demo-contextualsentence-multim.ipynb)
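The linked notebook is the reference implementation. A rough, simplified sketch of the idea (encode the title and abstract together, then average token vectors per sentence) is shown below; the character-offset bookkeeping is an assumption, not the exact code used in the paper:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/aspire-contextualsentence-multim-biomed")
model = AutoModel.from_pretrained("allenai/aspire-contextualsentence-multim-biomed")
model.eval()

title = "Example paper title"
abstract_sents = ["First abstract sentence.", "Second abstract sentence."]
sents = [title] + abstract_sents
text = " ".join(sents)

enc = tokenizer(text, return_tensors="pt", return_offsets_mapping=True, truncation=True)
offsets = enc.pop("offset_mapping")[0]

with torch.no_grad():
    token_reps = model(**enc).last_hidden_state[0]  # (seq_len, hidden)

# Average the contextual token vectors that fall inside each sentence's character span.
sent_vecs, cursor = [], 0
for sent in sents:
    start, end = cursor, cursor + len(sent)
    mask = torch.tensor([s >= start and e <= end and e > s for s, e in offsets.tolist()])
    sent_vecs.append(token_reps[mask].mean(dim=0))
    cursor = end + 1  # skip the joining space

sent_vecs = torch.stack(sent_vecs)  # one contextual vector per sentence
```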
### Variable and metrics
This model is evaluated on information retrieval datasets with document-level queries. Here we report performance on RELISH (biomedical/English) and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). These datasets represent an abstract-level retrieval task, where, given a query scientific abstract, the task requires the retrieval of relevant candidate abstracts. In using this model we rank documents by the Wasserstein distance between the query sentences and a candidate's sentences.
### Evaluation results
The released model `aspire-contextualsentence-multim-biomed` is compared against `allenai/specter`. `aspire-contextualsentence-multim-biomed`<sup>*</sup> is the performance reported in our paper by averaging over 3 re-runs of the model. The released model `aspire-contextualsentence-multim-biomed` is the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `specter` | 28.24 | 59.28 | 60.62 | 77.20 |
| `aspire-contextualsentence-multim-biomed`<sup>*</sup> | 30.92 | 62.23 | 62.57 | 78.95 |
| `aspire-contextualsentence-multim-biomed` | 31.25 | 62.99 | 62.24 | 78.65 |
**Alternative models:**
Besides the above models consider these alternative models also released in the Aspire paper:
[`aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci): If you wanted to run on computer science papers and want to use a model trained to match _multiple_ sentences between documents.
[`aspire-contextualsentence-singlem-biomed`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-biomed): If you wanted to run on biomedical papers and want to use a model trained to match _single_ sentences between documents.
[`aspire-contextualsentence-singlem-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-compsci): If you wanted to run on computer science papers and want to use a model trained to match _single_ sentences between documents. | 81176edaf8c7be8bfc44cee216b496d1 |
phailyoor/distilbert-base-uncased-finetuned-yahd-twval-hptune | phailyoor | distilbert | 96 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,495 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-yahd-twval-hptune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3727
- Accuracy: 0.2039
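A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the meaning of the predicted labels is not documented above:

```python
from transformers import pipeline

# Sketch: classify a piece of text with the fine-tuned checkpoint.
clf = pipeline(
    "text-classification",
    model="phailyoor/distilbert-base-uncased-finetuned-yahd-twval-hptune",
)
print(clf("An example input sentence."))  # [{'label': ..., 'score': ...}]
```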
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.1638 | 1.0 | 10106 | 2.1944 | 0.3646 |
| 1.7982 | 2.0 | 20212 | 2.6390 | 0.3333 |
| 1.3279 | 3.0 | 30318 | 3.1526 | 0.3095 |
| 0.8637 | 4.0 | 40424 | 4.8368 | 0.2470 |
| 0.5727 | 5.0 | 50530 | 6.3727 | 0.2039 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
| 22c544e3255e5032375efee97c615a83 |
banjtheman/distilbert-base-uncased-helpful-amazon | banjtheman | distilbert | 13 | 7 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 976 | false |
## Overview
This model was trained with data from https://registry.opendata.aws/helpful-sentences-from-reviews/ to predict how "helpful" a review is.
The model was fine-tuned from the `distilbert-base-uncased` model
### Labels
LABEL_0 - Not helpful
LABEL_1 - Helpful
### How to use
The following code shows how to make a prediction with this model
```python
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
TextClassificationPipeline,
)
tokenizer = AutoTokenizer.from_pretrained("banjtheman/distilbert-base-uncased-helpful-amazon")
model = AutoModelForSequenceClassification.from_pretrained(
"banjtheman/distilbert-base-uncased-helpful-amazon"
)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer)
result = pipe("This was a Christmas gift for my grandson.")
print(result)
#[{'label': 'LABEL_0', 'score': 0.998775064945221}]
# This is NOT A HELPFUL comment
``` | fb18e55110327e677323d7d1040fae08 |
vinayak361/token_fine_tunned_flipkart_2_gl | vinayak361 | distilbert | 12 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,004 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token_fine_tunned_flipkart_2_gl
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0275
- Precision: 0.9888
- Recall: 0.9900
- F1: 0.9894
- Accuracy: 0.9924
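A minimal usage sketch, assuming the standard `transformers` token-classification classes; the tag set itself is not documented above:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vinayak361/token_fine_tunned_flipkart_2_gl")
model = AutoModelForTokenClassification.from_pretrained("vinayak361/token_fine_tunned_flipkart_2_gl")

inputs = tokenizer("An example product query.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class id of each token back to its label name.
predicted_ids = logits.argmax(dim=-1)[0]
print([model.config.id2label[i.item()] for i in predicted_ids])
```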
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 136 | 0.1945 | 0.9023 | 0.9338 | 0.9178 | 0.9331 |
| No log | 2.0 | 272 | 0.1232 | 0.9469 | 0.9572 | 0.9520 | 0.9658 |
| No log | 3.0 | 408 | 0.0852 | 0.9595 | 0.9688 | 0.9641 | 0.9747 |
| 0.2214 | 4.0 | 544 | 0.0603 | 0.9723 | 0.9760 | 0.9741 | 0.9831 |
| 0.2214 | 5.0 | 680 | 0.0455 | 0.9770 | 0.9819 | 0.9794 | 0.9865 |
| 0.2214 | 6.0 | 816 | 0.0357 | 0.9823 | 0.9863 | 0.9843 | 0.9887 |
| 0.2214 | 7.0 | 952 | 0.0307 | 0.9869 | 0.9894 | 0.9882 | 0.9916 |
| 0.0938 | 8.0 | 1088 | 0.0275 | 0.9888 | 0.9900 | 0.9894 | 0.9924 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu102
- Datasets 2.2.2
- Tokenizers 0.12.1
| c17a1ee3f5136c37e741bbf0e9888150 |
aXhyra/emotion_trained_42 | aXhyra | distilbert | 10 | 6 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['tweet_eval'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,391 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9012
- F1: 0.7361
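A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the tweet_eval emotion subset has four classes (anger, joy, optimism, sadness), but the label mapping of this checkpoint should be verified in its config:

```python
from transformers import pipeline

# Sketch: score the emotion of a tweet with the fine-tuned checkpoint.
clf = pipeline("text-classification", model="aXhyra/emotion_trained_42")
print(clf("I can't believe how great this day turned out!"))
```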
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6131 | 0.6955 |
| No log | 2.0 | 408 | 0.5816 | 0.7297 |
| 0.5148 | 3.0 | 612 | 0.8942 | 0.7199 |
| 0.5148 | 4.0 | 816 | 0.9012 | 0.7361 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| 6a2dc39ee0a164ecdd799bf20ff23a87 |
tensorspeech/tts-tacotron2-thorsten-ger | tensorspeech | null | 5 | 0 | tensorflowtts | 0 | text-to-speech | false | false | false | apache-2.0 | ['german'] | ['Thorsten'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel'] | false | true | true | 2,696 | false |
# Tacotron 2 with Guided Attention trained on Thorsten (Ger)
This repository provides a pretrained [Tacotron2](https://arxiv.org/abs/1712.05884) model trained with [Guided Attention](https://arxiv.org/abs/1710.08969) on the Thorsten dataset (Ger). For details of the model, we encourage you to read more about
[TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS).
## Install TensorFlowTTS
First of all, please install TensorFlowTTS with the following command:
```
pip install TensorFlowTTS
```
### Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger")
tacotron2 = TFAutoModel.from_pretrained("tensorspeech/tts-tacotron2-thorsten-ger")
text = "Möchtest du das meiner Frau erklären? Nein? Ich auch nicht."
input_ids = processor.text_to_sequence(text)
decoder_output, mel_outputs, stop_token_prediction, alignment_history = tacotron2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
input_lengths=tf.convert_to_tensor([len(input_ids)], tf.int32),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
)
```
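The snippet above stops at the mel spectrogram. To obtain a waveform, a neural vocoder from the TensorFlowTTS collection can be applied to the mel output; the vocoder checkpoint name below is an assumption, so check the TensorSpeech hub listing for the exact id:

```python
# Assumed vocoder checkpoint name and 22.05 kHz sampling rate - verify both on the hub page.
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-thorsten-ger")
audio = mb_melgan.inference(mel_outputs)[0, :, 0]
sf.write("./audio.wav", audio, 22050, "PCM_16")
```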
#### Referencing Tacotron 2
```
@article{DBLP:journals/corr/abs-1712-05884,
author = {Jonathan Shen and
Ruoming Pang and
Ron J. Weiss and
Mike Schuster and
Navdeep Jaitly and
Zongheng Yang and
Zhifeng Chen and
Yu Zhang and
Yuxuan Wang and
R. J. Skerry{-}Ryan and
Rif A. Saurous and
Yannis Agiomyrgiannakis and
Yonghui Wu},
title = {Natural {TTS} Synthesis by Conditioning WaveNet on Mel Spectrogram
Predictions},
journal = {CoRR},
volume = {abs/1712.05884},
year = {2017},
url = {http://arxiv.org/abs/1712.05884},
archivePrefix = {arXiv},
eprint = {1712.05884},
timestamp = {Thu, 28 Nov 2019 08:59:52 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1712-05884.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
#### Referencing TensorFlowTTS
```
@misc{TFTTS,
author = {Minh Nguyen, Alejandro Miguel Velasquez, Erogol, Kuan Chen, Dawid Kobus, Takuya Ebata,
Trinh Le and Yunchao He},
title = {TensorflowTTS},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/TensorSpeech/TensorFlowTTS}},
}
``` | b6173cec6b5752cad4d9ef5e1ac232c2 |
caffsean/t5-small-finetuned-keyword-to-text-generation | caffsean | t5 | 20 | 1 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,360 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-keyword-to-text-generation
This model is a fine-tuned version of [caffsean/t5-small-finetuned-keyword-to-text-generation](https://huggingface.co/caffsean/t5-small-finetuned-keyword-to-text-generation) on an unknown dataset.
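A minimal usage sketch, assuming the standard `transformers` text2text-generation pipeline; the exact input format the model expects (for example, a task prefix or keyword separator) is not documented above, so the plain comma-separated keywords are an assumption:

```python
from transformers import pipeline

# Sketch: generate text from a keyword list with the fine-tuned T5 checkpoint.
generator = pipeline(
    "text2text-generation",
    model="caffsean/t5-small-finetuned-keyword-to-text-generation",
)
print(generator("mountains, sunrise, hiking", max_length=64))
```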
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 188 | 3.8742 | 0.5567 | 0.0851 | 0.4968 | 0.4972 | 16.243 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 6ffa127a5d381ea6ca75c11fd23b0d73 |
EleutherAI/enformer-preview | EleutherAI | enformer | 4 | 1 | transformers | 2 | null | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,462 | false |
# Enformer
Enformer model. It was introduced in the paper [Effective gene expression prediction from sequence by integrating long-range interactions.](https://www.nature.com/articles/s41592-021-01252-x) by Avsec et al. and first released in [this repository](https://github.com/deepmind/deepmind-research/tree/master/enformer).
This particular model was trained on sequences of 131,072 basepairs (target length 896) on v3-64 TPUs for 2 and a half days, without augmentations and with a Poisson loss.
This repo contains the weights of the PyTorch implementation by Phil Wang as seen in the [enformer-pytorch repository](https://github.com/lucidrains/enformer-pytorch).
Disclaimer: The team releasing Enformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Enformer is a neural network architecture based on the Transformer that led to greatly increased accuracy in predicting gene expression from DNA sequence.
We refer to the [paper](https://www.nature.com/articles/s41592-021-01252-x) published in Nature for details.
### How to use
Refer to the README of [enformer-pytorch](https://github.com/lucidrains/enformer-pytorch) regarding usage.
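A minimal sketch following the pattern in the enformer-pytorch README; the input length matches the 131,072 bp mentioned above and should be adjusted if the checkpoint expects otherwise:

```python
import torch
from enformer_pytorch import Enformer  # pip install enformer-pytorch

model = Enformer.from_pretrained("EleutherAI/enformer-preview")
model.eval()

# Dummy integer-encoded DNA sequence (0..3 = ACGT, 4 = N); length assumed to be 131,072.
seq = torch.randint(0, 5, (1, 131_072))
with torch.no_grad():
    output = model(seq)

print(output["human"].shape)  # expected (1, 896, 5313) given the target length of 896
```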
### Citation info
```
Avsec, Ž., Agarwal, V., Visentin, D. et al. Effective gene expression prediction from sequence by integrating long-range interactions. Nat Methods 18, 1196–1203 (2021). https://doi.org/10.1038/s41592-021-01252-x
``` | 90d1ec271241cda95cc21ee75b9d9c01 |
anas-awadalla/bert-base-uncased-prefix-tuning-squad | anas-awadalla | null | 21 | 0 | null | 0 | null | false | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,039 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-prefix-tuning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| 2af3c5432edcdb93681c0177ba04b3c7 |
IMSyPP/hate_speech_slo | IMSyPP | bert | 7 | 9 | transformers | 0 | text-classification | true | false | false | mit | ['sl'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 670 | false |
# Hate Speech Classifier for Social Media Content in Slovenian Language
A monolingual model for hate speech classification of social media content in the Slovenian language. The model was trained on 50,000 Twitter comments and tested on an independent test set of 10,000 Twitter comments. It is based on the multilingual CroSloEngual BERT pre-trained language model.
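A minimal usage sketch, assuming the standard `transformers` text-classification pipeline:

```python
from transformers import pipeline

# Sketch: classify a Slovenian comment into one of the four classes listed below.
clf = pipeline(
    "text-classification",
    model="IMSyPP/hate_speech_slo",
    tokenizer="IMSyPP/hate_speech_slo",
)
print(clf("Primer komentarja za razvrstitev."))
```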
## Tokenizer
During training the text was preprocessed using the original CroSloEngual BERT tokenizer. We suggest using the same tokenizer for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | e3940c3f18e0f462c3faacdd21df9255 |
theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e43 | theojolliffe | bart | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 5,912 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed-v3-e43
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0837
- Rouge1: 58.1526
- Rouge2: 46.0425
- Rougel: 49.5624
- Rougelsum: 56.9295
- Gen Len: 142.0
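A minimal usage sketch, assuming the standard `transformers` summarization pipeline:

```python
from transformers import pipeline

# Sketch: summarize a document with the fine-tuned BART checkpoint.
summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-cnn-pubmed-arxiv-pubmed-v3-e43",
)
article = "Replace this placeholder with the document you want to summarize."
print(summarizer(article, max_length=142, min_length=20)[0]["summary_text"])
```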
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 43
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1.2542 | 1.0 | 795 | 0.9354 | 51.4655 | 31.6464 | 34.2376 | 48.9765 | 141.963 |
| 0.7019 | 2.0 | 1590 | 0.8119 | 53.3066 | 34.683 | 36.4262 | 50.907 | 142.0 |
| 0.5251 | 3.0 | 2385 | 0.7839 | 52.4248 | 32.8685 | 36.0084 | 49.9957 | 142.0 |
| 0.3449 | 4.0 | 3180 | 0.7673 | 52.716 | 34.7869 | 38.4201 | 50.8384 | 142.0 |
| 0.2666 | 5.0 | 3975 | 0.7647 | 54.6433 | 37.1337 | 40.1459 | 52.4288 | 141.7778 |
| 0.1805 | 6.0 | 4770 | 0.8400 | 53.5747 | 36.001 | 39.5984 | 51.1935 | 141.8148 |
| 0.1413 | 7.0 | 5565 | 0.7925 | 53.9875 | 37.01 | 40.6532 | 51.9353 | 142.0 |
| 0.113 | 8.0 | 6360 | 0.7665 | 56.395 | 41.5764 | 44.327 | 54.7845 | 142.0 |
| 0.0907 | 9.0 | 7155 | 0.8442 | 55.1407 | 39.4113 | 43.0628 | 53.6503 | 142.0 |
| 0.0824 | 10.0 | 7950 | 0.8469 | 55.7103 | 40.6761 | 43.3754 | 53.8227 | 142.0 |
| 0.0639 | 11.0 | 8745 | 0.8892 | 56.0839 | 40.6204 | 43.2455 | 54.4412 | 142.0 |
| 0.0504 | 12.0 | 9540 | 0.8613 | 56.9634 | 42.8236 | 45.4255 | 55.4026 | 142.0 |
| 0.0447 | 13.0 | 10335 | 0.9341 | 57.7216 | 44.104 | 47.1429 | 56.4299 | 142.0 |
| 0.0396 | 14.0 | 11130 | 0.9203 | 56.2073 | 42.9575 | 45.8068 | 54.8089 | 142.0 |
| 0.036 | 15.0 | 11925 | 0.9253 | 58.5212 | 45.6047 | 49.1205 | 57.0551 | 142.0 |
| 0.0302 | 16.0 | 12720 | 0.9187 | 58.8046 | 46.0106 | 48.0442 | 57.2799 | 142.0 |
| 0.0261 | 17.0 | 13515 | 0.9578 | 57.3405 | 43.8227 | 46.6317 | 55.7836 | 142.0 |
| 0.0231 | 18.0 | 14310 | 0.9578 | 57.7604 | 44.6164 | 47.8902 | 56.2309 | 141.8148 |
| 0.0198 | 19.0 | 15105 | 0.9662 | 57.774 | 44.6407 | 47.5489 | 56.1936 | 142.0 |
| 0.0165 | 20.0 | 15900 | 0.9509 | 59.6297 | 46.5076 | 48.3507 | 58.083 | 142.0 |
| 0.0145 | 21.0 | 16695 | 0.9915 | 58.2245 | 45.1804 | 48.1191 | 56.889 | 142.0 |
| 0.0128 | 22.0 | 17490 | 0.9945 | 58.2646 | 46.2782 | 49.4411 | 56.992 | 142.0 |
| 0.0129 | 23.0 | 18285 | 1.0069 | 57.0055 | 44.1866 | 46.9101 | 55.5056 | 141.9444 |
| 0.0116 | 24.0 | 19080 | 0.9967 | 58.1091 | 45.5303 | 48.2208 | 56.4496 | 142.0 |
| 0.0093 | 25.0 | 19875 | 1.0188 | 56.59 | 43.677 | 45.8956 | 55.0954 | 142.0 |
| 0.008 | 26.0 | 20670 | 0.9976 | 58.5408 | 46.7019 | 48.9235 | 57.2562 | 142.0 |
| 0.0077 | 27.0 | 21465 | 1.0123 | 57.7909 | 45.7619 | 48.3412 | 56.3796 | 142.0 |
| 0.0075 | 28.0 | 22260 | 1.0258 | 58.1694 | 45.03 | 48.282 | 56.7303 | 142.0 |
| 0.0056 | 29.0 | 23055 | 1.0100 | 58.0406 | 45.37 | 48.0125 | 56.5288 | 142.0 |
| 0.0049 | 30.0 | 23850 | 1.0235 | 56.419 | 43.248 | 46.3448 | 54.8467 | 142.0 |
| 0.0042 | 31.0 | 24645 | 1.0395 | 57.7232 | 45.6305 | 48.4531 | 56.3343 | 141.9444 |
| 0.0034 | 32.0 | 25440 | 1.0605 | 58.9049 | 46.8049 | 49.9103 | 57.6751 | 141.5 |
| 0.0032 | 33.0 | 26235 | 1.0362 | 57.8681 | 45.9028 | 48.8624 | 56.5616 | 141.8704 |
| 0.0025 | 34.0 | 27030 | 1.0521 | 58.8985 | 46.8547 | 49.8485 | 57.4249 | 142.0 |
| 0.0021 | 35.0 | 27825 | 1.0639 | 58.9324 | 46.656 | 49.1907 | 57.4836 | 142.0 |
| 0.0023 | 36.0 | 28620 | 1.0624 | 58.5734 | 46.6774 | 49.6377 | 57.3825 | 142.0 |
| 0.0019 | 37.0 | 29415 | 1.0636 | 58.9899 | 46.8217 | 49.4829 | 57.8683 | 142.0 |
| 0.0018 | 38.0 | 30210 | 1.0640 | 58.793 | 46.7964 | 49.7845 | 57.6379 | 142.0 |
| 0.0013 | 39.0 | 31005 | 1.0692 | 57.7124 | 45.5948 | 49.0482 | 56.4246 | 142.0 |
| 0.0012 | 40.0 | 31800 | 1.0746 | 58.1789 | 46.458 | 49.547 | 57.1007 | 141.6296 |
| 0.0008 | 41.0 | 32595 | 1.0815 | 57.7392 | 45.6404 | 48.4845 | 56.6464 | 142.0 |
| 0.0009 | 42.0 | 33390 | 1.0853 | 58.317 | 46.2661 | 49.0466 | 57.0971 | 142.0 |
| 0.0005 | 43.0 | 34185 | 1.0837 | 58.1526 | 46.0425 | 49.5624 | 56.9295 | 142.0 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 41638cb8f265b4d6f9864389d6cde889 |
dpkmnit/bert-finetuned-squad | dpkmnit | bert | 8 | 4 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,289 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpkmnit/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7048
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
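As a SQuAD-style extractive QA fine-tune trained with Keras, the checkpoint can presumably be queried through the question-answering pipeline; the snippet below is an illustrative sketch (the question and context are made up), not part of the original card.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="dpkmnit/bert-finetuned-squad",
    framework="tf",  # the checkpoint was trained with Keras/TensorFlow
)

result = qa(
    question="Which library was used to fine-tune the model?",
    context="The model was fine-tuned with the Hugging Face Transformers library using the Keras trainer.",
)
print(result["answer"], result["score"])
```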
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 66549, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2092 | 0 |
| 0.7048 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.1
- Datasets 2.7.0
- Tokenizers 0.13.2
| 8bd1bfffa4c6da7498ffbec872d4fcc4 |
mayrajeo/maskrcnn-deadwood | mayrajeo | null | 10 | 0 | null | 0 | null | false | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['UAV', 'Deadwood', 'detectron2'] | false | true | true | 5,696 | false |
## Model description
These models were trained for detecting both standing and fallen deadwood from UAV RGB images. All model configurations and weights here are fine-tuned from models available in the [Detectron2 Model Zoo](https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md).
The models are trained on 512x512 px RGB image patches with a spatial resolution between 3.9 cm and 4.3 cm, using hand-annotated deadwood data based on visual inspection. The training data were collected in the vicinity of Hiidenportti National Park, Sotkamo, Finland, and the images were acquired during the leaf-on season, on 16.–17.7.2019. The models are therefore most likely best suited to imagery from the leaf-on season with a similar ground sampling distance.
An example app is running at [https://huggingface.co/spaces/mayrajeo/maskrcnn-deadwood](https://huggingface.co/spaces/mayrajeo/maskrcnn-deadwood), which uses the R101 backbone without TTA.
## Training data
These models were trained on expert-annotated deadwood data acquired during the leaf-on season, on 16.–17.7.2019, from Hiidenportti, Sotkamo, Eastern Finland. The ground resolution of the data varied between 3.9 and 4.4 cm. In addition, the models were tested with data collected from Evo, Hämeenlinna, Southern Finland, acquired on 11.7.2018. The data from Evo were used only for testing the models.
## Results
Patch-level data are non-overlapping 512x512 pixel tiles extracted from larger virtual plots. The results presented here are from the run with test-time augmentation.
Scene-level data are the full virtual plots extracted from the full images. For Hiidenportti, the virtual plot sizes vary between 2560x2560 px and 8192x4864 px. These patches also contain non-annotated buffer areas in order to cover the complete annotated area. For Sudenpesänkangas, all 71 scenes are 100x100 meters (2063x2062 pixels), and during inference they are extracted from the full mosaic with enough buffer to cover the full area. The results presented here were run on 512x512 pixel tiles with 256 px overlap, using both the edge filtering and mask merging described in the workflow.
### Hiidenportti test set
Hiidenportti test set contains 241 non-overlapping 512x512 pixel image patches, extracted from 5 scenes that cover 11 circular field plots.
|Model|Patch AP50|Patch AP|Patch AP-groundwood|Patch AP-uprightwood|Scene AP50|Scene AP|Scene AP-groundwood|Scene AP-uprightwood|
|----|-----------|--------|-------------------|--------------------|-----------|--------|-------------------|--------------------|
|mask_rcnn_R_50_FPN_3x|0.654|0.339|0.316|0.363|0.640|0.315|0.235|0.396|
|mask_rcnn_R_101_FPN_3x|**0.704**|**0.366**|0.326|**0.406**|**0.683**|**0.341**|0.246|**0.436**|
|mask_rcnn_X_101_32x8d_FPN_3x|0.679|0.355|**0.333**|0.377|0.661|0.333|**0.255**|0.412|
|cascade_mask_rcnn_R_50_FPN_3x|0.652|0.345|0.306|0.384|0.623|0.317|0.223|0.411|
### Evo dataset
The Sudenpesänkangas dataset contains 798 non-overlapping 512x512 pixel image patches, extracted from 71 scenes.
|Model|Patch AP50|Patch AP|Patch AP-groundwood|Patch AP-uprightwood|Scene AP50|Scene AP|Scene AP-groundwood|Scene AP-uprightwood|
|----|-----------|--------|-------------------|--------------------|-----------|--------|-------------------|--------------------|
|mask_rcnn_R_50_FPN_3x|0.486|0.237|0.175|0.299|0.474|0.221|0.152|0.290|
|mask_rcnn_R_101_FPN_3x|**0.519**|**0.252**|**0.183**|0.321|**0.511**|**0.236**|**0.160**|**0.311**|
|mask_rcnn_X_101_32x8d_FPN_3x|0.502|0.245|0.182|0.307|0.494|0.232|0.159|0.305|
|cascade_mask_rcnn_R_50_FPN_3x|0.497|0.248|0.172|**0.323**|0.473|0.225|0.148|0.302|
## How to use
### Running for small patches
```python
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.data import build_detection_test_loader
import cv2
cfg = get_cfg()
cfg.merge_from_file('<path_to_model_config>')
cfg.OUTPUT_DIR = '<path_to_output>'
cfg.MODEL.WEIGHTS = '<path_to_weights>'
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # score threshold for detections
predictor = DefaultPredictor(cfg)
img = cv2.imread('<path_to_image_patch>')
outputs = predictor(img)
```
### Running for larger scenes
Running on larger scenes requires the following steps:
1. Tiling the scenes into smaller image patches, optionally with overlap
2. Running the model on these smaller patches
3. Gathering the predictions into a single GIS data file
4. Optionally post-processing the results
The [drone_detector](https://jaeeolma.github.io/drone_detector) package has helpers for this:
```python
from drone_detector.engines.detectron2.predict import predict_instance_masks
predict_instance_masks(
path_to_model_config='<path_to_model_config>', # model config file
path_to_image='<path_to_image>', # which image to process
outfile='<name_for_predictions>.geojson', # where to save the results
processing_dir='temp', # directory for temporary files, deleted afterwards. Default: temp
tile_size=512, # image patch size in pixels, square patches. Default: 400
tile_overlap=256, # overlap between tiles. Default: 100
    smooth_preds=False, # not yet implemented; intended to run dilation+erosion to smooth polygons. Default: False
coco_set='<path_to_coco>', # the coco set the model was trained on to infer the class names. None defaults to dummy categories. Default: None
postproc_results=True # whether to discard masks in the edge regions of patches Default: False
)
```
Also, after installing the package, `predict_instance_masks_detectron2` can be used as a CLI command with identical syntax.
For the provided models, when the classes default to the dummy categories, **1** is *standing deadwood* and **2** is *fallen deadwood*.
| 1b4f7bf5adc240fa863e67c24089ed62 |
DOOGLAK/Tagged_Uni_250v3_NER_Model_3Epochs_AUGMENTED | DOOGLAK | bert | 13 | 6 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['tagged_uni250v3_wikigold_split'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,565 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Tagged_Uni_250v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni250v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3093
- Precision: 0.5831
- Recall: 0.4849
- F1: 0.5295
- Accuracy: 0.8989
## Model description
More information needed
## Intended uses & limitations
More information needed
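The card does not include an inference example; a minimal token-classification sketch (the sentence is arbitrary and the aggregation strategy is just one reasonable choice) might look like:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Tagged_Uni_250v3_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Barack Obama visited the Microsoft campus in Redmond last week."))
```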
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 78 | 0.3468 | 0.3486 | 0.2362 | 0.2816 | 0.8670 |
| No log | 2.0 | 156 | 0.3071 | 0.5484 | 0.4516 | 0.4953 | 0.8943 |
| No log | 3.0 | 234 | 0.3093 | 0.5831 | 0.4849 | 0.5295 | 0.8989 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
| 88daa8f319f9f0ddb876cfa5bf0b93ca |
StonyBrookNLP/t5-large-drop | StonyBrookNLP | t5 | 8 | 3 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['question-answering, multi-step-reasoning, multi-hop-reasoning'] | false | true | true | 2,617 | false |
# What's this?
This is one of the models reported in the paper ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496).
The paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like BART and T5 and numerate LMs like NT5, PReasM and POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.
We release the following models:
- **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`
The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`.
The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.
The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
# How to use it?
Please check out the details in our [GitHub repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac
model_name = "StonyBrookNLP/t5-large-drop"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
"answer_me: Who scored the first touchdown of the game?" +
"context: ... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
# Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
input_texts, return_tensors="pt",
truncation=True, max_length=800,
add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
``` | c1f5b98d3d6f0cc2a0d691c267d083af |
silviacamplani/distilbert-base-uncased-finetuned-dapt-ner-ai_data_3labels | silviacamplani | distilbert | 18 | 2 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,794 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# silviacamplani/distilbert-base-uncased-finetuned-dapt-ner-ai_data_3labels
This model is a fine-tuned version of [silviacamplani/distilbert-base-uncased-finetuned-ai_data](https://huggingface.co/silviacamplani/distilbert-base-uncased-finetuned-ai_data) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6153
- Validation Loss: 0.5610
- Train Precision: 0.0
- Train Recall: 0.0
- Train F1: 0.0
- Train Accuracy: 0.8840
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 1.6831 | 1.5093 | 0.0031 | 0.0020 | 0.0024 | 0.8573 | 0 |
| 1.3863 | 1.2247 | 0.0 | 0.0 | 0.0 | 0.8839 | 1 |
| 1.1078 | 0.9646 | 0.0 | 0.0 | 0.0 | 0.8840 | 2 |
| 0.8884 | 0.7566 | 0.0 | 0.0 | 0.0 | 0.8840 | 3 |
| 0.7235 | 0.6484 | 0.0 | 0.0 | 0.0 | 0.8840 | 4 |
| 0.6686 | 0.6034 | 0.0 | 0.0 | 0.0 | 0.8840 | 5 |
| 0.6541 | 0.5804 | 0.0 | 0.0 | 0.0 | 0.8840 | 6 |
| 0.6347 | 0.5687 | 0.0 | 0.0 | 0.0 | 0.8840 | 7 |
| 0.6147 | 0.5630 | 0.0 | 0.0 | 0.0 | 0.8840 | 8 |
| 0.6153 | 0.5610 | 0.0 | 0.0 | 0.0 | 0.8840 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| 0ecaf6c3235bc4d51e5fb3d6e223a21a |
jonatasgrosman/exp_w2v2t_en_vp-fr_s118 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'en'] | false | true | true | 475 | false | # exp_w2v2t_en_vp-fr_s118
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
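A minimal transcription sketch with the HuggingSound wrapper mentioned above (the audio paths are placeholders; loading the checkpoint directly with 🤗 Transformers `Wav2Vec2ForCTC` should also work):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_vp-fr_s118")

# Paths to 16 kHz speech recordings (placeholders).
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```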
| 41596bb1f08689eab92a082479710ad7 |
Jinchen/bert-base-cased-wikitext2-test-mlm | Jinchen | bert | 7 | 1 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,189 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2-test-mlm
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.125
## Model description
More information needed
## Intended uses & limitations
More information needed
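Since this is a masked-language-model checkpoint, it can presumably be exercised with the fill-mask pipeline even though it was trained on IPU hardware; a minimal sketch (the example sentence is arbitrary):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Jinchen/bert-base-cased-wikitext2-test-mlm")

for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 4))
```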
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- total_eval_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.20.0
- Pytorch 1.10.0+rocm4.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 010c5c994223d72d0bb8b22def0cacb8 |
speechbrain/asr-wav2vec2-ctc-aishell | speechbrain | null | 7 | 15 | speechbrain | 4 | null | true | false | false | apache-2.0 | ['zh'] | ['aishell'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['CTC', 'wav2vec2', 'pytorch', 'speechbrain'] | false | true | true | 3,980 | false |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Wav2vec2-CTC for AISHELL (Mandarin Chinese)
This repository provides all the necessary tools to perform automatic speech
recognition from an end-to-end system pretrained on AISHELL (Mandarin Chinese)
within SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io).
The performance of the model is the following:
| Release | Dev CER | Test CER | GPUs | Full Results |
|:-------------:|:--------------:|:--------------:|:--------:|:--------:|
| 20-09-22 | 4.48 | 5.02 | 1xRTX8000 48GB | [Google Drive](https://drive.google.com/drive/folders/1GTB5IzQPl57j-0I1IpmvKg722Ti4ahLz?usp=sharing)|
## Pipeline description
This ASR system is composed of 2 different but linked blocks:
- Tokenizer (from HuggingFace) that transforms words into characters, trained on the training transcriptions of AISHELL-1.
- Acoustic model made of a wav2vec2 encoder and fully-connected layers
To Train this system from scratch, [see our SpeechBrain recipe](https://github.com/speechbrain/speechbrain/tree/develop/recipes/AISHELL-1/ASR/CTC).
The system is trained with recordings sampled at 16kHz (single channel).
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Transcribing your own audio files (in Mandarin Chinese)
```python
from speechbrain.pretrained.interfaces import foreign_class
asr_model = foreign_class(source="speechbrain/asr-wav2vec2-ctc-aishell", pymodule_file="custom_interface.py", classname="CustomEncoderDecoderASR")
asr_model.transcribe_file("speechbrain/asr-wav2vec2-ctc-aishell/example.wav")
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when loading the model (here, in the `foreign_class` call above).
## Parallel Inference on a Batch
Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
### Training
The model was trained with SpeechBrain (Commit hash: '480dde87').
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/AISHELL-1/ASR/CTC/
python train_with_wav2vec.py hparams/train_with_wav2vec.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1GTB5IzQPl57j-0I1IpmvKg722Ti4ahLz?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` | 97fed5c5b627162caea4f0b897adf42c |
bookbot/wav2vec2-xls-r-adult-child-id-cls | bookbot | wav2vec2 | 10 | 3 | transformers | 0 | audio-classification | true | false | false | apache-2.0 | ['id'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio-classification', 'generated_from_trainer'] | true | true | true | 2,561 | false |
# Wav2Vec2 XLS-R Adult/Child Indonesian Speech Classifier
Wav2Vec2 XLS-R Adult/Child Indonesian Speech Classifier is an audio classification model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on a private adult/child Indonesian speech classification dataset.
This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100, provided by Kaggle. Training metrics were logged via Tensorboard.
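No inference snippet is included in the card; a minimal sketch with the 🤗 audio-classification pipeline is shown below (the file path is a placeholder and the printed labels are illustrative; input should be 16 kHz mono speech):
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="bookbot/wav2vec2-xls-r-adult-child-id-cls",
)

# Expects a path to (or a NumPy array of) 16 kHz mono speech.
predictions = classifier("sample_utterance.wav")
print(predictions)  # e.g. [{'label': 'child', 'score': ...}, {'label': 'adult', 'score': ...}]
```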
## Model
| Model | #params | Arch. | Training/Validation data (audio) |
| ----------------------------------- | ------- | ----- | ---------------------------------------------------- |
| `wav2vec2-xls-r-adult-child-id-cls` | 300M | XLS-R | Adult/Child Indonesian Speech Classification Dataset |
## Evaluation Results
The model achieves the following results on evaluation:
| Dataset | Loss | Accuracy | F1 |
| -------------------------------------------- | ------ | -------- | ------ |
| Adult/Child Indonesian Speech Classification | 0.1970 | 93.38% | 0.9307 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 3e-05
- `train_batch_size`: 8
- `eval_batch_size`: 8
- `seed`: 42
- `gradient_accumulation_steps`: 4
- `total_train_batch_size`: 32
- `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_ratio`: 0.1
- `num_epochs`: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
| :-----------: | :---: | :--: | :-------------: | :------: | :----: |
| 0.336 | 1.0 | 305 | 0.3146 | 0.8845 | 0.8698 |
| 0.2345 | 2.0 | 610 | 0.2140 | 0.9251 | 0.9202 |
| 0.3215 | 3.0 | 915 | 0.2038 | 0.9315 | 0.9286 |
| 0.2059 | 4.0 | 1220 | 0.1970 | 0.9338 | 0.9307 |
## Disclaimer
Do consider the biases from the pre-training datasets that may be carried over into the results of this model.
## Authors
Wav2Vec2 XLS-R Adult/Child Indonesian Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development are done on Kaggle.
## Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.2.0
- Tokenizers 0.12.1
| 3484ad7c2f7aee1a09cd0a72a1b94641 |
jcmc/wav2vec2-xls-r-1b-ir | jcmc | wav2vec2 | 34 | 8 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['ga-IE'] | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer'] | true | true | true | 1,670 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-ir
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - GA-IE dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6569
- Wer: 0.8623
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1851 | 15.62 | 500 | 1.8067 | 0.9256 |
| 2.1586 | 31.25 | 1000 | 1.7883 | 0.9180 |
| 2.0302 | 46.86 | 1500 | 1.7571 | 0.9192 |
| 1.8706 | 62.49 | 2000 | 1.6314 | 0.8858 |
| 1.7008 | 78.12 | 2500 | 1.6131 | 0.8679 |
| 1.4982 | 93.74 | 3000 | 1.6540 | 0.8650 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| b63bfdc95cb547ce41efa94099f9402a |
lgris/wav2vec2-large-xlsr-coraa-portuguese-cv8 | lgris | wav2vec2 | 14 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['mozilla-foundation/common_voice_8_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,312 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-portuguese-cv8
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- Wer: 0.1365
## Model description
More information needed
## Intended uses & limitations
More information needed
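The auto-generated card has no inference example; a minimal sketch using the ASR pipeline (the audio path is a placeholder; input should be 16 kHz mono speech in Portuguese):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="lgris/wav2vec2-large-xlsr-coraa-portuguese-cv8",
)

print(asr("audio_em_portugues.wav")["text"])
```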
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5614 | 0.1 | 100 | 0.2542 | 0.1986 |
| 0.5181 | 0.19 | 200 | 0.2740 | 0.2146 |
| 0.5056 | 0.29 | 300 | 0.2472 | 0.2068 |
| 0.4747 | 0.39 | 400 | 0.2464 | 0.2166 |
| 0.4627 | 0.48 | 500 | 0.2277 | 0.2041 |
| 0.4403 | 0.58 | 600 | 0.2245 | 0.1977 |
| 0.4413 | 0.68 | 700 | 0.2156 | 0.1968 |
| 0.437 | 0.77 | 800 | 0.2102 | 0.1919 |
| 0.4305 | 0.87 | 900 | 0.2130 | 0.1864 |
| 0.4324 | 0.97 | 1000 | 0.2144 | 0.1902 |
| 0.4217 | 1.06 | 1100 | 0.2230 | 0.1891 |
| 0.3823 | 1.16 | 1200 | 0.2033 | 0.1774 |
| 0.3641 | 1.25 | 1300 | 0.2143 | 0.1830 |
| 0.3707 | 1.35 | 1400 | 0.2034 | 0.1793 |
| 0.3767 | 1.45 | 1500 | 0.2029 | 0.1823 |
| 0.3483 | 1.54 | 1600 | 0.1999 | 0.1740 |
| 0.3577 | 1.64 | 1700 | 0.1928 | 0.1728 |
| 0.3667 | 1.74 | 1800 | 0.1898 | 0.1726 |
| 0.3283 | 1.83 | 1900 | 0.1920 | 0.1688 |
| 0.3571 | 1.93 | 2000 | 0.1904 | 0.1649 |
| 0.3467 | 2.03 | 2100 | 0.1994 | 0.1648 |
| 0.3145 | 2.12 | 2200 | 0.1940 | 0.1682 |
| 0.3186 | 2.22 | 2300 | 0.1879 | 0.1571 |
| 0.3058 | 2.32 | 2400 | 0.1975 | 0.1678 |
| 0.3096 | 2.41 | 2500 | 0.1877 | 0.1589 |
| 0.2964 | 2.51 | 2600 | 0.1862 | 0.1568 |
| 0.3068 | 2.61 | 2700 | 0.1809 | 0.1588 |
| 0.3036 | 2.7 | 2800 | 0.1769 | 0.1573 |
| 0.3084 | 2.8 | 2900 | 0.1836 | 0.1524 |
| 0.3109 | 2.9 | 3000 | 0.1807 | 0.1519 |
| 0.2969 | 2.99 | 3100 | 0.1851 | 0.1516 |
| 0.2698 | 3.09 | 3200 | 0.1737 | 0.1490 |
| 0.2703 | 3.19 | 3300 | 0.1759 | 0.1457 |
| 0.2759 | 3.28 | 3400 | 0.1778 | 0.1471 |
| 0.2728 | 3.38 | 3500 | 0.1717 | 0.1462 |
| 0.2398 | 3.47 | 3600 | 0.1767 | 0.1451 |
| 0.256 | 3.57 | 3700 | 0.1742 | 0.1410 |
| 0.2712 | 3.67 | 3800 | 0.1674 | 0.1414 |
| 0.2648 | 3.76 | 3900 | 0.1717 | 0.1423 |
| 0.2576 | 3.86 | 4000 | 0.1672 | 0.1403 |
| 0.2504 | 3.96 | 4100 | 0.1683 | 0.1381 |
| 0.2406 | 4.05 | 4200 | 0.1685 | 0.1399 |
| 0.2403 | 4.15 | 4300 | 0.1656 | 0.1381 |
| 0.2233 | 4.25 | 4400 | 0.1687 | 0.1371 |
| 0.2546 | 4.34 | 4500 | 0.1642 | 0.1377 |
| 0.2431 | 4.44 | 4600 | 0.1655 | 0.1372 |
| 0.2337 | 4.54 | 4700 | 0.1625 | 0.1370 |
| 0.2607 | 4.63 | 4800 | 0.1618 | 0.1363 |
| 0.2292 | 4.73 | 4900 | 0.1622 | 0.1366 |
| 0.2232 | 4.83 | 5000 | 0.1626 | 0.1365 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| 643f79b5d866f23ffebb9841bd6364dc |
naclbit/trinart_characters_19.2m_stable_diffusion_v1 | naclbit | null | 19 | 214 | diffusers | 142 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 4 | 1 | 2 | 1 | 8 | 6 | 2 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image'] | false | true | true | 5,537 | false |
## Note
A newer version of this model has been released:
https://huggingface.co/naclbit/trinart_derrida_characters_v2_stable_diffusion
## Stable Diffusion TrinArt Characters model v1
trinart_characters_19.2m_stable_diffusion_v1 is a Stable Diffusion v1-based model trained on roughly 19.2M anime/manga-style images (pre-rolled augmented images included), plus final finetuning on about 50,000 images. This model seeks a sweet spot between artistic style versatility and anatomical quality within the given model spec of SD v1.
This is the same version 1 model that was released in the AI Novelist/TrinArt service from early September through Oct 14. We are currently experimenting with the new Derrida model on the TrinArt service for further improvement and anatomical stabilization. In the meantime, please enjoy this real-service-tested Characters v1!
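A minimal text-to-image sketch, assuming the repository's Diffusers-format weights load with `StableDiffusionPipeline` (the prompt, negative prompt and step count are illustrative, not the TrinArt service presets described below):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "naclbit/trinart_characters_19.2m_stable_diffusion_v1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "1girl, long hair, school uniform, cherry blossoms, highly detailed",
    negative_prompt="bad hands, fewer digits, flat color, flat shading",
    num_inference_steps=30,
).images[0]
image.save("trinart_characters_sample.png")
```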
#### Hardware
- 8xNVIDIA A100 40GB
#### Custom autoencoder
*Note: There was a wrong checkpoint uploaded before 5 Nov 2022. The file has been replaced with the latest checkpoint.*
We also provide a separate checkpoint for the custom KL autoencoder. As suggested by the Latent Diffusion paper, we found that training the autoencoder and the latent diffusion model separately improves the result. Since the official Stable Diffusion script does not support loading a different VAE, in order to use it in your script you'll need to override the `state_dict` of `first_stage_model`.
The popular WebUI has a script to load separate `first_stage_model` parameters.
#### Safety
The dataset is filtered to exclude NSFW or unsafe contents. After our extensive experimentation and testing with 10M+ user-generated images, we decided that this model is safe enough and less likely to spit out questionable (nudity/overly sexual/realistic gore) content than the stock SD v1.4 model or other anime/manga models. However, if a user tortures this model enough, it may still be possible to force it to generate obnoxious material. We do not consider this model to be 100% risk-free.
*This statement does not necessarily restrict third parties from training a derivative of this model that includes NSFW content.
#### Examples
Below images are directly generated by the native TrinArt service with its idiosyncratic upscaler, parser and processes. Your mileage may vary.

(assorted random examples)


wide shot, high quality, htgngg animal arm rest brown hair merry chair cup dress flower from above jacket on shoulders long hair sitting solo sugar bowl fantasy adventurer's inn table teacup teapot landscape miniature (2022 Artstyle preset)

highres wide shot bangs bare shoulders water bird cage terrarium detached sleeves frilled frilled legwear frills hair ornament hair ribbon hood long hair medium breasts ribbon thighhighs (2019 Artstyle preset)

1girl standing holding sword hizzrd arm up bangs bare shoulders boots bow breasts bright pupils choker detached sleeves diamond (shape) floating floating hair footwear bow from side full body gloves leg up long hair looking at viewer open mouth outstretched arm solo streaked hair swept bangs two tone hair very long hair::4 angry::1 (2022 Artstyle preset)

1boy male focus standing hizzrd holding sword arm up bow bright pupils cape coat diamond (shape) floating floating hair fold-over boots footwear bow from side full body gloves leg up long sleeves looking at viewer open mouth outstretched arm open coat open clothes solo swept two tone hair thigh boots::4 angry::1.25 (2022 Artstyle preset)

cathedral 1girl schoolgirl momoko school uniform cats particles beautiful shooting stars detailed cathedral jacket open mouth glasses cats (2022 Artstyle preset)

highres 2girls yuri wide shot bangs bare shoulders water bird cage terrarium detached sleeves frilled frilled legwear frills hair ornament hair ribbon hood long hair medium breasts ribbon thighhighs (More Details preset)

wide shot, best quality lapis erebcir highres 1boy bangs black gloves brown hair closed mouth gloves hair between eyes looking at viewer male focus flowers green eyes (More Details preset)
TrinArt 2022 Artstyle preset negative prompts: **retro style, 1980s, 1990s, 2000s, 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019**
TrinArt More Details preset negative prompts: **flat color, flat shading**
We recommend to add known sets of negative prompts in order to stabilize the anatomy such as: bad hands, fewer digits, etc.
#### Credits
- Sta, AI Novelist Dev (https://ai-novel.com/) @ Bit192, Inc. Twitter https://twitter.com/naclbbr (Japanese) https://twitter.com/naclbbre (English)
- Stable Diffusion - Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bjorn
#### License
CreativeML OpenRAIL-M | 4d2ce52f4b5fe5a036c86ba8405507a7 |
SBB/sbb_ner | SBB | null | 5 | 27 | transformers | 2 | token-classification | true | false | false | apache-2.0 | ['de'] | ['conll2003', 'germeval_14'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'token-classification', 'sequence-tagger-model'] | false | true | true | 1,560 | false | # About `sbb_ner`
This is a BERT model for named entity recognition (NER) in historical German.
It predicts the classes `PER`, `LOC` and `ORG`. The model is based on the 🤗
[`BERT base multilingual cased`](https://huggingface.co/bert-base-multilingual-cased) model.
We applied unsupervised pre-training on 2,333,647 pages of
unlabeled historical German text from the Berlin State Library
digital collections, and supervised pre-training on two datasets
with contemporary German text, [conll2003](https://huggingface.co/models?dataset=dataset:conll2003)
and [germeval_14](https://huggingface.co/models?dataset=dataset:germeval_14).
For further details, have a look at [sbb_ner](https://github.com/qurator-spk/sbb_ner) on GitHub.
# Results
In a 5-fold cross validation with different historical German NER corpora
(see our *KONVENS2019* [paper](https://corpora.linguistik.uni-erlangen.de/data/konvens/proceedings/papers/KONVENS2019_paper_4.pdf)),
the model obtained an F1-Score of **84.3**±1.1%.
In the *CLEF-HIPE-2020* Shared Task ([paper](http://ceur-ws.org/Vol-2696/paper_255.pdf)),
the model ranked 2nd of 13 systems for the German coarse NER task.
# Weights
We provide model weights for PyTorch.
| Model | Downloads
| ------------------------| ------------------------
| `bert-sbb-de-finetuned` | [`config.json`](https://huggingface.co/SBB/sbb_ner/blob/main/config.json) • [`pytorch_model_ep7.bin`](https://huggingface.co/SBB/sbb_ner/blob/main/pytorch_model_ep7.bin) • [`vocab.txt`](https://huggingface.co/SBB/sbb_ner/blob/main/vocab.txt) | d4e33a6ac76965619ad7bf083f2f07ce |
tiagoblima/punctuation-finetune-mec | tiagoblima | bert | 10 | 12 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,232 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# punctuation-finetune-mec
This model is a fine-tuned version of [tiagoblima/punctuation-taboa-bert](https://huggingface.co/tiagoblima/punctuation-taboa-bert) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 411 | 0.1356 | 0.9791 | 0.7083 | 0.8220 | 0.9553 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
| a70b08daae9006af833bc10d45f6be42 |
ryusangwon/xlm-roberta-base-fintuned-panx-de | ryusangwon | xlm-roberta | 19 | 1 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,317 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-fintuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1389
- F1: 0.8638
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 525 | 0.1658 | 0.8206 |
| No log | 2.0 | 1050 | 0.1413 | 0.8486 |
| No log | 3.0 | 1575 | 0.1389 | 0.8638 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 3c21608274adb766d8fff590dbeec194 |