| Column | Type | Values |
| --- | --- | --- |
| repo_id | string | lengths 4–110 |
| author | string | lengths 2–27 |
| model_type | string | lengths 2–29 |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string | lengths 2–37 |
| likes | int64 | 0–4.34k |
| pipeline | string | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–30 |
| languages | string | lengths 4–1.63k |
| datasets | string | lengths 2–2.58k |
| co2 | string | 29 values |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0–598k |
| hash | string | lengths 32 |
cafeai/cafe-instagram-sd-1-5-v6
cafeai
null
4
0
null
62
null
false
false
false
agpl-3.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,645
false
# Cafe Instagram Unofficial Test v2 This is a test model created to assess the Waifu Diffusion training code, and is not intended to be a full-featured or official release. This model has been trained from `runwayml/stable-diffusion-v1-5` for approximately 1.6 epochs on a total of 1.2M images from various Instagram accounts (primarily Japanese). As the model is undertrained, its performance is marginal; mixing it with other models is recommended for better results. Natural language descriptions (using BLIP), as well as [booru tags](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger), have been used to assist in captioning. Any Instagram hashtags were also included in the caption data. *Note: Training was done using various aspect ratios, with a base resolution of 768x768, as well as the penultimate CLIP layer. A CLIP skip of 2 and a resolution of 768x768 or higher are recommended for generations.* ![Example](https://huggingface.co/cafeai/cafe-instagram-sd-1-5-v6/resolve/main/example.jpg) Example: ``` waifu, instagram, cute girl, japaneseidol, idol, アイドル, 自撮り女子, photorealistic, photo, 可愛い, kawaii, cute, gravure, fashion, 1girl, solo, cleavage, cowboy shot Negative prompt: (((mutated hands and fingers))), ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed face))), ((ugly)), ((bad anatomy)), (((bad proportions))), (((extra limbs))), extra face, ((double head)), ((extra head)), (big breast), (((extra feet))), monster, (text), (logo), (blurry), text, english text, watermark, logo, (((anime))) ``` This model is released under the AGPL. You can use it for whatever you like; if you make changes, share them.
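A minimal inference sketch with 🤗 Diffusers, assuming the repository ships standard Stable Diffusion 1.5 weights in the Diffusers format; the prompt and settings below only follow the notes above and are purely illustrative:

```python
# Hypothetical usage sketch (not from the model card): load the checkpoint with diffusers
# and generate at the recommended 768x768 resolution with a CLIP skip of 2.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cafeai/cafe-instagram-sd-1-5-v6", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "instagram, cute girl, photorealistic, fashion, 1girl, solo",  # illustrative prompt
    negative_prompt="blurry, text, watermark, logo",
    height=768,
    width=768,
    clip_skip=2,  # penultimate CLIP layer; requires a recent diffusers release
).images[0]
image.save("example.png")
```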
4563c92c96812209b1bc54e465375264
JiachengLi/uctopic-base
JiachengLi
luke
10
128
transformers
0
null
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
16,354
false
# UCTopic This repository contains the code of the UCTopic model and an easy-to-use tool, UCTopicTool, used for <strong>Topic Mining</strong>, <strong>Unsupervised Aspect Extraction</strong> or <strong>Phrase Retrieval</strong>. Our ACL 2022 paper: [UCTopic: Unsupervised Contrastive Learning for Phrase Representations and Topic Mining](https://arxiv.org/abs/2202.13469). # Quick Links - [Overview](#overview) - [Pretrained Model](#pretrained-model) - [Getting Started](#getting-started) - [UCTopic Model](#uctopic-model) - [UCTopicTool](#uctopictool) - [Experiments in Paper](#experiments) - [Requirements](#requirements) - [Datasets](#datasets) - [Entity Clustering](#entity-clustering) - [Topic Mining](#topic-mining) - [Pretraining](#pretraining) - [Contact](#contact) - [Citation](#citation) # Overview We propose UCTopic, a novel unsupervised contrastive learning framework for context-aware phrase representations and topic mining. UCTopic is pretrained at a large scale to distinguish whether the contexts of two phrase mentions have the same semantics. The key to pretraining is positive pair construction from our phrase-oriented assumptions. However, we find that traditional in-batch negatives cause performance decay when finetuning on a dataset with a small number of topics. Hence, we propose cluster-assisted contrastive learning (CCL), which largely reduces noisy negatives by selecting negatives from clusters and further improves phrase representations for topics accordingly. # Pretrained Model Our released model: | Model | Note | |:-------------------------------|------| |[uctopic-base](https://drive.google.com/file/d/1XQzi4E9ctdI373CK5O-pXQyBvOONssp1/view?usp=sharing)| Pretrained UCTopic model based on [LUKE-BASE](https://arxiv.org/abs/2010.01057) | Unzip it to get the `uctopic-base` folder. # Getting Started We provide an easy-to-use phrase representation tool based on our UCTopic model. To use the tool, first install the uctopic package from PyPI ```bash pip install uctopic ``` or install it directly from our code ```bash python setup.py install ``` ## UCTopic Model After installing the package, you can load our model with just two lines of code ```python from uctopic import UCTopic model = UCTopic.from_pretrained('JiachengLi/uctopic-base') ``` The model will automatically download pre-trained parameters from [HuggingFace's models](https://huggingface.co/models). If you encounter any problem when loading the models directly through HuggingFace's API, you can also download the models manually from the above table and use `model = UCTopic.from_pretrained({PATH TO THE DOWNLOADED MODEL})`. To get pre-trained <strong>phrase representations</strong>, our model inputs are the same as [LUKE](https://huggingface.co/docs/transformers/model_doc/luke). Note: please input only <strong>ONE</strong> span at a time; otherwise, performance will decay according to our empirical results. ```python from uctopic import UCTopicTokenizer, UCTopic tokenizer = UCTopicTokenizer.from_pretrained('JiachengLi/uctopic-base') model = UCTopic.from_pretrained('JiachengLi/uctopic-base') text = "Beyoncé lives in Los Angeles." entity_spans = [(17, 28)] # character-based entity span corresponding to "Los Angeles" inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt") outputs, phrase_repr = model(**inputs) ``` `phrase_repr` is the phrase embedding (size `[768]`) of the phrase `Los Angeles`. `outputs` has the same format as the outputs from `LUKE`. 
## UCTopicTool We provide a tool `UCTopicTool` built on `UCTopic` for efficient phrase encoding, topic mining (or unsupervised aspect extraction) or phrase retrieval. ### Initialization `UCTopicTool` is initialized by giving the `model_name_or_path` and `device`. ```python from uctopic import UCTopicTool topic_tool = UCTopicTool('JiachengLi/uctopic-base', device='cuda:0') ``` ### Phrase Encoding Phrases are encoded by our method `UCTopicTool.encode` in batches, which is more efficient than `UCTopic`. ```python phrases = [["This place is so much bigger than others!", (0, 10)], ["It was totally packed and loud.", (15, 21)], ["Service was on the slower side.", (0, 7)], ["I ordered 2 mojitos: 1 lime and 1 mango.", (12, 19)], ["The ingredient weren't really fresh.", (4, 14)]] embeddings = topic_tool.encode(phrases) # len(embeddings) is equal to len(phrases) ``` **Note**: Each instance in `phrases` contains only one sentence and one span (character-level position) in format `[sentence, span]`. Arguments for `UCTopicTool.encode` are as follows, * **phrase** (List) - A list of `[sentence, span]` to be encoded. * **return_numpy** (bool, *optional*, defaults to `False`) - Return `numpy.array` or `torch.Tensor`. * **normalize_to_unit** (bool, *optional*, defaults to `True`) - Normalize all embeddings to unit vectors. * **keepdim** (bool, *optional*, defaults to `True`) - Keep dimension size `[instance_number, hidden_size]`. * **batch_size** (int, *optional*, defaults to `64`) - The size of mini-batch in the model. ### Topic Mining and Unsupervised Aspect Extraction The method `UCTopicTool.topic_mining` can mine topical phrases or conduct aspect extraction from sentences with or without spans. ```python sentences = ["This place is so much bigger than others!", "It was totally packed and loud.", "Service was on the slower side.", "I ordered 2 mojitos: 1 lime and 1 mango.", "The ingredient weren't really fresh."] spans = [[(0, 10)], # This place [(15, 21), (26, 30)], # packed; loud [(0, 7)], # Service [(12, 19), (21, 27), (32, 39)], # mojitos; 1 lime; 1 mango [(4, 14)]] # ingredient # len(sentences) is equal to len(spans) output_data, topic_phrase_dict = tool.topic_mining(sentences, spans, \ n_clusters=[15, 25]) # predict topic for new phrases phrases = [["The food here is amazing!", (4, 8)], ["Lovely ambiance with live music!", (21, 31)]] topics = tool.predict_topic(phrases) ``` **Note**: If `spans` is not given, `UCTopicTool` will extract noun phrases by [spaCy](https://spacy.io/). Arguments for `UCTopicTool.topic_mining` are as follows, Data arguments: * **sentences** (List) - A List of sentences for topic mining. * **spans** (List, *optional*, defaults to `None`) - A list of span list corresponding sentences, e.g., `[[(0, 9), (5, 7)], [(1, 2)]]` and `len(sentences)==len(spans)`. If None, automatically mine phrases from noun chunks. Clustering arguments: * **n_clusters** (int or List, *optional*, defaults to `2`) - The number of topics. When `n_clusters` is a list, `n_clusters[0]` and `n_clusters[1]` will be the minimum and maximum numbers to search, `n_clusters[2]` is the search step length (if not provided, default to 1). * **meric** (str, *optional*, defaults to `"cosine"`) - The metric to measure the distance between vectors. `"cosine"` or `"euclidean"`. * **batch_size** (int, *optional*, defaults to `64`) - The size of mini-batch for phrase encoding. * **max_iter** (int, *optional*, defaults to `300`) - The maximum iteration number of kmeans. 
CCL-finetune arguments: * **ccl_finetune** (bool, *optional*, defaults to `True`) - Whether to conduct CCL-finetuning in the paper. * **batch_size_finetune** (int, *optional*, defaults to `8`) - The size of mini-batch for finetuning. * **max_finetune_num** (int, *optional*, defaults to `100000`) - The maximum number of training instances for finetuning. * **finetune_step** (int, *optional*, defaults to `2000`) - The number of training steps for finetuning. * **contrastive_num** (int, *optional*, defaults to `5`) - The number of negatives in contrastive learning. * **positive_ratio** (float, *optional*, defaults to `0.1`) - The ratio of the most confident instances for finetuning. * **n_sampling** (int, *optional*, defaults to `10000`) - The number of sampled examples for cluster number confirmation and finetuning. Set to `-1` to use the whole dataset. * **n_workers** (int, *optional*, defaults to `8`) - The number of workers for preprocessing data. Returns for `UCTopicTool.topic_mining` are as follows, * **output_data** (List) - A list of sentences and corresponding phrases and topic numbers. Each element is `[sentence, [[start1, end1, topic1], [start2, end2, topic2]]]`. * **topic_phrase_dict** (Dict) - A dictionary of topics and the list of phrases under a topic. The phrases are sorted by their confidence scores. E.g., `{topic: [[phrase1, score1], [phrase2, score2]]}`. The method `UCTopicTool.predict_topic` predicts the topic ids for new phrases based on your training results from `UCTopicTool.topic_mining`. The inputs of `UCTopicTool.predict_topic` are same as `UCTopicTool.encode` and returns a list of topic ids (int). ### Phrase Similarities and Retrieval The method `UCTopicTool.similarity` compute the cosine similarities between two groups of phrases: ```python phrases_a = [["This place is so much bigger than others!", (0, 10)], ["It was totally packed and loud.", (15, 21)]] phrases_b = [["Service was on the slower side.", (0, 7)], ["I ordered 2 mojitos: 1 lime and 1 mango.", (12, 19)], ["The ingredient weren't really fresh.", (4, 14)]] similarities = tool.similarity(phrases_a, phrases_b) ``` Arguments for `UCTopicTool.similarity` are as follows, * **queries** (List) - A list of `[sentence, span]` as queries. * **keys** (List or `numpy.array`) - A list of `[sentence, span]` as keys or phrase representations (`numpy.array`) from `UCTopicTool.encode`. * **batch_size** (int, *optional*, defaults to `64`) - The size of mini-batch in the model. `UCTopicTool.similarity` returns a `numpy.array` contains the similarities between phrase pairs in two groups. The methods `UCTopicTool.build_index` and `UCTopicTool.search` are used for phrase retrieval: ```python phrases = [["This place is so much bigger than others!", (0, 10)], ["It was totally packed and loud.", (15, 21)], ["Service was on the slower side.", (0, 7)], ["I ordered 2 mojitos: 1 lime and 1 mango.", (12, 19)], ["The ingredient weren't really fresh.", (4, 14)]] # query multiple phrases query1 = [["The food here is amazing!", (4, 8)], ["Lovely ambiance with live music!", (21, 31)]] # query single phrases query2 = ["The food here is amazing!", (4, 8)] tool.build_index(phrases) results = tool.search(query1, top_k=3) # or results = tool.search(query2, top_k=3) ``` We also support [faiss](https://github.com/facebookresearch/faiss), an efficient similarity search library. 
Just install the package following the [instructions](https://github.com/facebookresearch/faiss/blob/main/INSTALL.md) here and `UCTopicTool` will automatically use `faiss` for efficient search. `UCTopicTool.search` returns the ranked top-k phrases for each query. ### Save and Load finetuned UCTopicTool The methods `UCTopicTool.save` and `UCTopicTool.load` are used to save and load all parameters of `UCTopicTool`. Save: ```python tool = UCTopicTool('JiachengLi/uctopic-base', 'cuda:0') # finetune UCTopic with CCL output_data, topic_phrase_dict = tool.topic_mining(sentences, spans, \ n_clusters=[15, 25]) tool.save(**your directory**) ``` Load: ```python tool = UCTopicTool('JiachengLi/uctopic-base', 'cuda:0') tool.load(**your directory**) ``` The loaded parameters will be used by all methods (encoding, topic mining, phrase similarities and retrieval) introduced above. # Experiments In this section, we re-implement the experiments in our paper. ## Requirements First, install PyTorch by following the instructions from [the official website](https://pytorch.org). To faithfully reproduce our results, please use the correct `1.9.0` version corresponding to your platforms/CUDA versions. Then run the following script to install the remaining dependencies, ```bash pip install -r requirements.txt ``` Download the `en_core_web_sm` model from spacy, ```bash python -m spacy download en_core_web_sm ``` ## Datasets The downstream datasets used in our experiments can be downloaded from [here](https://drive.google.com/file/d/1dVIp9li1Wdh0JgU8slsWm0ObcitbQtSL/view?usp=sharing). ## Entity Clustering The config file of entity clustering is `clustering/consts.py` and most arguments are self-explanatory. Please set up `--gpu` and `--data_path` before running. The clustering scores will be printed. Clustering with our pre-trained phrase embeddings: ```bash python clustering.py --gpu 0 ``` Clustering with our pre-trained phrase embeddings and the Cluster-Assisted Contrastive Learning (CCL) proposed in our paper: ```bash python clustering_ccl_finetune.py --gpu 0 ``` ## Topic Mining The config file of topic mining is `topic_modeling/consts.py`. **Key Argument Table** | Arguments | Description | |:-----------------|:-----------:| | --num_classes |**Min** and **Max** number of classes, e.g., `[5, 15]`. Our model will find the class number by [silhouette_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html).| | --sample_num_cluster |Number of sampled phrases to confirm class number.| | --sample_num_finetune|Number of sampled phrases for CCL finetuning.| | --contrastive_num|Number of negative classes for CCL finetuning.| | --finetune_step | CCL finetuning steps (maximum global steps for finetuning).| **Tips**: Please tune `--batch_size` or `--contrastive_num` for suitable GPU memory usage. Topic mining with our pre-trained phrase embeddings and the Cluster-Assisted Contrastive Learning (CCL) proposed in our paper: ```bash python find_topic.py --gpu 0 ``` **Outputs** We output three files under `topic_results`: | File Name | Description | |:-----------------|:-----------:| | `merged_phraes_pred_prob.pickle` |A dictionary of phrases with their topic number and prediction probability. The topic of a phrase is merged from all phrase mentions. `{phrase: [topic_id, probability]}`, e.g., {'fair prices': [0, 0.34889686]}| | `phrase_instances_pred.json`| A list of all mined phrase mentions. 
Each element is `[[doc_id, start, end, phrase_mention], topic_id]`.| | `topics_phrases.json`|A dictionary of topics and corresponding phrases sorted by probability. `{'topic_id': [[phrase1, prob1], [phrase2, prob2]]}`| ## Pretraining **Data** For unsupervised pretraining of UCTopic, we use articles and spans with links from English Wikipedia and Wikidata. Our processed dataset can be downloaded from [here](https://drive.google.com/file/d/1wflsmhPI9J0ZA6aVRl2mQjHIE6JIvzAv/view?usp=sharing). **Training scripts** We provide example training scripts and our default training parameters for unsupervised training of UCTopic in `run_example.sh`. ```bash bash run_example.sh ``` Argument descriptions can be found in `pretrain.py`. All other arguments are standard Hugging Face `transformers` training arguments. **Convert models** Our pretrained checkpoints are slightly different from the checkpoint `uctopic-base`. Please refer to `convert_uctopic_parameters.py` to convert them. # Contact If you have any questions related to the code or the paper, feel free to email Jiacheng (`[email protected]`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to describe the problem in detail so we can help you better and more quickly! # Citation Please cite our paper if you use UCTopic in your work: ```bibtex @inproceedings{Li2022UCTopicUC, title = "{UCT}opic: Unsupervised Contrastive Learning for Phrase Representations and Topic Mining", author = "Li, Jiacheng and Shang, Jingbo and McAuley, Julian", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.426", doi = "10.18653/v1/2022.acl-long.426", pages = "6159--6169" } ```
31a0b0f6a1d0e82a610d66a2e89765ca
orhanxakarsu/turkishPoe-generation-1
orhanxakarsu
gpt2
9
2
transformers
0
text-generation
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,509
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # orhanxakarsu/turkishPoe-generation-1 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 6.7319 - Validation Loss: 5.8020 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 12731, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.003} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 6.7319 | 5.8020 | 0 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.4 - Datasets 2.1.0 - Tokenizers 0.12.1
3913c6fc936a4cc0e5b18c5fd4ef8923
jonatasgrosman/exp_w2v2t_zh-cn_wavlm_s368
jonatasgrosman
wavlm
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['zh-CN']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'zh-CN']
false
true
true
445
false
# exp_w2v2t_zh-cn_wavlm_s368 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
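A minimal transcription sketch with the `transformers` ASR pipeline (not part of the original card; the audio path is a placeholder, and the input should be sampled at 16 kHz as noted above):

```python
# Sketch only: transcribe a 16 kHz audio file with the fine-tuned CTC model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="jonatasgrosman/exp_w2v2t_zh-cn_wavlm_s368")
print(asr("sample_16khz.wav"))  # placeholder path; returns {"text": "..."}
```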
d230f167fb11f4438cfacd741ba47580
Tanhim/translation-En2De
Tanhim
marian
15
14
transformers
3
translation
true
false
false
gpl
['de']
['wmt19']
null
1
1
0
0
0
0
0
['translation']
false
true
true
624
false
<h2> English to German Translation </h2> Model Name: Tanhim/translation-En2De <br /> language: German or Deutsch <br /> thumbnail: https://huggingface.co/Tanhim/translation-En2De <br /> ### How to use You can use this model directly with a pipeline for machine translation. Since the generation relies on some randomness, I set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> text_En2De= pipeline('translation', model='Tanhim/translation-En2De', tokenizer='Tanhim/translation-En2De') >>> set_seed(42) >>> text_En2De("My name is Karl and I live in Aachen") ``` ### beta version
09c78446ba2455bbadeda97b724e0aae
mrm8488/flan-t5-xl-finetuned-gsm8k
mrm8488
t5
13
8
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['gsm8k']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,364
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-xl-finetuned-gsm8k This model is a fine-tuned version of [google/flan-t5-xl](https://huggingface.co/google/flan-t5-xl) on the gsm8k dataset. It achieves the following results on the evaluation set: - Loss: 0.2853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2845 | 1.0 | 1868 | 0.2778 | | 0.2204 | 2.0 | 3736 | 0.2718 | | 0.1803 | 3.0 | 5604 | 0.2762 | | 0.1578 | 4.0 | 7472 | 0.2853 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
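For illustration, a minimal inference sketch with the `text2text-generation` pipeline (not from the original card; the prompt format is an assumption, since the card does not specify one, and the checkpoint is large):

```python
# Sketch only: query the GSM8K-finetuned checkpoint on a math word problem.
from transformers import pipeline

solver = pipeline("text2text-generation", model="mrm8488/flan-t5-xl-finetuned-gsm8k")
question = ("Natalia sold clips to 48 of her friends in April, and then she sold "
            "half as many clips in May. How many clips did she sell altogether?")
print(solver(question, max_new_tokens=128)[0]["generated_text"])
```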
e13a4d284b4e2ac62ee601dc35a91b1f
taqwa92/whisper-small-ArabicT11
taqwa92
whisper
16
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ar']
['taqwa92/tm_data']
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,288
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Arabic- Taqwa This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the tm_data dataset. It achieves the following results on the evaluation set: - Loss: 0.5306 - Wer: 46.4256 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2375 | 4.85 | 500 | 0.5306 | 46.4256 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
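A minimal inference sketch with the ASR pipeline (not part of the original card; the audio file name is a placeholder and the input should be 16 kHz speech):

```python
# Sketch only: transcribe Arabic speech with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="taqwa92/whisper-small-ArabicT11")
print(asr("arabic_sample.wav")["text"])  # placeholder audio path
```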
0b49e8bc46ed974aeecaae8af778b115
gokuls/distilbert_add_GLUE_Experiment_logit_kd_wnli_384
gokuls
distilbert
17
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,814
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_logit_kd_wnli_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3434 - Accuracy: 0.5634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.368 | 1.0 | 3 | 0.3481 | 0.4366 | | 0.3551 | 2.0 | 6 | 0.3499 | 0.4366 | | 0.3472 | 3.0 | 9 | 0.3441 | 0.5634 | | 0.3518 | 4.0 | 12 | 0.3434 | 0.5634 | | 0.3492 | 5.0 | 15 | 0.3494 | 0.4366 | | 0.3495 | 6.0 | 18 | 0.3481 | 0.4366 | | 0.3495 | 7.0 | 21 | 0.3440 | 0.5634 | | 0.3463 | 8.0 | 24 | 0.3437 | 0.5634 | | 0.349 | 9.0 | 27 | 0.3444 | 0.5634 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
fe52272d6697cdf5bfb63fbdef80cbfd
huxxx657/roberta-base-finetuned-squad
huxxx657
roberta
23
6
transformers
0
question-answering
true
false
false
mit
null
['squad_v2']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,147
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-squad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 0.8152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.8557 | 1.0 | 8239 | 0.8152 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
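For reference, a minimal extractive question-answering sketch (not from the original card; the question and context are illustrative):

```python
# Sketch only: run extractive QA with the SQuAD v2-finetuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="huxxx657/roberta-base-finetuned-squad")
result = qa(question="Which dataset was the model fine-tuned on?",
            context="This model is a fine-tuned version of roberta-base on the squad_v2 dataset.")
print(result["answer"], result["score"])
```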
904904c5c4249a286115495afd83e435
KoichiYasuoka/deberta-large-japanese-aozora
KoichiYasuoka
deberta-v2
8
6
transformers
5
fill-mask
true
false
false
cc-by-sa-4.0
['ja']
null
null
0
0
0
0
0
0
0
['japanese', 'masked-lm']
false
true
true
846
false
# deberta-large-japanese-aozora ## Model Description This is a DeBERTa(V2) model pre-trained on 青空文庫 texts. NVIDIA A100-SXM4-40GB took 127 hours 8 minutes for training. You can fine-tune `deberta-large-japanese-aozora` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-luw-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-aozora-ud-head), and so on. ## How to Use ```py from transformers import AutoTokenizer,AutoModelForMaskedLM tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora") model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-large-japanese-aozora") ``` ## Reference 安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
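A fill-mask usage sketch extending the loading snippet above (not part of the original card; the example sentence is illustrative and `[MASK]` is assumed to be the tokenizer's mask token):

```python
# Sketch only: predict a masked token with the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask",
                model="KoichiYasuoka/deberta-large-japanese-aozora",
                tokenizer="KoichiYasuoka/deberta-large-japanese-aozora")
for candidate in fill("日本の首都は[MASK]である。")[:5]:
    print(candidate["token_str"], candidate["score"])
```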
6f18c7da462d3b28dd66dcb87b2f5a3b
spacy/ca_core_news_md
spacy
null
28
5
spacy
0
token-classification
false
false
false
gpl-3.0
['ca']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
16,957
false
### Details: https://spacy.io/models/ca#ca_core_news_md Catalan pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `ca_core_news_md` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 500000 keys, 20000 unique vectors (300 dimensions) | | **Sources** | [UD Catalan AnCora v2.8](https://github.com/UniversalDependencies/UD_Catalan-AnCora) (Martínez Alonso, Héctor; Pascual, Elena; Zeman, Daniel)<br />[UD Catalan AnCora v2.8 + NER v3.2.8](https://github.com/TeMU-BSC/spacy/releases/tag/3.2.8) (Carlos Rodríguez-Penagos and Carme Armentano-Oller)<br />[Catalan Lemmatizer](https://github.com/explosion/spacy-lookups-data) (Text Mining Unit, Barcelona Supercomputing Center)<br />[Catalan Word Embeddings in FastText (Version 1.0)](http://doi.org/10.5281/zenodo.4522041) (Gutiérrez-Fandiño, Asier, Armengol-Estapé, Jordi, Gonzalez-Agirre, Aitor, Carrino, Casimiro Pio, de Gibert, Ona, & Villegas, Marta) | | **License** | `GNU GPL 3.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (317 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumForm=Digit\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Comm`, `POS=AUX\|VerbForm=Inf`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Peri`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `POS=SCONJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, 
`Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=SYM`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|Polarity=Neg`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Ind`, `POS=PUNCT`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Ind`, `POS=AUX`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=VERB`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Semi`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, 
`Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Dash`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=SPACE`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Colo`, `Gender=Masc\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Quot`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `POS=VERB`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `POS=X`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Dem`, `POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Art`, 
`Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `NumForm=Digit\|NumType=Ord\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=PRON\|PronType=Int`, `Foreign=Yes\|Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Foreign=Yes\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Comm`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Comm`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, 
`Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `AdvType=Tim\|Degree=Cmp\|POS=ADV`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Pre\|PronType=Prs`, `POS=DET\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `POS=INTJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=SCONJ`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=SYM`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=ADV`, `POS=PUNCT\|PunctSide=Fin\|Punta d'aignctType=Brck`, `Degree=Cmp\|POS=ADJ`, `AdvType=Tim\|POS=SYM`, `Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.93 | | `TOKEN_P` | 99.78 | | `TOKEN_R` | 99.79 | | `TOKEN_F` | 99.79 | | `POS_ACC` | 98.42 | | `MORPH_ACC` | 98.05 | | `MORPH_MICRO_P` | 99.45 | | `MORPH_MICRO_R` | 98.93 | | `MORPH_MICRO_F` | 99.19 | | `SENTS_P` | 99.18 | | `SENTS_R` | 99.18 | | `SENTS_F` | 99.18 | | `DEP_UAS` | 91.88 | | `DEP_LAS` | 88.92 | | `TAG_ACC` | 98.42 | | `LEMMA_ACC` | 98.02 | | `ENTS_P` | 84.34 | | `ENTS_R` | 83.63 | | `ENTS_F` | 83.98 |
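A minimal usage sketch, assuming the package has been installed (e.g. via `python -m spacy download ca_core_news_md`); the example sentence is illustrative:

```python
# Sketch only: load the Catalan pipeline and inspect tags, lemmas, dependencies and entities.
import spacy

nlp = spacy.load("ca_core_news_md")
doc = nlp("Barcelona és la capital de Catalunya.")
for token in doc:
    print(token.text, token.pos_, token.lemma_, token.dep_)
print([(ent.text, ent.label_) for ent in doc.ents])
```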
5a2b86dc852a7fc79ebe80d5f2205063
Graphcore/gpt2-medium-wikitext-103
Graphcore
gpt2
15
3
transformers
1
text-generation
true
false
false
apache-2.0
null
['wikitext']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,854
false
# Graphcore/gpt2-medium-wikitext-103 Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools that enable maximum efficiency when training and running models on Graphcore's IPUs, a completely new kind of massively parallel processor built to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore has released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset, and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description GPT2 is a large transformer-based language model built from transformer decoder blocks (BERT, on the other hand, uses transformer encoder blocks). It moves layer normalisation to the input of each sub-block, similar to a pre-activation residual network, and adds an additional layer normalisation after the final block. Paper link: [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) ## Intended uses & limitations This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the [wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset. It achieves the following results on the evaluation set: - Loss: 2.6973 ## Training and evaluation data Trained on the WikiText-103 dataset (extracted from Wikipedia): - [HuggingFace/wikitext-103-raw-v1](https://huggingface.co/datasets/wikitext) dataset ## Training procedure Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore). 
Command line: ``` python examples/language-modeling/run_clm.py \ --model_name_or_path gpt2-medium \ --ipu_config_name Graphcore/gpt2-medium-ipu \ --dataset_name wikitext \ --dataset_config_name wikitext-103-raw-v1 \ --do_train \ --do_eval \ --num_train_epochs 10 \ --dataloader_num_workers 64 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 256 \ --output_dir /tmp/clm_output_medium \ --logging_steps 5 \ --learning_rate 1e-5 \ --lr_scheduler_type linear \ --loss_scaling 16384 \ --weight_decay 0.01 \ --warmup_ratio 0.1 \ --ipu_config_overrides="embedding_serialization_factor=5,inference_device_iterations=9,replication_factor=2,inference_replication_factor=2,ipus_per_replica=8,layers_per_ipu=[0 3 3 3 3 4 4 4],matmul_proportion=0.25" \ --dataloader_drop_last \ --pod_type pod16 ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 256 - total_train_batch_size: 1024 - total_eval_batch_size: 18 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 - training precision: Mixed Precision ### Training results ``` ***** train metrics ***** "epoch": 10.0, "train_loss": 2.8070910754504506, "train_runtime": 11217.8167, "train_samples": 114248, "train_samples_per_second": 101.845, "train_steps_per_second": 0.099 ***** eval metrics ***** "eval_loss": 2.697265625, "eval_samples": 240, "perplexity": 14.83910053420958 ``` ### Framework versions - Transformers 4.18.0.dev0 - Pytorch 1.10.0+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
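As a sanity check on the evaluation metrics above, the reported perplexity is simply the exponential of the evaluation loss:

```python
# exp(eval_loss) reproduces the reported perplexity.
import math
print(math.exp(2.697265625))  # ≈ 14.8391, matching the reported value
```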
6598b8602322fb7b692557769e65ded0
Karnezis/finetuning-sentiment-model-3000-samples
Karnezis
distilbert
19
12
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,055
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3136 - Accuracy: 0.8767 - F1: 0.8771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
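For reference, a minimal sentiment-classification sketch (not from the original card; the review text is illustrative, and label names depend on the exported config and may appear as LABEL_0/LABEL_1):

```python
# Sketch only: classify a movie review with the IMDB-finetuned checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="Karnezis/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a delightful surprise from start to finish."))
```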
be70f47e07a642c7f209b913c21858eb
Rocketknight1/distilgpt2-finetuned-wikitext2
Rocketknight1
gpt2
19
26
transformers
0
text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,192
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rocketknight1/distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8577 - Validation Loss: 3.6752 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8577 | 3.6752 | 0 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.8.0-rc0 - Datasets 1.17.0 - Tokenizers 0.11.0
fb69c73ae3cd38d3e1330d76c9dc553e
Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
Luciano
xlm-roberta
12
7
transformers
0
token-classification
true
false
false
mit
['pt']
['lener_br']
null
4
2
2
0
0
0
0
['generated_from_trainer']
true
true
true
2,767
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-lener_br-finetuned-lener-br This model is a fine-tuned version of [Luciano/xlm-roberta-base-finetuned-lener_br](https://huggingface.co/Luciano/xlm-roberta-base-finetuned-lener_br) on the lener_br dataset. It achieves the following results on the evaluation set: - Loss: nan - Precision: 0.9206 - Recall: 0.9294 - F1: 0.9250 - Accuracy: 0.9833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0657 | 1.0 | 1957 | nan | 0.7780 | 0.8687 | 0.8209 | 0.9718 | | 0.0321 | 2.0 | 3914 | nan | 0.8755 | 0.8708 | 0.8731 | 0.9793 | | 0.0274 | 3.0 | 5871 | nan | 0.8096 | 0.9124 | 0.8579 | 0.9735 | | 0.0216 | 4.0 | 7828 | nan | 0.7913 | 0.8842 | 0.8352 | 0.9718 | | 0.0175 | 5.0 | 9785 | nan | 0.7735 | 0.9248 | 0.8424 | 0.9721 | | 0.0117 | 6.0 | 11742 | nan | 0.9206 | 0.9294 | 0.9250 | 0.9833 | | 0.0121 | 7.0 | 13699 | nan | 0.8988 | 0.9318 | 0.9150 | 0.9819 | | 0.0086 | 8.0 | 15656 | nan | 0.8922 | 0.9175 | 0.9047 | 0.9801 | | 0.007 | 9.0 | 17613 | nan | 0.8482 | 0.8997 | 0.8732 | 0.9769 | | 0.0051 | 10.0 | 19570 | nan | 0.8730 | 0.9274 | 0.8994 | 0.9798 | | 0.0045 | 11.0 | 21527 | nan | 0.9172 | 0.9051 | 0.9111 | 0.9819 | | 0.0014 | 12.0 | 23484 | nan | 0.9138 | 0.9155 | 0.9147 | 0.9823 | | 0.0029 | 13.0 | 25441 | nan | 0.9099 | 0.9287 | 0.9192 | 0.9834 | | 0.0035 | 14.0 | 27398 | nan | 0.9019 | 0.9294 | 0.9155 | 0.9831 | | 0.0005 | 15.0 | 29355 | nan | 0.8886 | 0.9343 | 0.9109 | 0.9825 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
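A minimal NER sketch with the token-classification pipeline (not part of the original card; the Portuguese sentence is illustrative, and entity grouping via `aggregation_strategy` assumes the exported label map uses the usual B-/I- scheme of lener_br):

```python
# Sketch only: extract legal-domain entities from Portuguese text.
from transformers import pipeline

ner = pipeline("token-classification",
               model="Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br",
               aggregation_strategy="simple")
print(ner("O recurso foi julgado pelo Supremo Tribunal Federal."))
```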
8f511f38e280e28920057863e61a9251
okho0653/Bio_ClinicalBERT-zero-shot-finetuned-all-cad
okho0653
bert
13
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
972
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bio_ClinicalBERT-zero-shot-finetuned-all-cad This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
ddea40bb3f2c7f00ebd955378af2fca4
jonatasgrosman/exp_w2v2t_fa_unispeech-ml_s998
jonatasgrosman
unispeech
10
8
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fa']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fa']
false
true
true
500
false
# exp_w2v2t_fa_unispeech-ml_s998 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
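As a rough usage sketch, the HuggingSound tool mentioned above can load the checkpoint directly; the audio paths are placeholders, and the files should be 16 kHz (or resampled) as the card requires.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_unispeech-ml_s998")
audio_paths = ["/path/to/sample1.mp3", "/path/to/sample2.wav"]  # placeholder paths

transcriptions = model.transcribe(audio_paths)
for item in transcriptions:
    print(item["transcription"])
```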
23656522f30941488e778548280c9ecc
nandysoham16/14-clustered_aug
nandysoham16
distilbert
8
0
keras
0
null
false
true
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
4,805
false
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> ['The_Legend_of_Zelda:_Twilight_Princess', 'Symbiosis', 'Tristan_da_Cunha', 'Hokkien', 'Thuringia', 'Samoa', 'Chinese_characters', 'Digimon', 'Tuvalu', 'Geological_history_of_Earth'] - **Developed by:** nandysoham - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure [optional] <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
3b308a6668dc2d5a5c9e7fac8c68eb01
Graphcore/groupbert-base-uncased
Graphcore
groupbert
14
1,128
transformers
1
null
true
false
false
apache-2.0
['en']
['Graphcore/wikipedia-bert-128', 'Graphcore/wikipedia-bert-512']
null
1
0
1
0
0
0
0
['generated_from_trainer']
true
true
true
7,551
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Graphcore/groupbert-base-uncased Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description GroupBERT (Bidirectional Encoder Representations from Transformers) is a transformers model designed by Graphcore to pretrain bidirectional representations from unlabelled texts. GroupBERT uses grouped convolutions and matmuls in the encoder, which makes it possible to parallelize computation and achieve higher parameter efficiency. More details are described in the [GroupBERT paper](https://arxiv.org/pdf/2106.05822.pdf). It was trained with two pretraining objectives: masked language modelling (MLM) and next sentence prediction (NSP). First, MLM differs from a traditional LM, which sees the words one after another, in that it allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations. Similarly to BERT, it enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. The pre-trained representations reduce the need for task-specific architectures and engineering effort, and the model achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks. ## Intended uses & limitations This model is a pre-trained GroupBERT-Base trained in two phases on the [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) and [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) datasets. It was trained on a Graphcore IPU-POD16 using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore). Graphcore and Hugging Face are working together to make training of Transformer models on IPUs fast and easy. Learn more about how to take advantage of the power of Graphcore IPUs to train Transformer models at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). ## Training and evaluation data Trained on wikipedia datasets: - [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) - [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) ## Fine-tuning with these weights These weights can be used in either `transformers` or [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore).
For example, to fine-tune the SQuAD v1 with `optimum-graphcore` you can do: ``` python examples/question-answering/run_qa.py \ --model_name_or_path Graphcore/groupbert-base-uncased \ --ipu_config_name Graphcore/groupbert-base-uncased \ --dataset_name squad \ --version_2_with_negative False \ --do_train \ --do_eval \ --pad_on_batch_axis \ --num_train_epochs 1 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 16 \ --gradient_accumulation_steps 10 \ --pod_type pod16 \ --learning_rate 4e-4 \ --max_seq_length 384 \ --doc_stride 128 \ --seed 42 \ --lr_scheduler_type linear \ --lamb \ --loss_scaling 64 \ --weight_decay 0.01 \ --warmup_ratio 0.1 \ --logging_steps 5 \ --save_steps -1 \ --dataloader_num_workers 64 \ --output_dir output/squad_groupbert_base ``` ## Training procedure Trained MLM and NSP pre-training scheme from [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962). Trained on a Graphcore IPU-POD16 using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore). It was trained with the IPUConfig [Graphcore/bert-base-ipu](https://huggingface.co/Graphcore/bert-base-ipu/). Command lines: Phase 1: ``` python examples/language-modeling/run_pretraining.py \ --model_type groupbert \ --tokenizer_name bert-base-uncased \ --ipu_config_name Graphcore/bert-base-ipu \ --dataset_name Graphcore/wikipedia-bert-128 \ --do_train \ --logging_steps 5 \ --max_seq_length 128 \ --max_steps 10500 \ --is_already_preprocessed \ --dataloader_num_workers 64 \ --dataloader_mode async_rebatched \ --lamb \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 2000 \ --pod_type pod16 \ --learning_rate 0.012 \ --loss_scaling 16384 \ --weight_decay 0.01 \ --warmup_ratio 0.15 \ --groupbert_schedule \ --config_overrides "hidden_dropout_prob=0.0,attention_probs_dropout_prob=0.0" \ --ipu_config_overrides device_iterations="1,matmul_proportion=0.22,layers_per_ipu=[1 3 4 4]" \ --output_dir output-pretrain-groupbert-base-phase1 ``` Phase 2: ``` python examples/language-modeling/run_pretraining.py \ --model_type groupbert \ --tokenizer_name bert-base-uncased \ --ipu_config_name Graphcore/bert-base-ipu \ --dataset_name Graphcore/wikipedia-bert-512 \ --model_name_or_path ./output-pretrain-bert-base-phase1 \ --do_train \ --logging_steps 5 \ --max_seq_length 512 \ --max_steps 2038 \ --is_already_preprocessed \ --dataloader_num_workers 128 \ --dataloader_mode async_rebatched \ --lamb \ --per_device_train_batch_size 2 \ --gradient_accumulation_steps 2048 \ --pod_type pod16 \ --learning_rate 0.01 \ --loss_scaling 128 \ --weight_decay 0.01 \ --warmup_ratio 0.15 \ --groupbert_schedule \ --config_overrides "hidden_dropout_prob=0.0,attention_probs_dropout_prob=0.0" \ --ipu_config_overrides "device_iterations=1,embedding_serialization_factor=2,matmul_proportion=0.22,layers_per_ipu=[1 3 4 4]" \ --output_dir output-pretrain-groupbert-base-phase2 ``` ### Training hyperparameters The following hyperparameters were used during phase 1 training: - learning_rate: 0.012 - train_batch_size: 8 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 200 - total_train_batch_size: 64000 - total_eval_batch_size: 20 - optimizer: LAMB - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - training_steps: 10500 - training precision: Mixed Precision The following hyperparameters were used during phase 2 training: - learning_rate: 0.01 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - distributed_type: IPU - 
gradient_accumulation_steps: 2048 - total_train_batch_size: 16384 - total_eval_batch_size: 20 - optimizer: LAMB - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.15 - training_steps: 2038 - training precision: Mixed Precision ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.10.0+cpu - Datasets 2.6.1 - Tokenizers 0.12.1
44a49877783e91806822b76229070271
ViktorDo/distilbert-base-uncased-finetuned-powo_all
ViktorDo
distilbert
4
2
transformers
0
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,365
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-powo_all This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -343, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
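The optimizer dump above describes an AdamWeightDecay optimizer with a 1000-step warmup followed by linear (power 1.0) decay from 2e-05. A hedged sketch of rebuilding an equivalent Keras optimizer with `transformers.create_optimizer` is shown below; the total step count is an assumption, since the logged `decay_steps: -343` looks like a misconfigured value rather than an intended setting.

```python
from transformers import create_optimizer

# Assumed total step count for illustration; the 1000 warmup steps and 2e-05 peak
# learning rate are taken from the card; weight_decay_rate matches the 0.01 in the log.
num_train_steps = 5000
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=num_train_steps,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
# `optimizer` is an AdamWeightDecay instance that can be passed to model.compile(...)
```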
36fd5ed06a9df53d6c04ddd5b2c75984
ShadoWxShinigamI/SD-2-MJart
ShadoWxShinigamI
null
4
0
null
17
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
1
1
0
[]
false
true
true
957
false
## Textual Inversion Embed + Hypernetwork for SD 2 models by ShadoWxShinigamI Trained on 200 BLIP-captioned images from my personal MJ generations. Meant to be used with 768 models. 16 vectors - 625 steps - TI embed. Swish - 10000 steps - hypernetwork. The hypernetwork is meant as an augment to be used alongside the embed. Using it at 0.5 strength tends to produce the best output (YMMV). Examples: ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1670827476778-633a520aecbd8b19357b4806.png) ![00001-335098425.png](https://s3.amazonaws.com/moonup/production/uploads/1670828191063-633a520aecbd8b19357b4806.png) ![anime.png](https://s3.amazonaws.com/moonup/production/uploads/1670828241828-633a520aecbd8b19357b4806.png) ![monkey.png](https://s3.amazonaws.com/moonup/production/uploads/1670828303588-633a520aecbd8b19357b4806.png) ![panda.png](https://s3.amazonaws.com/moonup/production/uploads/1670828302002-633a520aecbd8b19357b4806.png)
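Hypernetworks are an AUTOMATIC1111-webui feature, but the textual-inversion embed on its own can be loaded in `diffusers` roughly as sketched below; the base checkpoint, embedding file name and trigger token are assumptions, so check the repository files for the actual names.

```python
import torch
from diffusers import StableDiffusionPipeline

# A 768px SD 2.x base model, since the embed is meant for 768 models.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Hypothetical weight_name/token -- look up the actual embedding file and trigger word in the repo.
pipe.load_textual_inversion("ShadoWxShinigamI/SD-2-MJart", weight_name="mjart.pt", token="mjart")

image = pipe("a portrait of a wizard, mjart style", num_inference_steps=30).images[0]
image.save("mjart_sample.png")
```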
62b12d2b1250275c1e024131a97592f9
HAriGa/my_awesome_model
HAriGa
bert
14
17
transformers
0
text-classification
true
false
false
mit
null
['gnad10']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,264
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on the gnad10 dataset. It achieves the following results on the evaluation set: - Loss: 0.3414 - F1: 0.9001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5884 | 1.0 | 578 | 0.3510 | 0.8940 | | 0.2389 | 2.0 | 1156 | 0.3414 | 0.9001 | ### Framework versions - Transformers 4.26.0 - Pytorch 2.0.0.dev20230126+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
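GNAD10 is a German news-topic classification dataset, so a minimal hedged inference sketch looks like the following; the headline is illustrative and the topic label names come from the checkpoint config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="HAriGa/my_awesome_model")
# Illustrative German headline; the returned label is one of the GNAD10 topic classes.
print(clf("Der DAX schließt nach einem volatilen Handelstag deutlich im Plus."))
```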
a587b6b812d092941d05675e98a7b228
pritoms/distilgpt2-YTTranscriptTrial2
pritoms
gpt2
9
4
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,243
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-YTTranscriptTrial2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.8738 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 70 | 6.0027 | | No log | 2.0 | 140 | 5.9072 | | No log | 3.0 | 210 | 5.8738 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
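A minimal generation sketch for this fine-tuned distilgpt2 checkpoint is shown below; the prompt and sampling settings are illustrative, not values from the card.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/distilgpt2-YTTranscriptTrial2")

out = generator(
    "In today's video we are going to talk about",  # illustrative transcript-style prompt
    max_new_tokens=40,
    do_sample=True,
    temperature=0.9,
)
print(out[0]["generated_text"])
```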
141816358dc304eef378c7613e4117b1
anuragshas/whisper-large-v2-as
anuragshas
whisper
22
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['as']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,389
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large-v2 Assamese This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 as dataset. It achieves the following results on the evaluation set: - Loss: 0.3451 - Wer: 23.6961 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0008 | 8.47 | 500 | 0.3451 | 23.6961 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
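A minimal transcription sketch with the `transformers` ASR pipeline is given below; the audio path is a placeholder, and the 30-second chunking is just a convenient default for long recordings.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="anuragshas/whisper-large-v2-as",
    chunk_length_s=30,  # split long recordings into 30 s windows
)

print(asr("sample_assamese.wav")["text"])  # placeholder audio file
```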
bc4956fa8358f5b41715b04c297bc6f4
adityay1221/cat.5.32
adityay1221
t5
9
3
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
979
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cat.5.32 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0293 - Bleu: 25.3811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 121 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu102 - Datasets 2.1.0 - Tokenizers 0.12.1
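The card reports BLEU but does not say which text-to-text task the model was tuned for, so the sketch below only shows the generic `text2text-generation` pipeline call; the task prefix and input sentence are assumptions.

```python
from transformers import pipeline

t2t = pipeline("text2text-generation", model="adityay1221/cat.5.32")

# The "translate ..." prefix is an assumption; replace it with whatever prefix the model was trained on.
print(t2t("translate English to German: The cat sits on the mat.", max_new_tokens=40))
```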
473f169b463d5a5f71335f5ec81c4c74
tkesonia/xlm-roberta-base-finetuned-marc-en
tkesonia
xlm-roberta
14
3
transformers
0
text-classification
true
false
false
mit
null
['amazon_reviews_multi']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,274
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-marc-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.9211 - Mae: 0.5122 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1436 | 1.0 | 235 | 1.0181 | 0.5366 | | 0.9756 | 2.0 | 470 | 0.9211 | 0.5122 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
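Since the card evaluates with MAE on amazon_reviews_multi, the labels most likely correspond to 1-5 star ratings; a hedged inference sketch is below, with the exact label-to-star mapping left to the checkpoint's id2label.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="tkesonia/xlm-roberta-base-finetuned-marc-en")

# The returned label should map to a star rating via the checkpoint's id2label config.
print(clf("The product arrived broken and customer service never replied."))
```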
05415165d2b55de5ce9cdc574b2a4415
timm/convnext_nano.in12k_ft_in1k
timm
null
4
3,490
timm
0
image-classification
true
false
false
apache-2.0
null
['imagenet-1k', 'imagenet-12k']
null
0
0
0
0
0
0
0
['image-classification', 'timm']
false
true
true
21,681
false
# Model card for convnext_nano.in12k_ft_in1k A ConvNeXt image classification model. Pretrained in `timm` on ImageNet-12k (an 11821-class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman. ImageNet-12k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program. Fine-tuning performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 15.6 - GMACs: 2.5 - Activations (M): 8.4 - Image size: 224 x 224 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/rwightman/pytorch-image-models - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-12k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('convnext_nano.in12k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'convnext_nano.in12k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. for convnext_base: # torch.Size([1, 128, 56, 56]) # torch.Size([1, 256, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 1024, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'convnext_nano.in12k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (i.e. a (batch_size, num_features, H, W) tensor) output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison ### By Top-1 All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. 
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| |[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | |[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | |[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | |[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | |[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | |[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | |[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | |[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | |[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | |[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | |[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | |[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | |[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | |[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | |[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | |[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | |[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | |[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | |[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | |[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | |[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | 
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | |[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | |[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | |[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | |[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | |[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | |[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | |[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | |[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | |[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | |[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | |[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | |[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | |[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | |[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | |[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | |[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | |[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | |[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | |[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | |[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | |[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | |[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 
|7189.92 |256 | |[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | |[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | |[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ### By Throughput (samples / sec) All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP. |model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| |[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | |[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | |[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | |[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | |[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | |[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | |[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | |[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | |[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | |[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | |[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | |[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | |[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | |[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | |[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | |[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | |[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | |[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | |[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | 
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | |[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | |[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | |[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | |[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | |[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | |[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | |[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | |[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | |[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | |[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | |[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | |[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | |[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | |[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | |[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | |[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | |[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | |[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | |[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | |[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | |[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | |[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | 
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | |[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | |[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | |[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | |[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | |[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ``` ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ```
21c1b955afc8150bdf1881d16c2703c4
tzvc/375132eb-61b1-49ec-83f3-04676640d6c9
tzvc
null
31
2
diffusers
0
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
1
1
0
0
0
0
0
['text-to-image']
false
true
true
1,743
false
### 375132eb-61b1-49ec-83f3-04676640d6c9 Dreambooth model trained by tzvc with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: sdcid (use that in your prompt) ![sdcid 0](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%281%29.jpg)![sdcid 1](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%282%29.jpg)![sdcid 2](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%283%29.jpg)![sdcid 3](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%284%29.jpg)![sdcid 4](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%285%29.jpg)![sdcid 5](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%286%29.jpg)![sdcid 6](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%287%29.jpg)![sdcid 7](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%288%29.jpg)![sdcid 8](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%289%29.jpg)![sdcid 9](https://huggingface.co/tzvc/375132eb-61b1-49ec-83f3-04676640d6c9/resolve/main/concept_images/sdcid_%2810%29.jpg)
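As an alternative to the linked Colab notebook, a minimal `diffusers` inference sketch is shown below; the prompt wording is illustrative, while the "sdcid" concept token comes from the card.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "tzvc/375132eb-61b1-49ec-83f3-04676640d6c9", torch_dtype=torch.float16
).to("cuda")

# "sdcid" is the concept token listed above; the rest of the prompt is illustrative.
image = pipe("a photo of sdcid, studio lighting", num_inference_steps=30).images[0]
image.save("sdcid_sample.png")
```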
07f18fbf18cbe1ed96b1a69e9e21f1d2
aXhyra/demo_hate_31415
aXhyra
distilbert
10
6
transformers
0
text-classification
true
false
false
apache-2.0
null
['tweet_eval']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,387
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_hate_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8697 - F1: 0.7773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.320702985778492e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 282 | 0.4850 | 0.7645 | | 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 | | 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 | | 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
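For completeness, a short hedged sketch of querying the hate-detection checkpoint; tweet_eval's hate subset is binary, and the label strings may be the generic LABEL_0/LABEL_1 from the config.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="aXhyra/demo_hate_31415", top_k=None)

# Returns a score for every label; map LABEL_0/LABEL_1 to not-hate/hate as appropriate.
print(clf("I really dislike everything about that group of people."))
```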
805efd4f36985070c25e2727c3cd19c9
pig4431/TUF_ELECTRA_5E
pig4431
electra
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
4,254
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TUF_ELECTRA_5E This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1195 - Accuracy: 0.94 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6021 | 0.1 | 50 | 0.5770 | 0.7533 | | 0.5301 | 0.2 | 100 | 0.5460 | 0.7533 | | 0.4958 | 0.3 | 150 | 0.4943 | 0.7533 | | 0.4347 | 0.4 | 200 | 0.4112 | 0.8467 | | 0.3565 | 0.5 | 250 | 0.3601 | 0.88 | | 0.3515 | 0.59 | 300 | 0.3465 | 0.9 | | 0.301 | 0.69 | 350 | 0.3214 | 0.9067 | | 0.2963 | 0.79 | 400 | 0.2996 | 0.9 | | 0.2848 | 0.89 | 450 | 0.2511 | 0.9267 | | 0.2548 | 0.99 | 500 | 0.2493 | 0.8933 | | 0.2527 | 1.09 | 550 | 0.2381 | 0.9333 | | 0.2484 | 1.19 | 600 | 0.2099 | 0.9333 | | 0.2267 | 1.29 | 650 | 0.1834 | 0.9333 | | 0.2147 | 1.39 | 700 | 0.1919 | 0.94 | | 0.1961 | 1.49 | 750 | 0.1751 | 0.9333 | | 0.1868 | 1.58 | 800 | 0.1772 | 0.9267 | | 0.2393 | 1.68 | 850 | 0.1726 | 0.92 | | 0.1747 | 1.78 | 900 | 0.1509 | 0.9467 | | 0.2236 | 1.88 | 950 | 0.1532 | 0.94 | | 0.174 | 1.98 | 1000 | 0.1752 | 0.9267 | | 0.1983 | 2.08 | 1050 | 0.1563 | 0.94 | | 0.2015 | 2.18 | 1100 | 0.1494 | 0.9467 | | 0.1563 | 2.28 | 1150 | 0.1876 | 0.9333 | | 0.168 | 2.38 | 1200 | 0.1802 | 0.9333 | | 0.2074 | 2.48 | 1250 | 0.1669 | 0.94 | | 0.1726 | 2.57 | 1300 | 0.1348 | 0.9533 | | 0.1373 | 2.67 | 1350 | 0.1549 | 0.9533 | | 0.1694 | 2.77 | 1400 | 0.1339 | 0.96 | | 0.1782 | 2.87 | 1450 | 0.1417 | 0.9533 | | 0.1771 | 2.97 | 1500 | 0.1228 | 0.96 | | 0.1886 | 3.07 | 1550 | 0.1415 | 0.9533 | | 0.1507 | 3.17 | 1600 | 0.1350 | 0.9467 | | 0.1435 | 3.27 | 1650 | 0.1294 | 0.9467 | | 0.1548 | 3.37 | 1700 | 0.1316 | 0.96 | | 0.1475 | 3.47 | 1750 | 0.1314 | 0.9333 | | 0.1764 | 3.56 | 1800 | 0.1195 | 0.94 | | 0.1668 | 3.66 | 1850 | 0.1199 | 0.94 | | 0.1336 | 3.76 | 1900 | 0.1210 | 0.9467 | | 0.1452 | 3.86 | 1950 | 0.1259 | 0.9467 | | 0.206 | 3.96 | 2000 | 0.1247 | 0.96 | | 0.1704 | 4.06 | 2050 | 0.1253 | 0.9533 | | 0.1489 | 4.16 | 2100 | 0.1194 | 0.94 | | 0.1766 | 4.26 | 2150 | 0.1278 | 0.96 | | 0.1387 | 4.36 | 2200 | 0.1179 | 0.94 | | 0.1269 | 4.46 | 2250 | 0.1270 | 0.96 | | 0.154 | 4.55 | 2300 | 0.1208 | 0.94 | | 0.1481 | 4.65 | 2350 | 0.1210 | 0.94 | | 0.1676 | 4.75 | 2400 | 0.1196 | 0.94 | | 0.1202 | 4.85 | 2450 | 0.1194 | 0.94 | | 0.1323 | 4.95 | 2500 | 0.1195 | 0.94 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.7.1 - Tokenizers 0.13.2
c8077571b48a42aabbe8e99f8f0eb3d5
PlanTL-GOB-ES/ca_anonimization_core_lg
PlanTL-GOB-ES
null
23
5
spacy
0
token-classification
false
false
false
mit
['es', 'ca']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
16,424
false
This is a Spacy multilingual (Catalan & Spanish) anonimization model, for use with BSC's AnonymizationPipeline at: https://github.com/TeMU-BSC/AnonymizationPipeline. The anonymization pipeline is a library for performing sensitive data identification and ultimately anonymization of the detected data in Spanish and Catalan user generated plain text. This is not a standalone model and is meant to work within the pipeline. The model can detect the following entities: `EMAIL`, `FINANCIAL`, `ID`, `LOC`, `MISC`, `ORG`, `PER`, `TELEPHONE`, `VEHICLE`, `ZIP` | Feature | Description | | --- | --- | | **Name** | `ca_anonimization_core_lg` | | **Version** | `1.0.0` | | **spaCy** | `>=3.2.3,<3.3.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | `MIT` | | **Author** | [Joaquin Silveira](https://github.com/TeMU-BSC/AnonymizationPipeline) | ### Label Scheme <details> <summary>View label scheme (322 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumForm=Digit\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Comm`, `POS=AUX\|VerbForm=Inf`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Peri`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `POS=SCONJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `POS=NOUN`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=SYM`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|Polarity=Neg`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Ind`, `POS=PUNCT`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Ind`, `POS=AUX`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=VERB`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Semi`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, 
`Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=PUNCT\|PunctType=Dash`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Colo`, `Gender=Masc\|NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Int`, `POS=PUNCT\|PunctType=Quot`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `POS=VERB`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `POS=X`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Sing\|POS=DET\|PronType=Dem`, `POS=DET`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `NumForm=Digit\|NumType=Ord\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=PRON\|PronType=Int`, `Foreign=Yes\|Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Foreign=Yes\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Comm`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Comm`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `AdvType=Tim\|Degree=Cmp\|POS=ADV`, 
`Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PrepCase=Pre\|PronType=Prs`, `POS=DET\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `POS=INTJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=SCONJ`, `Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=SYM`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=DET`, `Foreign=Yes\|POS=ADV`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Degree=Cmp\|POS=ADJ`, `AdvType=Tim\|POS=SYM`, `Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`ner`** | `EMAIL`, `FINANCIAL`, `ID`, `LOC`, `MISC`, `ORG`, `PER`, `TELEPHONE`, `VEHICLE`, `ZIP` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 69.12 | | `ENTS_P` | 74.60 | | `ENTS_R` | 64.38 | | `NER_LOSS` | 26573.78 |
dd7cdfef753a925cba62fc9f10e7b9f9
henryscheible/mnli_roberta-base_125
henryscheible
null
14
0
null
0
null
true
false
false
mit
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,003
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mnli_roberta-base_125 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.3804 - Accuracy: 0.8695 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 400 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.13.1
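The card stops at the training summary, so the following is a minimal inference sketch (not part of the original card) showing how a GLUE-MNLI fine-tune like this is typically queried with the `transformers` auto classes. The premise/hypothesis pair is only an example, and the label names are read from the checkpoint's config rather than assumed.

```python
# Minimal sketch (not from the original card): scoring a premise/hypothesis pair.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "henryscheible/mnli_roberta-base_125"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# Label order is taken from the checkpoint's config rather than assumed.
for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```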
11677a9c11c9ba229ec51b59bde161cf
AkshatSurolia/BEiT-FaceMask-Finetuned
AkshatSurolia
beit
10
6
transformers
0
image-classification
true
false
false
apache-2.0
null
['Face-Mask18K']
null
0
0
0
0
0
0
0
['image-classification']
false
true
true
2,495
false
# BEiT for Face Mask Detection BEiT model pre-trained and fine-tuned on Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Training Metrics epoch = 0.55 total_flos = 576468516GF train_loss = 0.151 train_runtime = 0:58:16.56 train_samples_per_second = 16.505 train_steps_per_second = 1.032 --- ## Evaluation Metrics epoch = 0.55 eval_accuracy = 0.975 eval_loss = 0.0803 eval_runtime = 0:03:13.02 eval_samples_per_second = 18.629 eval_steps_per_second = 2.331
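The card describes the architecture but not inference, so here is a hedged sketch of how the classifier could be called with the `transformers` auto classes (`AutoImageProcessor` on recent versions; older releases expose the same preprocessing through `AutoFeatureExtractor`). The image path is a placeholder and the label names come from the checkpoint config.

```python
# Minimal sketch (not part of the original card): classifying a single image.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "AkshatSurolia/BEiT-FaceMask-Finetuned"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("face.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # label names come from the checkpoint config
```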
f5843d670fad8c0545f9ac510084f965
jonatasgrosman/exp_w2v2t_es_wav2vec2_s596
jonatasgrosman
wav2vec2
10
2
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['es']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'es']
false
true
true
456
false
# exp_w2v2t_es_wav2vec2_s596 Fine-tuned [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
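A minimal transcription sketch (not from the original card), using the generic `transformers` ASR pipeline; the audio path is a placeholder. When given a file path the pipeline decodes and resamples it (ffmpeg required); raw arrays passed in directly must already be sampled at 16kHz as noted above.

```python
# Minimal sketch (assumption, not from the original card): transcribing a Spanish recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2t_es_wav2vec2_s596",
)

# A file path is decoded and resampled by the pipeline (ffmpeg required);
# raw arrays passed in directly must already be sampled at 16kHz.
result = asr("audio_es.wav")  # placeholder path
print(result["text"])
```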
7f5595e31ce52cd8005a185d4e5d0285
tau/splinter-base
tau
splinter
7
971
transformers
1
question-answering
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['splinter', 'SplinterModel']
false
true
true
2,593
false
# Splinter base model Splinter-base is the pretrained model discussed in the paper [Few-Shot Question Answering by Pretraining Span Selection](https://aclanthology.org/2021.acl-long.239/) (at ACL 2021). Its original repository can be found [here](https://github.com/oriram/splinter). The model is case-sensitive. Note: This model **doesn't** contain the pretrained weights for the QASS layer (see paper for details), and therefore the QASS layer is randomly initialized upon loading it. For the model **with** those weights, see [tau/splinter-base-qass](https://huggingface.co/tau/splinter-base-qass). ## Model description Splinter is a model that is pretrained in a self-supervised fashion for few-shot question answering. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Recurring Span Selection (RSS) objective, which emulates the span selection process involved in extractive question answering. Given a text, clusters of recurring spans (n-grams that appear more than once in the text) are first identified. For each such cluster, all of its instances but one are replaced with a special `[QUESTION]` token, and the model should select the correct (i.e., unmasked) span for each masked one. The model also defines the Question-Aware Span selection (QASS) layer, which selects spans conditioned on a specific question (in order to perform multiple predictions). ## Intended uses & limitations The prime use for this model is few-shot extractive QA. ## Pretraining The model was pretrained on a v3-8 TPU for 2.4M steps. The training data is based on **Wikipedia** and **BookCorpus**. See the paper for more details. ### BibTeX entry and citation info ```bibtex @inproceedings{ram-etal-2021-shot, title = "Few-Shot Question Answering by Pretraining Span Selection", author = "Ram, Ori and Kirstain, Yuval and Berant, Jonathan and Globerson, Amir and Levy, Omer", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.239", doi = "10.18653/v1/2021.acl-long.239", pages = "3066--3079", } ```
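As a hedged illustration (not part of the original card), the checkpoint can be loaded through the `Splinter*` classes that ship with `transformers`; because the QASS head here is randomly initialized, the sketch only extracts encoder representations, and the `[QUESTION]`-marked sentence is an invented example.

```python
# Minimal sketch (not from the original card): extracting contextual representations.
# The QASS head of this checkpoint is randomly initialized, so a
# SplinterForQuestionAnswering loaded from it needs fine-tuning first.
import torch
from transformers import SplinterTokenizer, SplinterModel

model_id = "tau/splinter-base"
tokenizer = SplinterTokenizer.from_pretrained(model_id)
model = SplinterModel.from_pretrained(model_id)

# A recurring-span style input: one occurrence of the span is replaced by [QUESTION].
text = "Paris is the capital of France. [QUESTION] is also its largest city."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```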
f186f5d83e196f2fa807b46a59596fc3
MeshalAlamr/wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_all_classes_3_aug
MeshalAlamr
wav2vec2
10
3
transformers
0
audio-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
5,025
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_all_classes_3_aug This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1190 - Accuracy: 0.7137 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.6888 | 0.96 | 12 | 3.6887 | 0.025 | | 3.8686 | 1.96 | 24 | 3.6837 | 0.0488 | | 3.844 | 2.96 | 36 | 3.5466 | 0.1062 | | 3.7114 | 3.96 | 48 | 3.2589 | 0.1133 | | 2.8339 | 4.96 | 60 | 2.9553 | 0.1883 | | 2.5667 | 5.96 | 72 | 2.8784 | 0.1963 | | 2.1911 | 6.96 | 84 | 2.6379 | 0.2771 | | 1.8461 | 7.96 | 96 | 2.8874 | 0.2929 | | 1.6044 | 8.96 | 108 | 2.4989 | 0.34 | | 1.0916 | 9.96 | 120 | 2.3111 | 0.425 | | 0.9371 | 10.96 | 132 | 2.0899 | 0.4829 | | 0.8177 | 11.96 | 144 | 2.0116 | 0.4971 | | 0.6366 | 12.96 | 156 | 2.0598 | 0.5558 | | 0.549 | 13.96 | 168 | 2.0084 | 0.5575 | | 0.2917 | 14.96 | 180 | 1.8231 | 0.6038 | | 0.2283 | 15.96 | 192 | 1.9943 | 0.6079 | | 0.2382 | 16.96 | 204 | 2.2098 | 0.6083 | | 0.2475 | 17.96 | 216 | 2.3519 | 0.5992 | | 0.1612 | 18.96 | 228 | 2.2483 | 0.5929 | | 0.133 | 19.96 | 240 | 2.2263 | 0.6079 | | 0.1301 | 20.96 | 252 | 2.6094 | 0.5683 | | 0.0993 | 21.96 | 264 | 2.0289 | 0.6417 | | 0.0779 | 22.96 | 276 | 1.9693 | 0.6479 | | 0.0824 | 23.96 | 288 | 2.2471 | 0.6258 | | 0.0872 | 24.96 | 300 | 2.3715 | 0.6538 | | 0.0694 | 25.96 | 312 | 2.5367 | 0.6325 | | 0.0704 | 26.96 | 324 | 2.4467 | 0.6388 | | 0.061 | 27.96 | 336 | 2.1581 | 0.6621 | | 0.0835 | 28.96 | 348 | 2.1672 | 0.6792 | | 0.0402 | 29.96 | 360 | 2.2166 | 0.6596 | | 0.0329 | 30.96 | 372 | 2.6316 | 0.6217 | | 0.0516 | 31.96 | 384 | 2.0840 | 0.6908 | | 0.0455 | 32.96 | 396 | 2.2299 | 0.67 | | 0.0449 | 33.96 | 408 | 2.4341 | 0.6733 | | 0.0332 | 34.96 | 420 | 2.2830 | 0.6725 | | 0.0334 | 35.96 | 432 | 2.2060 | 0.6829 | | 0.025 | 36.96 | 444 | 2.2836 | 0.6554 | | 0.0351 | 37.96 | 456 | 2.5417 | 0.6517 | | 0.0372 | 38.96 | 468 | 2.2738 | 0.6779 | | 0.0136 | 39.96 | 480 | 2.4606 | 0.6525 | | 0.0178 | 40.96 | 492 | 2.1996 | 0.675 | | 0.0116 | 41.96 | 504 | 2.2557 | 0.6763 | | 0.0113 | 42.96 | 516 | 2.2061 | 0.6863 | | 0.014 | 43.96 | 528 | 2.1279 | 0.6925 | | 0.015 | 44.96 | 540 | 2.2151 | 0.6871 | | 0.0197 | 45.96 | 552 | 2.1506 | 0.6929 | | 0.0102 | 46.96 | 564 | 2.1609 | 0.685 | | 0.0115 | 47.96 | 576 | 2.1685 | 0.6854 | | 0.0097 | 48.96 | 588 | 2.2892 | 0.6821 | | 0.0148 | 49.96 | 600 | 2.4085 | 0.6921 | | 0.0114 | 50.96 | 612 | 2.2171 | 0.7104 | | 0.0141 | 51.96 | 624 | 2.1458 | 0.7075 | | 0.0066 | 52.96 | 636 | 2.2046 | 0.7013 | | 0.0128 | 53.96 | 648 | 2.1424 | 0.705 | | 0.0063 | 54.96 | 660 | 2.1425 | 0.7075 | | 0.0094 | 55.96 | 672 | 2.1554 | 
0.7087 | | 0.0161 | 56.96 | 684 | 2.1892 | 0.7063 | | 0.0067 | 57.96 | 696 | 2.1819 | 0.7067 | | 0.0099 | 58.96 | 708 | 2.1341 | 0.7125 | | 0.0067 | 59.96 | 720 | 2.1190 | 0.7137 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
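A minimal usage sketch (assumption, not from the card) with the `transformers` audio-classification pipeline; the file path is a placeholder and should point to a 16kHz recording of one of the trained commands.

```python
# Minimal sketch (assumption, not from the card): classifying one spoken command.
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="MeshalAlamr/wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_all_classes_3_aug",
)

for pred in clf("command.wav", top_k=3):  # placeholder path
    print(pred["label"], round(pred["score"], 3))
```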
cc1cafec8f2644fcebe1e766525d54cc
maretamasaeva/thesis-freeform-yesno
maretamasaeva
roberta
13
4
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,413
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # thesis-freeform-yesno This model is a fine-tuned version of [maretamasaeva/thesis-freeform](https://huggingface.co/maretamasaeva/thesis-freeform) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4547 - Accuracy: 0.0194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.5001 | 1.0 | 9052 | 2.4600 | 0.0194 | | 2.4921 | 2.0 | 18104 | 2.4595 | 0.0194 | | 2.4879 | 3.0 | 27156 | 2.4576 | 0.0194 | | 2.4793 | 4.0 | 36208 | 2.4547 | 0.0194 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.1.0 - Tokenizers 0.12.1
bcfe60aca20098e46bd3a6bcbb2c5a44
pomercier/Francois_Legault
pomercier
null
24
20
diffusers
0
text-to-image
false
false
false
lgpl-3.0
null
null
null
2
0
2
0
0
0
0
['text-to-image']
false
true
true
1,270
false
### Francois Legault This is a Hugging Face text-to-image model built on Stable Diffusion 1.5, a latent text-to-image diffusion model. It is prompted with the name "Francois Legault", the current Premier of Quebec, Canada, and generates an image of him, such as a portrait, a photo of him in a meeting, or him giving a speech. The generated images can be used in a variety of applications, such as creating avatars for virtual assistants, illustrating news articles, or producing personalized images for social media. For example, a virtual assistant could use a generated image of Francois Legault as its avatar; a news article could use a generated image of him giving a speech as its featured image; and a social media user could use a generated image as a personalized profile picture. The model is intended for anyone who wants to generate high-quality images of this subject from a text prompt, and it can save time and resources when creating images for such projects.
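As a hedged sketch (not provided by the author), such a checkpoint is normally loaded with the `diffusers` `StableDiffusionPipeline`; the exact prompt wording the fine-tune responds to best, the half-precision setting, and the CUDA device are assumptions.

```python
# Minimal sketch (not provided by the author): generating one image with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "pomercier/Francois_Legault",
    torch_dtype=torch.float16,  # assumption: half precision on a CUDA GPU
).to("cuda")

prompt = "portrait photo of Francois Legault giving a speech"  # illustrative prompt
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("francois_legault.png")
```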
367eb63845186d5581e016416345bdf4
DOOGLAK/Tagged_Uni_100v9_NER_Model_3Epochs_AUGMENTED
DOOGLAK
bert
13
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['tagged_uni100v9_wikigold_split']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,565
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Tagged_Uni_100v9_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v9_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4080 - Precision: 0.3227 - Recall: 0.2305 - F1: 0.2689 - Accuracy: 0.8557 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 39 | 0.4881 | 0.2185 | 0.0487 | 0.0797 | 0.8066 | | No log | 2.0 | 78 | 0.4431 | 0.2831 | 0.1536 | 0.1992 | 0.8387 | | No log | 3.0 | 117 | 0.4080 | 0.3227 | 0.2305 | 0.2689 | 0.8557 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.11.6
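A minimal inference sketch (assumption, not from the card) using the `transformers` token-classification pipeline; the example sentence is illustrative only.

```python
# Minimal sketch (assumption, not from the card): tagging a sentence.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DOOGLAK/Tagged_Uni_100v9_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for entity in ner("The European Union was founded in 1993 in Maastricht."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```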
8edc917be658ede291538f7e4b2989d9
armamoyl/xlm-roberta-base-finetuned-panx-de
armamoyl
xlm-roberta
44
7
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,299
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1711 - F1: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:---:| | No log | 1.0 | 2097 | 0.1970 | 0.0 | | No log | 2.0 | 4194 | 0.1686 | 0.0 | | No log | 3.0 | 6291 | 0.1711 | 0.0 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu116 - Datasets 2.6.1 - Tokenizers 0.12.1
aefcd30186b4da89a9910ed39578db00
Wiebke/bert-base-casedepoch3_sexist_baseline_with_reddit_and_gab
Wiebke
bert
12
18
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,683
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-casedepoch3_sexist_baseline_with_reddit_and_gab This model is a fine-tuned version of [Wiebke/bert-base-casedepoch3_sexist_baseline](https://huggingface.co/Wiebke/bert-base-casedepoch3_sexist_baseline) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4434 - Accuracy: 0.8707 - F1: 0.8699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.0279 | 0.16 | 500 | 0.5257 | 0.8564 | 0.8540 | | 0.0273 | 0.31 | 1000 | 0.4614 | 0.8607 | 0.8607 | | 0.0235 | 0.47 | 1500 | 0.4873 | 0.8657 | 0.8620 | | 0.0201 | 0.63 | 2000 | 0.4544 | 0.8729 | 0.8694 | | 0.0215 | 0.78 | 2500 | 0.4597 | 0.865 | 0.8653 | | 0.0184 | 0.94 | 3000 | 0.4434 | 0.8707 | 0.8699 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
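A minimal scoring sketch (not part of the original card) with the `transformers` text-classification pipeline; the example texts are placeholders and the label names (e.g. `LABEL_0`/`LABEL_1`) come from the checkpoint config.

```python
# Minimal sketch (not part of the original card): scoring a batch of texts.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Wiebke/bert-base-casedepoch3_sexist_baseline_with_reddit_and_gab",
)

texts = ["Example sentence one.", "Example sentence two."]  # placeholders
for text, pred in zip(texts, clf(texts)):
    # Label names (e.g. LABEL_0 / LABEL_1) come from the checkpoint config.
    print(pred["label"], round(pred["score"], 3), "<-", text)
```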
b1bcfd81b722f84bd82a964a37552903
ConvLab/roberta-base-trippy-dst-multiwoz21
ConvLab
roberta
4
12
transformers
0
null
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['dialogue state tracking', 'task-oriented dialog']
false
true
true
1,748
false
# roberta-base-trippy-dst-multiwoz21 This is a TripPy model trained on [MultiWOZ 2.1](https://github.com/budzianowski/multiwoz) for use in [ConvLab-3](https://github.com/ConvLab/ConvLab-3). This model predicts informable slots, requestable slots, general actions and domain indicator slots. Expected joint goal accuracy for MultiWOZ 2.1 is in the range of 55-56\%. For information about TripPy DST, refer to [TripPy: A Triple Copy Strategy for Value Independent Neural Dialog State Tracking](https://aclanthology.org/2020.sigdial-1.4/). The training and evaluation code is available at the official [TripPy repository](https://gitlab.cs.uni-duesseldorf.de/general/dsml/trippy-public). ## Training procedure The model was trained on MultiWOZ 2.1 data via supervised learning using the [TripPy codebase](https://gitlab.cs.uni-duesseldorf.de/general/dsml/trippy-public). MultiWOZ 2.1 data was loaded via ConvLab-3's unified data format dataloader. The pre-trained encoder is [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) (base). Fine-tuning the encoder and training the DST specific classification heads was conducted for 10 epochs. ### Training hyperparameters ``` python3 run_dst.py \ --task_name="unified" \ --model_type="roberta" \ --model_name_or_path="roberta-base" \ --dataset_config=dataset_config/unified_multiwoz21.json \ --do_lower_case \ --learning_rate=1e-4 \ --num_train_epochs=10 \ --max_seq_length=180 \ --per_gpu_train_batch_size=24 \ --per_gpu_eval_batch_size=32 \ --output_dir=results \ --save_epochs=2 \ --eval_all_checkpoints \ --warmup_proportion=0.1 \ --adam_epsilon=1e-6 \ --weight_decay=0.01 \ --fp16 \ --do_train \ --predict_type=dummy \ --seed=42 ```
f5d80b9ccb528893479603f7aeaeda7e
bookbot/wav2vec2-ljspeech-gruut
bookbot
wav2vec2
20
2,944
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['w11wo/ljspeech_phonemes']
null
0
0
0
0
0
0
0
['phoneme-recognition', 'generated_from_trainer']
true
true
true
6,730
false
# Wav2Vec2 LJSpeech Gruut Wav2Vec2 LJSpeech Gruut is an automatic speech recognition model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a fine-tuned version of [Wav2Vec2-Base](https://huggingface.co/facebook/wav2vec2-base) on the [LJSpech Phonemes](https://huggingface.co/datasets/w11wo/ljspeech_phonemes) dataset. Instead of being trained to predict sequences of words, this model was trained to predict sequence of phonemes, e.g. `["h", "ɛ", "l", "ˈoʊ", "w", "ˈɚ", "l", "d"]`. Therefore, the model's [vocabulary](https://huggingface.co/bookbot/wav2vec2-ljspeech-gruut/blob/main/vocab.json) contains the different IPA phonemes found in [gruut](https://github.com/rhasspy/gruut). This model was trained using HuggingFace's PyTorch framework. All training was done on a Google Cloud Engine VM with a Tesla A100 GPU. All necessary scripts used for training could be found in the [Files and versions](https://huggingface.co/bookbot/wav2vec2-ljspeech-gruut/tree/main) tab, as well as the [Training metrics](https://huggingface.co/bookbot/wav2vec2-ljspeech-gruut/tensorboard) logged via Tensorboard. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------- | ------- | ----------- | ------------------------------- | | `wav2vec2-ljspeech-gruut` | 94M | wav2vec 2.0 | `LJSpech Phonemes` Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | PER (w/o stress) | CER (w/o stress) | | ---------------------------- | :--------------: | :--------------: | | `LJSpech Phonemes` Test Data | 0.99% | 0.58% | ## Usage ```py from transformers import AutoProcessor, AutoModelForCTC, Wav2Vec2Processor import librosa import torch from itertools import groupby from datasets import load_dataset def decode_phonemes( ids: torch.Tensor, processor: Wav2Vec2Processor, ignore_stress: bool = False ) -> str: """CTC-like decoding. 
First removes consecutive duplicates, then removes special tokens.""" # removes consecutive duplicates ids = [id_ for id_, _ in groupby(ids)] special_token_ids = processor.tokenizer.all_special_ids + [ processor.tokenizer.word_delimiter_token_id ] # converts id to token, skipping special tokens phonemes = [processor.decode(id_) for id_ in ids if id_ not in special_token_ids] # joins phonemes prediction = " ".join(phonemes) # whether to ignore IPA stress marks if ignore_stress == True: prediction = prediction.replace("ˈ", "").replace("ˌ", "") return prediction checkpoint = "bookbot/wav2vec2-ljspeech-gruut" model = AutoModelForCTC.from_pretrained(checkpoint) processor = AutoProcessor.from_pretrained(checkpoint) sr = processor.feature_extractor.sampling_rate # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") audio_array = ds[0]["audio"]["array"] # or, read a single audio file # audio_array, _ = librosa.load("myaudio.wav", sr=sr) inputs = processor(audio_array, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs["input_values"]).logits predicted_ids = torch.argmax(logits, dim=-1) prediction = decode_phonemes(predicted_ids[0], processor, ignore_stress=True) # => should give 'b ɪ k ʌ z j u ɚ z s l i p ɪ ŋ ɪ n s t ɛ d ə v k ɔ ŋ k ɚ ɪ ŋ ð ə l ʌ v l i ɹ z p ɹ ɪ n s ə s h æ z b ɪ k ʌ m ə v f ɪ t ə l w ɪ θ n b oʊ p ɹ ə ʃ æ ɡ i s ɪ t s ð ɛ ɹ ə k u ɪ ŋ d ʌ v' ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 0.0001 - `train_batch_size`: 16 - `eval_batch_size`: 8 - `seed`: 42 - `gradient_accumulation_steps`: 2 - `total_train_batch_size`: 32 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_steps`: 1000 - `num_epochs`: 30.0 - `mixed_precision_training`: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | | :-----------: | :---: | :---: | :-------------: | :----: | :----: | | No log | 1.0 | 348 | 2.2818 | 1.0 | 1.0 | | 2.6692 | 2.0 | 696 | 0.2045 | 0.0527 | 0.0299 | | 0.2225 | 3.0 | 1044 | 0.1162 | 0.0319 | 0.0189 | | 0.2225 | 4.0 | 1392 | 0.0927 | 0.0235 | 0.0147 | | 0.0868 | 5.0 | 1740 | 0.0797 | 0.0218 | 0.0143 | | 0.0598 | 6.0 | 2088 | 0.0715 | 0.0197 | 0.0128 | | 0.0598 | 7.0 | 2436 | 0.0652 | 0.0160 | 0.0103 | | 0.0447 | 8.0 | 2784 | 0.0571 | 0.0152 | 0.0095 | | 0.0368 | 9.0 | 3132 | 0.0608 | 0.0163 | 0.0112 | | 0.0368 | 10.0 | 3480 | 0.0586 | 0.0137 | 0.0083 | | 0.0303 | 11.0 | 3828 | 0.0641 | 0.0141 | 0.0085 | | 0.0273 | 12.0 | 4176 | 0.0656 | 0.0131 | 0.0079 | | 0.0232 | 13.0 | 4524 | 0.0690 | 0.0133 | 0.0082 | | 0.0232 | 14.0 | 4872 | 0.0598 | 0.0128 | 0.0079 | | 0.0189 | 15.0 | 5220 | 0.0671 | 0.0121 | 0.0074 | | 0.017 | 16.0 | 5568 | 0.0654 | 0.0114 | 0.0069 | | 0.017 | 17.0 | 5916 | 0.0751 | 0.0118 | 0.0073 | | 0.0146 | 18.0 | 6264 | 0.0653 | 0.0112 | 0.0068 | | 0.0127 | 19.0 | 6612 | 0.0682 | 0.0112 | 0.0069 | | 0.0127 | 20.0 | 6960 | 0.0678 | 0.0114 | 0.0068 | | 0.0114 | 21.0 | 7308 | 0.0656 | 0.0111 | 0.0066 | | 0.0101 | 22.0 | 7656 | 0.0669 | 0.0109 | 0.0066 | | 0.0092 | 23.0 | 8004 | 0.0677 | 0.0108 | 0.0065 | | 0.0092 | 24.0 | 8352 | 0.0653 | 0.0104 | 0.0063 | | 0.0088 | 25.0 | 8700 | 0.0673 | 0.0102 | 0.0063 | | 0.0074 | 26.0 | 9048 | 0.0669 | 0.0105 | 0.0064 | | 0.0074 | 27.0 | 9396 | 0.0707 | 0.0101 | 0.0061 | | 0.0066 | 28.0 | 9744 | 0.0673 | 0.0100 | 0.0060 | | 0.0058 | 29.0 | 10092 | 
0.0689 | 0.0100 | 0.0059 | | 0.0058 | 30.0 | 10440 | 0.0683 | 0.0099 | 0.0058 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors Wav2Vec2 LJSpeech Gruut was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Cloud. ## Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.10.0 - Datasets 2.7.1 - Tokenizers 0.13.2 - Gruut 2.3.4
28281abd2b834ea784f81db1f7b19267
MK096/finetuning-sentiment-model-3000-samples
MK096
distilbert
23
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,055
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2453 - Accuracy: 0.92 - F1: 0.9098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
a432774109dcc192a4ad56db01b02012
anhtv/distilbert-base-uncased-finetuned-cola
anhtv
distilbert
13
2
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,571
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7992 - Matthews Correlation: 0.5530 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.523 | 1.0 | 535 | 0.5411 | 0.4128 | | 0.3479 | 2.0 | 1070 | 0.5195 | 0.4901 | | 0.2357 | 3.0 | 1605 | 0.5492 | 0.5444 | | 0.1758 | 4.0 | 2140 | 0.7339 | 0.5387 | | 0.1244 | 5.0 | 2675 | 0.7992 | 0.5530 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
891e6a6b6e436c3380d40e2fe105b87c
tomekkorbak/gifted_shirley
tomekkorbak
null
2
0
null
0
null
false
false
false
mit
['en']
['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
8,997
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gifted_shirley This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 1562 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True, 'skip_tokens': 1661599744}, 'generation': {'every_n_steps': 16, 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'every_n_steps': 16, 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': 
'81a1701e025d2c65ae6e8c2103df559071523ee0', 'value_head_config': {'is_detached': False}}, 'path_or_name': 'tomekkorbak/goofy_pasteur'}, 'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'gifted_shirley', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 1673, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1661599744, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1rminqjf
d029d1797b63bf4122bb97ecd5a3e495
jonatasgrosman/exp_w2v2t_uk_xlsr-53_s324
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['uk']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'uk']
false
true
true
461
false
# exp_w2v2t_uk_xlsr-53_s324 Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
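Since the card credits the HuggingSound tool, a hedged sketch of that library's documented interface follows; the audio path is a placeholder and recordings should be sampled at 16kHz as noted above.

```python
# Sketch following HuggingSound's documented interface (assumption, not from the card).
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_uk_xlsr-53_s324")
audio_paths = ["sample_uk.wav"]  # placeholder; recordings should be 16kHz

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```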
14b03d0d2f22a6edc4ae8ab591c55de3
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_data_aug_cola_256
gokuls
distilbert
17
0
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,744
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_sa_GLUE_Experiment_logit_kd_data_aug_cola_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6912 - Matthews Correlation: 0.1233 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.6278 | 1.0 | 835 | 0.6912 | 0.1233 | | 0.5109 | 2.0 | 1670 | 0.7554 | 0.1039 | | 0.4467 | 3.0 | 2505 | 0.7497 | 0.1097 | | 0.3975 | 4.0 | 3340 | 0.7609 | 0.1608 | | 0.3601 | 5.0 | 4175 | 0.7996 | 0.1259 | | 0.3298 | 6.0 | 5010 | 0.7797 | 0.1247 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.9.0 - Tokenizers 0.13.2
7f97e265ff429e50a1d5a6167f6d6165
Haakf/distilbert-base-uncased-padded_right_allsides_news
Haakf
distilbert
8
2
transformers
0
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,893
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Haakf/distilbert-base-uncased-padded_right_allsides_news This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0256 - Validation Loss: 1.9353 - Epoch: 8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -797, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.1715 | 2.0552 | 0 | | 2.1470 | 1.9776 | 1 | | 2.1101 | 1.9531 | 2 | | 2.0782 | 1.9760 | 3 | | 2.0417 | 1.9202 | 4 | | 2.0219 | 1.9425 | 5 | | 2.0121 | 1.9255 | 6 | | 2.0290 | 1.9868 | 7 | | 2.0256 | 1.9353 | 8 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.7.1 - Tokenizers 0.13.2
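A minimal fill-mask sketch (not from the original card); the repository ships TensorFlow weights, so the pipeline is pinned to the TF backend, and the example sentence is illustrative only.

```python
# Minimal sketch (not from the original card): filling a masked token with the TF weights.
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="Haakf/distilbert-base-uncased-padded_right_allsides_news",
    framework="tf",  # the repository only ships TensorFlow weights
)

for pred in fill("The senate passed the [MASK] on Tuesday."):  # illustrative sentence
    print(pred["token_str"], round(pred["score"], 3))
```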
97dcd72541202454c3a5d0e51e99952c
XLab/rst-sentiment-classification-11b
XLab
t5
6
1
transformers
2
text2text-generation
true
false
false
afl-3.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
11,247
false
<p align="center"> <br> <img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/> <br> </p> # reStructured Pre-training (RST) official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html) #### RST is a new paradigm for language pre-training, which * unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, Dailymail, Wikipedia, Wikidata, Wikihow, Wordnet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model, * surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.) * achieves superior performance in the National College Entrance Examination **(Gaokao-English, 高考-英语)**, scoring **40** points higher than the average student and 15 points higher than GPT3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) in the 2018 English exam. In such a pre-training paradigm, * Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing * Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access. ## Model Description We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters. | Model | Description | Recommended Application | ----------- | ----------- |----------- | | rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) | | rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks, factual checker | | rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) | | rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction | | rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains| | rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction | | rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification | | rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning | 
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning | | **rst-sentiment-classification-11b** | **Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment** | **Sentiment classification, emotion classification** | | rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering| | rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling| | rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks | ## Have a try? ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b") model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b") inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)) ``` ## Data for reStructure Pre-training This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research. We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals. 
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush: | Mine | Signal | #Sample | Use in DataLab | Some Applications | | --- | --- | --- | --- | --- | | [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion| | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification | | [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning| | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion | | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning | | [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation| | [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation | | [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference| |[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension| | [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | 
`load_dataset("rst", "qa_logiqa")` | Reading comprehension| | [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension | | [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension| | [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension| | [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension | | [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification| | [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion| | [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition| | [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion| ## Bibtext for Citation Info ``` @article{yuan2022restructured, title={reStructured Pre-training}, author={Yuan, Weizhe and Liu, Pengfei}, journal={arXiv preprint arXiv:2206.11147}, year={2022} } ```
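As a supplement to the signal table above, here is a minimal sketch of pulling one of the listed signal sets through DataLab; it assumes the `datalabs` package is installed (`pip install datalabs`) and that the signal name matches the table entries.

```python
# Hedged sketch: assumes the `datalabs` package exposes the load_dataset
# calls listed in the signal table above.
from datalabs import load_dataset

# Load the Rotten Tomatoes (review, rating) signal; split names may vary.
dataset = load_dataset("rst", "rotten_tomatoes_sentiment")
print(dataset)
```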
fcfd1e5ec3fbcd1cec2452260b89f125
yasu320001/xlm-roberta-base-finetuned-panx-all
yasu320001
xlm-roberta
10
3
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,319
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1656 - F1: 0.8589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2905 | 1.0 | 715 | 0.1783 | 0.8310 | | 0.1461 | 2.0 | 1430 | 0.1600 | 0.8455 | | 0.0948 | 3.0 | 2145 | 0.1656 | 0.8589 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0+cu116 - Datasets 1.16.1 - Tokenizers 0.10.3
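Since the card does not yet include a usage snippet, the following is a minimal sketch of running this fine-tuned checkpoint for NER-style token classification with `transformers`; the example sentence is only illustrative.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Hedged sketch: multilingual NER with the PAN-X fine-tuned checkpoint.
model_id = "yasu320001/xlm-roberta-base-finetuned-panx-all"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")  # merge sub-word pieces into entity spans
print(ner("Angela Merkel besuchte Paris im Jahr 2019."))  # "Angela Merkel visited Paris in 2019."
```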
b40cc788b02505378c0968c2892433e8
julien-c/kan-bayashi-jsut_tts_train_tacotron2
julien-c
null
17
3
espnet
0
text-to-speech
false
false
false
cc-by-4.0
['ja']
['jsut']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'text-to-speech']
false
true
true
1,903
false
## Example ESPnet2 TTS model ### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave` ♻️ Imported from https://zenodo.org/record/4381098/ This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/). ### Training ![](./exp/tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent/images/attn_loss.png) ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
3be44f9755f55eea813472fbb092214a
Helsinki-NLP/opus-mt-zlw-fiu
Helsinki-NLP
marian
12
9
transformers
0
translation
true
true
false
apache-2.0
['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu']
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
3,471
false
### zlw-fiu * source language name: West Slavic languages * target language name: Finno-Ugrian languages * OPUS readme: [README.md](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/README.md) * model: transformer * source language codes: dsb, cs, csb_Latn, hsb, pl, zlw * target language codes: hu, vro, fi, liv_Latn, mdf, krl, fkv_Latn, mhr, et, sma, udm, vep, myv, kpv, se, izh, fiu * dataset: opus * release date: 2021-02-18 * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2021-02-18.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/zlw-fiu/opus-2021-02-18.zip) * a sentence-initial language token is required in the form of >>id<<(id = valid, usually three-letter target language ID) * Training data: * ces-fin: Tatoeba-train (1000000) * ces-hun: Tatoeba-train (1000000) * pol-est: Tatoeba-train (1000000) * pol-fin: Tatoeba-train (1000000) * pol-hun: Tatoeba-train (1000000) * Validation data: * ces-fin: Tatoeba-dev, 1000 * ces-hun: Tatoeba-dev, 1000 * est-pol: Tatoeba-dev, 1000 * fin-pol: Tatoeba-dev, 1000 * hun-pol: Tatoeba-dev, 1000 * mhr-pol: Tatoeba-dev, 461 * total-size-shuffled: 5426 * devset-selected: top 5000 lines of Tatoeba-dev.src.shuffled! * Test data: * newssyscomb2009.ces-hun: 502/9733 * newstest2009.ces-hun: 2525/54965 * Tatoeba-test.ces-fin: 88/408 * Tatoeba-test.ces-hun: 1911/10336 * Tatoeba-test.multi-multi: 4562/25497 * Tatoeba-test.pol-chm: 5/36 * Tatoeba-test.pol-est: 15/98 * Tatoeba-test.pol-fin: 609/3293 * Tatoeba-test.pol-hun: 1934/11285 * test set translations file: [test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/zlw-fiu/opus-2021-02-18.test.txt) * test set scores file: [eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/zlw-fiu/opus-2021-02-18.eval.txt) * BLEU-scores |Test set|score| |---|---| |Tatoeba-test.ces-fin|57.2| |Tatoeba-test.ces-hun|42.6| |Tatoeba-test.multi-multi|39.4| |Tatoeba-test.pol-hun|36.6| |Tatoeba-test.pol-fin|36.1| |Tatoeba-test.pol-est|20.9| |newssyscomb2009.ces-hun|13.9| |newstest2009.ces-hun|13.9| |Tatoeba-test.pol-chm|2.0| * chr-F-scores |Test set|score| |---|---| |Tatoeba-test.ces-fin|0.71| |Tatoeba-test.ces-hun|0.637| |Tatoeba-test.multi-multi|0.616| |Tatoeba-test.pol-hun|0.605| |Tatoeba-test.pol-fin|0.592| |newssyscomb2009.ces-hun|0.449| |newstest2009.ces-hun|0.443| |Tatoeba-test.pol-est|0.372| |Tatoeba-test.pol-chm|0.007| ### System Info: * hf_name: zlw-fiu * source_languages: dsb,cs,csb_Latn,hsb,pl,zlw * target_languages: hu,vro,fi,liv_Latn,mdf,krl,fkv_Latn,mhr,et,sma,udm,vep,myv,kpv,se,izh,fiu * opus_readme_url: https://object.pouta.csc.fi/Tatoeba-MT-models/zlw-fiu/opus-2021-02-18.zip/README.md * original_repo: Tatoeba-Challenge * tags: ['translation'] * languages: ['dsb', 'cs', 'csb_Latn', 'hsb', 'pl', 'zlw', 'hu', 'vro', 'fi', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'et', 'sma', 'udm', 'vep', 'myv', 'kpv', 'se', 'izh', 'fiu'] * src_constituents: ['dsb', 'ces', 'csb_Latn', 'hsb', 'pol'] * tgt_constituents: ['hun', 'vro', 'fin', 'liv_Latn', 'mdf', 'krl', 'fkv_Latn', 'mhr', 'est', 'sma', 'udm', 'vep', 'myv', 'kpv', 'sme', 'izh'] * src_multilingual: True * tgt_multilingual: True * helsinki_git_sha: a0966db6db0ae616a28471ff0faf461b36fec07d * transformers_git_sha: 3857f2b4e34912c942694489c2b667d9476e55f5 * port_machine: bungle * port_time: 2021-06-29-15:24
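Because the required sentence-initial target-language token is easy to miss, here is a hedged sketch of translating Czech to Finnish with this checkpoint via `transformers`; the `>>fin<<` prefix is assumed to be the correct three-letter Finnish ID based on the target constituents listed above.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: the >>fin<< prefix selects Finnish as the target language.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zlw-fiu")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zlw-fiu")

text = ">>fin<< Dobrý den, jak se máte?"  # Czech: "Hello, how are you?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```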
d32abd34e24a214c6df0bc3ed4967e8f
Helsinki-NLP/opus-mt-en-de
Helsinki-NLP
marian
12
147,323
transformers
9
translation
true
true
true
cc-by-4.0
null
null
null
2
0
2
0
1
0
1
['translation']
false
true
true
3,263
false
### opus-mt-en-de ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citation-information) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation - **Language(s):** - Source Language: English - Target Language: German - **License:** CC-BY-4.0 - **Resources for more information:** - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) ## Uses #### Direct Use This model can be used for translation and text-to-text generation. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Further details about the dataset for this model can be found in the OPUS readme: [en-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-de/README.md) #### Training Data ##### Preprocessing * pre-processing: normalization + SentencePiece * dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT) * download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.zip) * test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.test.txt) ## Evaluation #### Results * test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-de/opus-2020-02-26.eval.txt) #### Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009.en.de | 23.5 | 0.540 | | news-test2008.en.de | 23.5 | 0.529 | | newstest2009.en.de | 22.3 | 0.530 | | newstest2010.en.de | 24.9 | 0.544 | | newstest2011.en.de | 22.5 | 0.524 | | newstest2012.en.de | 23.0 | 0.525 | | newstest2013.en.de | 26.9 | 0.553 | | newstest2015-ende.en.de | 31.1 | 0.594 | | newstest2016-ende.en.de | 37.0 | 0.636 | | newstest2017-ende.en.de | 29.9 | 0.586 | | newstest2018-ende.en.de | 45.2 | 0.690 | | newstest2019-ende.en.de | 40.9 | 0.654 | | Tatoeba.en.de | 47.3 | 0.664 | ## Citation Information ```bibtex @InProceedings{TiedemannThottingal:EAMT2020, author = {J{\"o}rg Tiedemann and Santhosh Thottingal}, title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld}, booktitle = {Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT)}, year = {2020}, address = {Lisbon, Portugal} } ``` ## How to Get Started With the Model ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de") model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de") ```
3349a9b424364b03df86ff3f6db70e6d
Helsinki-NLP/opus-mt-bem-en
Helsinki-NLP
marian
10
13
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-bem-en * source languages: bem * target languages: en * OPUS readme: [bem-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.bem.en | 33.4 | 0.491 |
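This card lists no usage snippet, so below is a minimal sketch of running the checkpoint with `transformers`, following the same pattern used for other OPUS-MT models; the Bemba input is only a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch: Bemba -> English translation with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-bem-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-bem-en")

inputs = tokenizer("Mwashibukeni", return_tensors="pt")  # placeholder Bemba greeting
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```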
66b4f9b2f6185b2cfd2320470fd365eb
sd-concepts-library/rd-chaos
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,008
false
### RD chaos on Stable Diffusion This is the `<rd-chaos>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<rd-chaos> 0](https://huggingface.co/sd-concepts-library/rd-chaos/resolve/main/concept_images/3.jpeg) ![<rd-chaos> 1](https://huggingface.co/sd-concepts-library/rd-chaos/resolve/main/concept_images/0.jpeg) ![<rd-chaos> 2](https://huggingface.co/sd-concepts-library/rd-chaos/resolve/main/concept_images/1.jpeg) ![<rd-chaos> 3](https://huggingface.co/sd-concepts-library/rd-chaos/resolve/main/concept_images/2.jpeg)
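Beyond the linked notebooks, a hedged sketch of loading the concept with `diffusers` is shown below; it assumes a recent `diffusers` release that provides `load_textual_inversion`, and uses `runwayml/stable-diffusion-v1-5` purely as an example base model.

```python
from diffusers import StableDiffusionPipeline

# Hedged sketch: the base model choice is an assumption, any SD 1.x pipeline should work.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/rd-chaos")

image = pipe("a landscape painting in the style of <rd-chaos>").images[0]
image.save("rd-chaos-example.png")
```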
fbdd2647aa22012c596cfa2fdd2b7dfe
Sushant45/Pub-clustered
Sushant45
distilbert
8
24
transformers
0
question-answering
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,858
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Sushant45/Pub-clustered This model is a fine-tuned version of [nandysoham16/16-clustered_aug](https://huggingface.co/nandysoham16/16-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3589 - Train End Logits Accuracy: 0.8889 - Train Start Logits Accuracy: 0.8924 - Validation Loss: 0.4049 - Validation End Logits Accuracy: 0.9231 - Validation Start Logits Accuracy: 0.9231 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.3589 | 0.8889 | 0.8924 | 0.4049 | 0.9231 | 0.9231 | 0 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.9.0 - Tokenizers 0.13.2
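The repository ships TensorFlow weights only, so the sketch below loads the checkpoint with the TF classes and wraps it in a question-answering pipeline; the question/context pair is purely illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering, pipeline

# Hedged sketch: explicit TF classes because only TensorFlow weights are published.
tokenizer = AutoTokenizer.from_pretrained("Sushant45/Pub-clustered")
model = TFAutoModelForQuestionAnswering.from_pretrained("Sushant45/Pub-clustered")

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Who wrote the report?",
         context="The report was written by the committee in 2020."))
```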
4a0bb1026f005adee00c5ef975f12aa4
doyoungkim/bert-base-uncased-sst2-distilled
doyoungkim
bert
10
3
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,458
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-sst2-distilled This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2676 - Accuracy: 0.9025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3797 | 1.0 | 2105 | 0.2512 | 0.9002 | | 0.3036 | 2.0 | 4210 | 0.2643 | 0.8933 | | 0.2609 | 3.0 | 6315 | 0.2831 | 0.8956 | | 0.2417 | 4.0 | 8420 | 0.2676 | 0.9025 | | 0.2305 | 5.0 | 10525 | 0.2740 | 0.9025 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.8.1 - Datasets 1.11.0 - Tokenizers 0.10.1
67005e8ff25b177aa581c743aed8d631
vumichien/whisper-medium-mix-jp-ver2
vumichien
whisper
22
22
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,971
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2790 - Wer: 8.3986 - Cer: 5.2582 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:| | 0.1691 | 1.01 | 1000 | 0.1871 | 10.1740 | 6.3509 | | 0.0916 | 2.02 | 2000 | 0.1691 | 8.9797 | 5.5499 | | 0.0452 | 3.03 | 3000 | 0.1902 | 8.9814 | 5.5867 | | 0.0213 | 4.04 | 4000 | 0.2062 | 8.9375 | 5.6531 | | 0.0096 | 5.05 | 5000 | 0.2284 | 8.7331 | 5.6202 | | 0.0041 | 6.05 | 6000 | 0.2395 | 8.5051 | 5.3009 | | 0.0022 | 7.06 | 7000 | 0.2535 | 8.5507 | 5.3640 | | 0.001 | 8.07 | 8000 | 0.2656 | 8.5557 | 5.3791 | | 0.0006 | 9.08 | 9000 | 0.2721 | 8.4037 | 5.2739 | | 0.0004 | 10.09 | 10000 | 0.2790 | 8.3986 | 5.2582 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
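No inference example is included, so below is a hedged sketch using the `transformers` ASR pipeline; the audio path is a placeholder and the input should be 16 kHz speech.

```python
from transformers import pipeline

# Hedged sketch: transcribe Japanese speech with the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="vumichien/whisper-medium-mix-jp-ver2",
    chunk_length_s=30,  # optional: handle clips longer than 30 s
)

print(asr("sample_japanese_audio.wav")["text"])  # placeholder file path
```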
ae18c9e98fa8a7984f72e60e1f4276b9
Helsinki-NLP/opus-mt-de-pag
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-de-pag * source languages: de * target languages: pag * OPUS readme: [de-pag](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pag/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pag/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.de.pag | 24.3 | 0.469 |
e7ef6277167414aa2d682e6751f49eae
Aldraz/distilbert-base-uncased-finetuned-emotion
Aldraz
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,339
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2319 - Accuracy: 0.921 - F1: 0.9214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3369 | 0.8985 | 0.8947 | | No log | 2.0 | 500 | 0.2319 | 0.921 | 0.9214 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.9.1+cpu - Datasets 2.1.0 - Tokenizers 0.11.6
bb04fc9bfab40f6c1b10b0bec9c105af
jonatasgrosman/exp_w2v2t_ja_vp-es_s673
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ja']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'ja']
false
true
true
469
false
# exp_w2v2t_ja_vp-es_s673 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
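Since the card mentions HuggingSound but shows no code, here is a minimal transcription sketch with that tool; the audio paths are placeholders and should be 16 kHz recordings as noted above.

```python
from huggingsound import SpeechRecognitionModel

# Hedged sketch: HuggingSound wraps the fine-tuned wav2vec2 checkpoint for inference.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ja_vp-es_s673")

audio_paths = ["sample1.wav", "sample2.wav"]  # placeholder 16 kHz audio files
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```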
bd5691055bd1ccb45d6290726e073e61
robertou2/roberta-base-bne-finetuned-amazon_reviews_multi
robertou2
roberta
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['amazon_reviews_multi']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,346
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2368 - Accuracy: 0.9325 ## Model description Test model for session 4 of the NLP de 0 a 100 course ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1919 | 1.0 | 1250 | 0.1690 | 0.933 | | 0.0972 | 2.0 | 2500 | 0.2368 | 0.9325 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.4 - Tokenizers 0.11.6
22d6e31ce63302c720ff28c6fd1e0c13
espnet/ftshijt_mls_asr_transformer_valid.acc.best
espnet
null
31
3
espnet
0
automatic-speech-recognition
false
false
false
cc-by-4.0
['es']
['mls']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'automatic-speech-recognition']
false
true
true
1,805
false
## Example ESPnet2 ASR model ### `ftshijt/mls_asr_transformer_valid.acc.best` ♻️ Imported from https://zenodo.org/record/4458452/ This model was trained by ftshijt using mls/asr1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```python # coming soon ``` ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
2ec5a13d6a1504315ff87a708628d1d8
mariagrandury/roberta-base-finetuned-sms-spam-detection
mariagrandury
roberta
13
333
transformers
2
text-classification
true
false
false
mit
null
['sms_spam']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,274
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-sms-spam-detection This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the sms_spam dataset. It achieves the following results on the evaluation set: - Loss: 0.0133 - Accuracy: 0.998 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0363 | 1.0 | 250 | 0.0156 | 0.996 | | 0.0147 | 2.0 | 500 | 0.0133 | 0.998 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
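A minimal usage sketch with the `transformers` text-classification pipeline follows, assuming the checkpoint's default label mapping; the example messages are illustrative.

```python
from transformers import pipeline

# Hedged sketch: classify SMS messages as spam or ham with this checkpoint.
classifier = pipeline(
    "text-classification",
    model="mariagrandury/roberta-base-finetuned-sms-spam-detection",
)

print(classifier([
    "Congratulations! You have won a free prize, call now to claim.",
    "Hey, are we still meeting for lunch tomorrow?",
]))
```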
80c106671253535fbeadaad7b3cab90d
anas-awadalla/gpt2-span-head-few-shot-k-512-finetuned-squad-seed-4
anas-awadalla
gpt2
20
5
transformers
0
question-answering
true
false
false
mit
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
966
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-span-head-few-shot-k-512-finetuned-squad-seed-4 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.11.6
9206421965902a33ca5564a0cfef8685
mrojas/roberta-clinical-wl-es-finetuned-ner
mrojas
roberta
14
8
transformers
0
token-classification
true
false
false
apache-2.0
null
['wl']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,559
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-clinical-wl-es-finetuned-ner This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset. It achieves the following results on the evaluation set: - Loss: 0.6227 - Precision: 0.6865 - Recall: 0.7355 - F1: 0.7102 - Accuracy: 0.8268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.028 | 1.0 | 500 | 0.6870 | 0.6558 | 0.6855 | 0.6703 | 0.8035 | | 0.5923 | 2.0 | 1000 | 0.6248 | 0.6851 | 0.7235 | 0.7038 | 0.8244 | | 0.4928 | 3.0 | 1500 | 0.6227 | 0.6865 | 0.7355 | 0.7102 | 0.8268 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
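As the card lacks an inference example, the sketch below runs the checkpoint through the token-classification pipeline; the Spanish clinical sentence is only a placeholder.

```python
from transformers import pipeline

# Hedged sketch: Spanish clinical NER with the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="mrojas/roberta-clinical-wl-es-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

# "Patient with abdominal pain and fever for the past two days."
print(ner("Paciente con dolor abdominal y fiebre desde hace dos días."))
```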
fcd5e320e441f696671b369b980f379b
RANG012/SENATOR
RANG012
distilbert
13
6
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,022
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SENATOR This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2707 - Accuracy: 0.916 - F1: 0.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
fc98eb5d4a4703c0179b8cbbb6029114
dfurman/Swin-base-chesapeake-land-cover-v0
dfurman
swin
14
7
transformers
0
image-classification
true
false
false
apache-2.0
null
['imagefolder']
null
0
0
0
0
0
0
0
['image-classification', 'generated_from_trainer']
true
true
true
1,436
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Swin-base-chesapeake-land-cover-v0 This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0430 - Accuracy: 0.9899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0326 | 1.15 | 100 | 0.1309 | 0.9588 | | 0.0102 | 2.3 | 200 | 0.0430 | 0.9899 | | 0.0082 | 3.45 | 300 | 0.0466 | 0.9914 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
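A brief inference sketch with the image-classification pipeline is shown below; the image path is a placeholder for a Chesapeake land-cover tile.

```python
from transformers import pipeline

# Hedged sketch: classify a land-cover image tile with the fine-tuned Swin model.
classifier = pipeline(
    "image-classification",
    model="dfurman/Swin-base-chesapeake-land-cover-v0",
)

print(classifier("example_tile.png"))  # placeholder image path
```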
873981c1da3378cb5cc8c84e85cb567d
explosion/vi_udv25_vietnamesevtb_trf
explosion
null
28
2
spacy
0
token-classification
false
false
false
cc-by-sa-4.0
['vi']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
2,085
false
UD v2.5 benchmarking pipeline for UD_Vietnamese-VTB | Feature | Description | | --- | --- | | **Name** | `vi_udv25_vietnamesevtb_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (81 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `!`, `"`, `,`, `-`, `.`, `...`, `:`, `;`, `?`, `@`, `A`, `C`, `CC`, `E`, `I`, `L`, `LBKT`, `M`, `N`, `NP`, `Nb`, `Nc`, `Np`, `Nu`, `Ny`, `P`, `R`, `RBKT`, `T`, `V`, `VP`, `X`, `Y`, `Z` | | **`morphologizer`** | `POS=NOUN`, `POS=ADP`, `POS=X\|Polarity=Neg`, `POS=VERB`, `POS=ADJ`, `POS=PUNCT`, `POS=X`, `POS=SCONJ`, `NumType=Card\|POS=NUM`, `POS=DET`, `POS=CCONJ`, `POS=PROPN`, `POS=AUX`, `POS=PART`, `POS=INTJ` | | **`parser`** | `ROOT`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `0` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 87.90 | | `TOKEN_P` | 86.84 | | `TOKEN_R` | 89.00 | | `TOKEN_ACC` | 98.42 | | `SENTS_F` | 94.33 | | `SENTS_P` | 96.23 | | `SENTS_R` | 92.50 | | `TAG_ACC` | 88.05 | | `POS_ACC` | 90.19 | | `MORPH_ACC` | 96.95 | | `DEP_UAS` | 68.08 | | `DEP_LAS` | 60.64 | | `LEMMA_ACC` | 89.35 |
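A hedged usage sketch is given below; it assumes the pipeline package from this repository has already been installed into the current environment, after which it can be loaded by name.

```python
import spacy

# Hedged sketch: assumes the vi_udv25_vietnamesevtb_trf package is installed locally.
nlp = spacy.load("vi_udv25_vietnamesevtb_trf")

doc = nlp("Hà Nội là thủ đô của Việt Nam.")  # "Hanoi is the capital of Vietnam."
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)
```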
21ad063c6f1a6f4fce967aec44a20ae2
Meow412/finetuning-sentiment-model-A3
Meow412
distilbert
13
5
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,045
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-A3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3212 - Accuracy: 0.8760 - F1: 0.3516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.2
c8aebbdc342ed781d4ba3c705a2111c0
PaddlePaddle/ernie-2.0-large-zh
PaddlePaddle
ernie
7
0
paddlenlp
0
null
false
false
false
apache-2.0
['zh']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,590
false
# PaddlePaddle/ernie-2.0-large-zh ## Introduction Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing. Current pre-training procedures usually focus on training the model with several simple tasks to grasp the co-occurrence of words or sentences. However, besides co-occurring, there exists other valuable lexical, syntactic and semantic information in training corpora, such as named entity, semantic closeness and discourse relations. In order to extract to the fullest extent, the lexical, syntactic and semantic information from training corpora, we propose a continual pre-training framework named ERNIE 2.0 which builds and learns incrementally pre-training tasks through constant multi-task learning. Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on 16 tasks including English tasks on GLUE benchmarks and several common tasks in Chinese. More detail: https://arxiv.org/abs/1907.12412 ## Available Models - ernie-2.0-base-en - ernie-2.0-large-en - ernie-2.0-base-zh - ernie-2.0-large-zh ## How to Use? Click on the *Use in paddlenlp* button on the top right! ## Citation Info ```text @article{ernie2.0, title = {ERNIE 2.0: A Continual Pre-training Framework for Language Understanding}, author = {Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Tian, Hao and Wu, Hua and Wang, Haifeng}, journal={arXiv preprint arXiv:1907.12412}, year = {2019}, } ```
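For readers not using the button, a hedged sketch of loading the model with PaddleNLP is given below; it assumes a working PaddleNLP installation and that `ernie-2.0-large-zh` resolves as a built-in model name in its registry.

```python
import paddle
from paddlenlp.transformers import ErnieModel, ErnieTokenizer

# Hedged sketch: the model-name resolution is an assumption about PaddleNLP's registry.
tokenizer = ErnieTokenizer.from_pretrained("ernie-2.0-large-zh")
model = ErnieModel.from_pretrained("ernie-2.0-large-zh")

inputs = tokenizer("欢迎使用百度飞桨!")  # "Welcome to PaddlePaddle!"
inputs = {k: paddle.to_tensor([v]) for k, v in inputs.items()}
sequence_output, pooled_output = model(**inputs)
print(sequence_output.shape)
```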
442869cbd7f31d29188e3be2e5b9040c
HyperMoon/wav2vec2-base-finetuned-deepfake-0919
HyperMoon
wav2vec2
10
12
transformers
0
audio-classification
true
false
false
apache-2.0
null
['asvspoof2019']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,574
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-deepfake-0919 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the asvspoof2019 dataset. It achieves the following results on the evaluation set: - Loss: 0.3335 - Accuracy: 0.8974 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3025 | 1.0 | 1586 | 0.3335 | 0.8974 | | 0.4214 | 2.0 | 3172 | 0.3331 | 0.8974 | | 0.4378 | 3.0 | 4758 | 0.3307 | 0.8974 | | 0.3993 | 4.0 | 6344 | 0.3331 | 0.8974 | | 0.2839 | 5.0 | 7930 | 0.3315 | 0.8974 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
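An inference sketch with the audio-classification pipeline follows; the audio path is a placeholder and inputs should match the 16 kHz sampling rate expected by wav2vec2.

```python
from transformers import pipeline

# Hedged sketch: score an audio clip as bona fide vs. spoofed speech.
classifier = pipeline(
    "audio-classification",
    model="HyperMoon/wav2vec2-base-finetuned-deepfake-0919",
)

print(classifier("utterance.wav"))  # placeholder path to a 16 kHz clip
```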
24043b57b27f66a03239031e805ef5ab
commanderstrife/ADE-Bio_ClinicalBERT-NER
commanderstrife
bert
12
7
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,739
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ADE-Bio_ClinicalBERT-NER This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1926 - Precision: 0.7830 - Recall: 0.8811 - F1: 0.8291 - Accuracy: 0.9437 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2389 | 1.0 | 201 | 0.2100 | 0.7155 | 0.8292 | 0.7681 | 0.9263 | | 0.0648 | 2.0 | 402 | 0.1849 | 0.7716 | 0.8711 | 0.8183 | 0.9392 | | 0.2825 | 3.0 | 603 | 0.1856 | 0.7834 | 0.8788 | 0.8284 | 0.9422 | | 0.199 | 4.0 | 804 | 0.1875 | 0.7796 | 0.8781 | 0.8259 | 0.9430 | | 0.0404 | 5.0 | 1005 | 0.1926 | 0.7830 | 0.8811 | 0.8291 | 0.9437 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
58f04d17b09f3258645dcdeeeeb2b8d0
spacy/es_core_news_lg
spacy
null
28
38
spacy
1
token-classification
false
false
false
gpl-3.0
['es']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
29,913
false
### Details: https://spacy.io/models/es#es_core_news_lg Spanish pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `es_core_news_lg` | | **Version** | `3.5.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) | | **Sources** | [UD Spanish AnCora v2.8](https://github.com/UniversalDependencies/UD_Spanish-AnCora) (Martínez Alonso, Héctor; Zeman, Daniel)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran)<br />[spaCy lookups data](https://github.com/explosion/spacy-lookups-data) (Explosion)<br />[Explosion fastText Vectors (cbow, OSCAR Common Crawl + Wikipedia)](https://spacy.io) (Explosion) | | **License** | `GNU GPL 3.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (468 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=ADP`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PROPN`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=PRON\|PronType=Int,Rel`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=SCONJ`, `POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Number=Plur\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=PUNCT\|PunctType=Peri`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Comm`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `POS=ADV`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `POS=PRON\|PronType=Ind`, `POS=ADV\|Polarity=Neg`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `POS=PUNCT\|PunctType=Quot`, `POS=PUNCT`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, 
`POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumType=Card\|POS=NUM`, `POS=VERB\|VerbForm=Ger`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADV`, `POS=AUX\|VerbForm=Inf`, `Number=Plur\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Dem`, `POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `AdvType=Tim\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `NumForm=Digit\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|POS=NOUN`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `AdvType=Tim\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, 
`Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `POS=SPACE`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Colo`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PUNCT\|PunctType=Semi`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `POS=PUNCT\|PunctType=Dash`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Case=Acc\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=NOUN\|VerbForm=Inf`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `POS=DET\|PronType=Ind`, `POS=DET\|PronType=Int,Rel`, `AdvType=Tim\|POS=ADV`, `POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Degree=Abs\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, 
`Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Case=Dat\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=SCONJ\|PronType=Int,Rel`, `Case=Acc,Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc,Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=SYM`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, 
`Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Com\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=NOUN\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Mood=Imp\|Number=Plur,Sing\|POS=VERB\|Person=1,2\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Case=Acc\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Number=Sing\|POS=VERB\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Abs\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc,Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Fin`, `POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Acc,Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Acc,Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Definite=Def\|Foreign=Yes\|POS=DET\|PronType=Art`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|PunctType=Quot\|VerbForm=Inf`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Com\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Dat\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc,Dat\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc,Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=PRON\|PronType=Tot`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, 
`POS=PRON\|PronType=Dem`, `Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `POS=AUX\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Acc,Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `AdvType=Tim\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN\|VerbForm=Part`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Case=Acc,Dat\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Com\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `POS=X`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Case=Com\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc,Dat\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Number=Sing\|POS=AUX\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Case=Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, 
`Case=Dat\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=NOUN\|PunctType=Comm`, `Degree=Cmp\|POS=ADJ`, `Gender=Masc\|POS=ADJ`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `POS=PRON\|PronType=Neg`, `Case=Acc,Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Int,Rel`, `Definite=Def\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADP`, `Foreign=Yes\|POS=CCONJ`, `Foreign=Yes\|POS=PROPN` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:impers`, `expl:pass`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`ner`** | `LOC`, `MISC`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 100.00 | | `TOKEN_P` | 99.89 | | `TOKEN_R` | 99.95 | | `TOKEN_F` | 99.92 | | `POS_ACC` | 98.51 | | `MORPH_ACC` | 98.19 | | `MORPH_MICRO_P` | 99.56 | | `MORPH_MICRO_R` | 98.98 | | `MORPH_MICRO_F` | 99.27 | | `SENTS_P` | 96.90 | | `SENTS_R` | 98.50 | | `SENTS_F` | 97.70 | | `DEP_UAS` | 91.40 | | `DEP_LAS` | 88.19 | | `TAG_ACC` | 96.14 | | `LEMMA_ACC` | 96.58 | | `ENTS_P` | 89.67 | | `ENTS_R` | 89.78 | | `ENTS_F` | 89.72 |
5f9542c3e3a9fc63cd00b1b3d4eca7e9
susnato/xlm-roberta-base-finetuned-panx-it
susnato
xlm-roberta
9
12
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,317
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3544 - F1: 0.8235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7074 | 1.0 | 210 | 0.4237 | 0.7311 | | 0.3172 | 2.0 | 420 | 0.3662 | 0.7820 | | 0.1855 | 3.0 | 630 | 0.3544 | 0.8235 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
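## Usage

A minimal inference sketch, assuming the checkpoint works with the standard 🤗 Transformers token-classification `pipeline`; the Italian example sentence and the `aggregation_strategy` setting are illustrative choices, not taken from the training setup above.

```python
from transformers import pipeline

# Load the PAN-X (Italian) fine-tuned checkpoint for named-entity recognition.
ner = pipeline(
    "token-classification",
    model="susnato/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Illustrative Italian sentence.
print(ner("Giuseppe Verdi è nato vicino a Busseto, in Italia."))
```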
6ccf9c659b4c1194aa6361ac89e2f69d
anas-awadalla/albert-xxl-v2-finetuned-squad
anas-awadalla
albert
16
3
transformers
1
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
960
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-xxl-v2-finetuned-squad This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.11.6
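## Usage

A minimal extractive question-answering sketch, assuming the standard 🤗 Transformers `pipeline` API; the question/context pair is an illustrative placeholder rather than part of the original card.

```python
from transformers import pipeline

# SQuAD-style extractive QA with the fine-tuned ALBERT xxlarge v2 checkpoint.
qa = pipeline("question-answering", model="anas-awadalla/albert-xxl-v2-finetuned-squad")

# Illustrative inputs: the answer is extracted as a span of the context.
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This ALBERT xxlarge v2 model was fine-tuned on the SQuAD dataset for two epochs.",
)
print(result["answer"], round(result["score"], 3))
```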
eac60fc8c8007539bbef8a54e45c3d87
GItaf/roberta-base-roberta-base-finetuned-mbti-0911
GItaf
roberta
13
2
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,079
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-roberta-base-finetuned-mbti-0911 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 4.1338 - eval_runtime: 25.7058 - eval_samples_per_second: 67.495 - eval_steps_per_second: 8.442 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
00675fe601a9bd00daa8e01bd176f50b
Helsinki-NLP/opus-mt-lua-fr
Helsinki-NLP
marian
10
20
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-lua-fr * source languages: lua * target languages: fr * OPUS readme: [lua-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lua-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lua-fr/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.lua.fr | 25.7 | 0.429 |
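## Example usage

A minimal translation sketch, assuming the checkpoint loads with the standard 🤗 Transformers Marian classes; the source sentence is only a placeholder to be replaced with real Luba-Lulua (lua) text.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lua-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Placeholder input: substitute a real Luba-Lulua sentence here.
src_text = ["<Luba-Lulua sentence>"]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```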
cbfdd14a9d4fcbd38c473db0db3d9918
nestoralvaro/mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base
nestoralvaro
mt5
12
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,477
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-base-finetuned-xsum-data_prep_2021_12_26___t1_7.csv___topic_text_google_mt5_base This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 2.8146 - Rouge2: 0.6707 - Rougel: 2.8187 - Rougelsum: 2.8098 - Gen Len: 6.4901 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.0 | 1.0 | 3869 | nan | 2.8146 | 0.6707 | 2.8187 | 2.8098 | 6.4901 | ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
d9dc50c20ce5b69307b0a49ae972e53b
FolkFoxWalker/my_awesome_billsum_model
FolkFoxWalker
t5
11
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
['billsum']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,703
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5057 - Rouge1: 0.1437 - Rouge2: 0.0544 - Rougel: 0.12 - Rougelsum: 0.12 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.7978 | 0.1276 | 0.039 | 0.1077 | 0.1076 | 19.0 | | No log | 2.0 | 124 | 2.5889 | 0.1371 | 0.0489 | 0.1153 | 0.1151 | 19.0 | | No log | 3.0 | 186 | 2.5234 | 0.1429 | 0.054 | 0.1196 | 0.1194 | 19.0 | | No log | 4.0 | 248 | 2.5057 | 0.1437 | 0.0544 | 0.12 | 0.12 | 19.0 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
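## Usage

A minimal summarization sketch using the tokenizer and model directly; the `summarize:` prefix follows the usual T5 convention, and the input text and generation length are illustrative assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "FolkFoxWalker/my_awesome_billsum_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5-style checkpoints are usually prompted with the task prefix used at training time.
text = "summarize: " + (
    "The bill directs the Department of Energy to establish a grant program for "
    "state and local governments to improve the energy efficiency of public buildings."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```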
073bde416531cf0f13ed48f2794cdfd6
mabaji/thepoet
mabaji
gpt2
8
15
transformers
0
text-generation
true
false
false
apache-2.0
['ar']
null
null
0
0
0
0
0
0
0
['text-generation']
false
true
true
724
false
Thepoet is an Arabic poem generator: a pre-trained language model based on the OpenAI GPT-2 architecture. Special thanks to aubmindlab for their pretrained Arabic model AraGPT2-large (https://huggingface.co/aubmindlab/aragpt2-large): adafactor optimizer, context size 1024, embedding size 1280, 20 attention heads, 36 layers, 2.98GB/792M parameters. The model was trained on two large Arabic poem datasets from Kaggle: the 512MB Arabic Poem Comprehensive Dataset (APCD) (https://www.kaggle.com/datasets/mohamedkhaledelsafty/best-arabic-poem-comprehensive-dataset) and the 150MB Arabic Poem Dataset (https://www.kaggle.com/datasets/ahmedabelal/arabic-poetry). ## Eval results Final perplexity reached was 119.5661 ### BibTeX entry and citation info ```bibtex @inproceedings{thepoet2022, author={Mohamad El Abaji}, year={2022} } ```
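## Usage

A minimal generation sketch, assuming the checkpoint behaves like a standard GPT-2 style causal language model under the 🤗 Transformers text-generation `pipeline`; the prompt (the opening words of a classical Arabic poem) and the sampling settings are illustrative choices only.

```python
from transformers import pipeline

# Load the poem generator as a causal language model (assumed GPT-2 compatible).
generator = pipeline("text-generation", model="mabaji/thepoet")

# Illustrative prompt; sampling parameters are arbitrary starting points.
prompt = "قفا نبك"
outputs = generator(prompt, max_length=64, do_sample=True, top_p=0.95, temperature=0.9)
print(outputs[0]["generated_text"])
```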
6cb93547fc78a4925d75ca36ced9d480
Geotrend/distilbert-base-tr-cased
Geotrend
distilbert
6
5
transformers
0
fill-mask
true
false
false
apache-2.0
['tr']
['wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
1,215
false
# distilbert-base-tr-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). ## How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-tr-cased") model = AutoModel.from_pretrained("Geotrend/distilbert-base-tr-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). ### How to cite ```bibtex @inproceedings{smallermdistilbert, title={Load What You Need: Smaller Versions of Mutlilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ``` ## Contact Please contact [email protected] for any question, feedback or request.
9d5de1a7d32d081c9bb24dbd9f835c39
alexamiredjibi/Multimodal-Trajectory-Classifier-30
alexamiredjibi
distilbert
8
0
transformers
0
null
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
997
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Multimodal-Trajectory-Classifier-30 This model is a fine-tuned version of [alexamiredjibi/Multimodal-Trajectory-Classifier](https://huggingface.co/alexamiredjibi/Multimodal-Trajectory-Classifier) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.7.1 - Tokenizers 0.13.2
d8a3d887f1390f26860a1bd6f9a8bc38
tommy19970714/wav2vec2-base-960h
tommy19970714
wav2vec2
7
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['en']
['librispeech_asr']
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition']
false
true
true
3,629
false
# Wav2Vec2-Base-960h This repository is a reimplementation of [Facebook’s official wav2vec](https://huggingface.co/facebook/wav2vec2-base-960h). There is no description of converting the wav2vec [pretrained model](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20) to a pytorch.bin file. We are rebuilding pytorch.bin from the pretrained model. Here is the conversion method. ```bash pip install transformers[sentencepiece] pip install fairseq -U git clone https://github.com/huggingface/transformers.git cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py . wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt -O ./wav2vec_small_960h.pt mkdir dict wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt mkdir outputs python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small_960h.pt --dict_path ./dict ``` # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Tokenizer, Wav2Vec2ForCTC from datasets import load_dataset import soundfile as sf import torch # load model and tokenizer tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") # define function to read in sound file def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) # tokenize input_values = tokenizer(ds["speech"][:2], return_tensors="pt", padding="longest").input_values # Batch size 2 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = tokenizer.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer import soundfile as sf import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda") tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch librispeech_eval = librispeech_eval.map(map_to_array) def map_to_pred(batch): input_values = tokenizer(batch["speech"], return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = tokenizer.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 3.4 | 8.6 | # Reference [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) [Facebook's huggingface Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h) [Paper](https://arxiv.org/abs/2006.11477)
c72a1eb8a21cde8ca1214642546bd9fd
wolinski/constituency-brackets-20
wolinski
bert
9
42
transformers
0
token-classification
false
true
false
cc-by-4.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
2,619
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # constituency-brackets-20 This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2020 - Train Acc: 0.9356 - Validation Loss: 0.2929 - Validation Acc: 0.9120 - Epoch: 19 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Acc | Validation Loss | Validation Acc | Epoch | |:----------:|:---------:|:---------------:|:--------------:|:-----:| | 2.4703 | 0.3783 | 1.4719 | 0.5858 | 0 | | 1.2149 | 0.6600 | 0.8922 | 0.7269 | 1 | | 0.8721 | 0.7343 | 0.6914 | 0.7779 | 2 | | 0.7186 | 0.7715 | 0.6028 | 0.8037 | 3 | | 0.6239 | 0.7987 | 0.5427 | 0.8240 | 4 | | 0.5432 | 0.8342 | 0.4469 | 0.8677 | 5 | | 0.4521 | 0.8665 | 0.4092 | 0.8760 | 6 | | 0.4100 | 0.8761 | 0.3867 | 0.8819 | 7 | | 0.3792 | 0.8855 | 0.3761 | 0.8849 | 8 | | 0.3526 | 0.8926 | 0.3469 | 0.8938 | 9 | | 0.3304 | 0.8981 | 0.3433 | 0.8944 | 10 | | 0.3091 | 0.9049 | 0.3329 | 0.8977 | 11 | | 0.2935 | 0.9081 | 0.3178 | 0.9028 | 12 | | 0.2769 | 0.9138 | 0.3140 | 0.9032 | 13 | | 0.2614 | 0.9173 | 0.2994 | 0.9114 | 14 | | 0.2472 | 0.9213 | 0.2954 | 0.9128 | 15 | | 0.2344 | 0.9260 | 0.2899 | 0.9142 | 16 | | 0.2229 | 0.9292 | 0.2971 | 0.9092 | 17 | | 0.2136 | 0.9322 | 0.2872 | 0.9143 | 18 | | 0.2020 | 0.9356 | 0.2929 | 0.9120 | 19 | ### Framework versions - Transformers 4.26.0 - TensorFlow 2.9.2 - Datasets 2.8.0 - Tokenizers 0.13.2
f0b922b945695cbf0248efa9c366e8bc
bofenghuang/whisper-medium-cv11-german
bofenghuang
whisper
17
38
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'whisper-event']
true
true
true
4,483
false
<style> img { display: inline; } </style> ![Model architecture](https://img.shields.io/badge/Model_Architecture-seq2seq-lightgrey) ![Model size](https://img.shields.io/badge/Params-769M-lightgrey) ![Language](https://img.shields.io/badge/Language-German-lightgrey) # Fine-tuned whisper-medium model for ASR in German This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium), trained on the mozilla-foundation/common_voice_11_0 de dataset. When using the model make sure that your speech input is also sampled at 16Khz. **This model also predicts casing and punctuation.** ## Performance *Below are the WERs of the pre-trained models on the [Common Voice 9.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0). These results are reported in the original [paper](https://cdn.openai.com/papers/whisper.pdf).* | Model | Common Voice 9.0 | | --- | :---: | | [openai/whisper-small](https://huggingface.co/openai/whisper-small) | 13.0 | | [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 8.5 | | [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) | 6.4 | *Below are the WERs of the fine-tuned models on the [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0).* | Model | Common Voice 11.0 | | --- | :---: | | [bofenghuang/whisper-small-cv11-german](https://huggingface.co/bofenghuang/whisper-small-cv11-german) | 11.35 | | [bofenghuang/whisper-medium-cv11-german](https://huggingface.co/bofenghuang/whisper-medium-cv11-german) | 7.05 | | [bofenghuang/whisper-large-v2-cv11-german](https://huggingface.co/bofenghuang/whisper-large-v2-cv11-german) | **5.76** | ## Usage Inference with 🤗 Pipeline ```python import torch from datasets import load_dataset from transformers import pipeline device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Load pipeline pipe = pipeline("automatic-speech-recognition", model="bofenghuang/whisper-medium-cv11-german", device=device) # NB: set forced_decoder_ids for generation utils pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language="de", task="transcribe") # Load data ds_mcv_test = load_dataset("mozilla-foundation/common_voice_11_0", "de", split="test", streaming=True) test_segment = next(iter(ds_mcv_test)) waveform = test_segment["audio"] # NB: decoding option # limit the maximum number of generated tokens to 225 pipe.model.config.max_length = 225 + 1 # sampling # pipe.model.config.do_sample = True # beam search # pipe.model.config.num_beams = 5 # return # pipe.model.config.return_dict_in_generate = True # pipe.model.config.output_scores = True # pipe.model.config.num_return_sequences = 5 # Run generated_sentences = pipe(waveform)["text"] ``` Inference with 🤗 low-level APIs ```python import torch import torchaudio from datasets import load_dataset from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Load model model = AutoModelForSpeechSeq2Seq.from_pretrained("bofenghuang/whisper-medium-cv11-german").to(device) processor = AutoProcessor.from_pretrained("bofenghuang/whisper-medium-cv11-german", language="german", task="transcribe") # NB: set forced_decoder_ids for generation utils model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="de", task="transcribe") # 16_000 model_sample_rate = processor.feature_extractor.sampling_rate # Load data ds_mcv_test = 
load_dataset("mozilla-foundation/common_voice_11_0", "de", split="test", streaming=True) test_segment = next(iter(ds_mcv_test)) waveform = torch.from_numpy(test_segment["audio"]["array"]) sample_rate = test_segment["audio"]["sampling_rate"] # Resample if sample_rate != model_sample_rate: resampler = torchaudio.transforms.Resample(sample_rate, model_sample_rate) waveform = resampler(waveform) # Get feat inputs = processor(waveform, sampling_rate=model_sample_rate, return_tensors="pt") input_features = inputs.input_features input_features = input_features.to(device) # Generate generated_ids = model.generate(inputs=input_features, max_new_tokens=225) # greedy # generated_ids = model.generate(inputs=input_features, max_new_tokens=225, num_beams=5) # beam search # Detokenize generated_sentences = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] # Normalise predicted sentences if necessary ```
f8b6b0a47375a05365402c0d45255aab
muhtasham/small-mlm-glue-mrpc-custom-tokenizer
muhtasham
bert
12
13
transformers
1
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,411
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-mrpc-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.4085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.9986 | 1.09 | 500 | 6.7224 | | 6.2058 | 2.18 | 1000 | 6.3947 | | 5.981 | 3.27 | 1500 | 6.4669 | | 5.8487 | 4.36 | 2000 | 6.6145 | | 5.7411 | 5.45 | 2500 | 6.4085 | ### Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.8.1.dev0 - Tokenizers 0.13.2
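## Usage

A minimal masked-language-modelling sketch, assuming the standard 🤗 Transformers `fill-mask` pipeline; because the card mentions a custom tokenizer, the mask token is read from the tokenizer rather than hard-coded, and the example sentence is illustrative.

```python
from transformers import AutoTokenizer, pipeline

model_id = "muhtasham/small-mlm-glue-mrpc-custom-tokenizer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
fill = pipeline("fill-mask", model=model_id, tokenizer=tokenizer)

# Use the tokenizer's own mask token instead of assuming "[MASK]".
sentence = f"The agreement was signed by both {tokenizer.mask_token} on Friday."
print(fill(sentence)[:3])  # top-3 predictions
```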
a8f0cba2af6906435d57adc4c955318b
chaitu619/chai_librispeech_asr_train_transducer_v2_raw_en_bpe5000_sp
chaitu619
null
31
0
espnet
0
automatic-speech-recognition
false
false
false
cc-by-4.0
['en']
['librispeech_asr', 'librispeech 960h']
null
0
0
0
0
1
1
0
['espnet', 'audio', 'automatic-speech-recognition']
false
true
true
57,248
false
## ESPnet2 model This model was trained by Chaitanya Narisetty using recipe in [espnet](https://github.com/espnet/espnet/). <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Tue Apr 26 15:33:18 EDT 2022` - python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]` - espnet version: `espnet 202204` - pytorch version: `pytorch 1.8.1+cu111` - Git hash: `8a76ff24eb513d96561fb47d0320dd39c1c3645a` - Commit date: `Tue Apr 19 07:32:58 2022 +0000` ## asr_train_conformer-rnn_transducer_raw_en_bpe5000_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_model_valid.loss.ave_10best/dev_clean|2703|54402|97.7|2.1|0.2|0.3|2.6|31.5| |decode_asr_model_valid.loss.ave_10best/dev_other|2864|50948|93.8|5.6|0.6|0.6|6.8|50.8| |decode_asr_model_valid.loss.ave_10best/test_clean|2620|52576|97.5|2.3|0.2|0.3|2.8|32.7| |decode_asr_model_valid.loss.ave_10best/test_other|2939|52343|94.1|5.3|0.6|0.7|6.6|51.8| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_clean|2703|54402|98.0|1.8|0.2|0.2|2.2|28.2| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_other|2864|50948|94.8|4.5|0.7|0.5|5.7|45.1| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_clean|2620|52576|97.9|1.9|0.2|0.3|2.4|29.3| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_other|2939|52343|94.9|4.3|0.7|0.5|5.6|47.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_model_valid.loss.ave_10best/dev_clean|2703|288456|99.4|0.4|0.3|0.2|0.9|31.5| |decode_asr_model_valid.loss.ave_10best/dev_other|2864|265951|97.7|1.4|0.9|0.8|3.0|50.8| |decode_asr_model_valid.loss.ave_10best/test_clean|2620|281530|99.4|0.4|0.3|0.3|0.9|32.7| |decode_asr_model_valid.loss.ave_10best/test_other|2939|272758|97.9|1.2|0.9|0.8|2.8|51.8| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_clean|2703|288456|99.4|0.3|0.3|0.2|0.8|28.2| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_other|2864|265951|97.9|1.1|1.0|0.6|2.7|45.1| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_clean|2620|281530|99.4|0.3|0.3|0.2|0.9|29.3| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_other|2939|272758|98.1|0.9|1.0|0.6|2.5|47.0| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_model_valid.loss.ave_10best/dev_clean|2703|68010|97.2|2.1|0.7|0.4|3.3|31.5| |decode_asr_model_valid.loss.ave_10best/dev_other|2864|63110|92.7|5.6|1.7|1.2|8.6|50.8| |decode_asr_model_valid.loss.ave_10best/test_clean|2620|65818|97.0|2.2|0.9|0.4|3.4|32.7| |decode_asr_model_valid.loss.ave_10best/test_other|2939|65101|93.0|5.1|1.9|1.0|8.0|51.8| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_clean|2703|68010|97.5|1.8|0.8|0.4|2.9|28.2| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/dev_other|2864|63110|93.5|4.5|1.9|0.9|7.4|45.1| |decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_clean|2620|65818|97.3|1.9|0.8|0.4|3.0|29.3| 
|decode_lm_weight0.4_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.loss.ave_10best/test_other|2939|65101|93.9|4.1|1.9|0.8|6.9|47.0| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/transducer/train_conformer-rnn_transducer.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_conformer-rnn_transducer_raw_en_bpe5000_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 0 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 46179 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 25 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 4 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 10000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape - exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe valid_shape_file: - exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape - exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_960_sp/wav.scp - speech - kaldi_ark - - dump/raw/train_960_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - kaldi_ark - - dump/raw/dev/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0015 weight_decay: 1.0e-06 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - ▁THE - S - ▁AND - ▁OF - ▁TO - ▁A - ▁IN - ▁I - ▁HE - ▁THAT - ▁WAS - ED - ▁IT - '''' - ▁HIS - ING - ▁YOU - ▁WITH - ▁FOR - ▁HAD - T - ▁AS - ▁HER - ▁IS - ▁BE - ▁BUT - ▁NOT - ▁SHE - D - ▁AT - ▁ON - LY - ▁HIM - ▁THEY - ▁ALL - ▁HAVE - ▁BY - ▁SO - ▁THIS - ▁MY - ▁WHICH - ▁ME - ▁SAID - ▁FROM - ▁ONE - Y - E - ▁WERE - ▁WE - ▁NO - N - ▁THERE - ▁OR - ER - ▁AN - ▁WHEN - ▁ARE - ▁THEIR - ▁WOULD - ▁IF - ▁WHAT - ▁THEM - ▁WHO - ▁OUT - M - ▁DO - ▁WILL - ▁UP - ▁BEEN - P - R - ▁MAN - ▁THEN - ▁COULD - ▁MORE - C - ▁INTO - ▁NOW - ▁VERY - ▁YOUR - ▁SOME - ▁LITTLE - ES - ▁TIME - RE - ▁CAN - ▁LIKE - LL - ▁ABOUT - ▁HAS - ▁THAN - ▁DID - ▁UPON - ▁OVER - IN - ▁ANY - ▁WELL - ▁ONLY - B - ▁SEE - ▁GOOD - ▁OTHER - ▁TWO - L - ▁KNOW - ▁GO - ▁DOWN - ▁BEFORE - A - AL - ▁OUR - ▁OLD - ▁SHOULD - ▁MADE - ▁AFTER - ▁GREAT - ▁DAY - ▁MUST - ▁COME - ▁HOW - ▁SUCH - ▁CAME - LE - ▁WHERE - ▁US - ▁NEVER - ▁THESE - ▁MUCH - ▁DE - ▁MISTER - ▁WAY - G - ▁S - ▁MAY - ATION - ▁LONG - OR - ▁AM - ▁FIRST - ▁BACK - ▁OWN - ▁RE - ▁AGAIN - ▁SAY - ▁MEN - ▁WENT - ▁HIMSELF - ▁HERE - NESS - ▁THINK - V - 
IC - ▁EVEN - ▁THOUGHT - ▁HAND - ▁JUST - ▁O - ▁UN - VE - ION - ▁ITS - 'ON' - ▁MAKE - ▁MIGHT - ▁TOO - K - ▁AWAY - ▁LIFE - TH - ▁WITHOUT - ST - ▁THROUGH - ▁MOST - ▁TAKE - ▁DON - ▁EVERY - F - O - ▁SHALL - ▁THOSE - ▁EYES - AR - ▁STILL - ▁LAST - ▁HOUSE - ▁HEAD - ABLE - ▁NOTHING - ▁NIGHT - ITY - ▁LET - ▁MANY - ▁OFF - ▁BEING - ▁FOUND - ▁WHILE - EN - ▁SAW - ▁GET - ▁PEOPLE - ▁FACE - ▁YOUNG - CH - ▁UNDER - ▁ONCE - ▁TELL - AN - ▁THREE - ▁PLACE - ▁ROOM - ▁YET - ▁SAME - IL - US - U - ▁FATHER - ▁RIGHT - EL - ▁THOUGH - ▁ANOTHER - LI - RI - ▁HEART - IT - ▁PUT - ▁TOOK - ▁GIVE - ▁EVER - ▁E - ▁PART - ▁WORK - ERS - ▁LOOK - ▁NEW - ▁KING - ▁MISSUS - ▁SIR - ▁LOVE - ▁MIND - ▁LOOKED - W - RY - ▁ASKED - ▁LEFT - ET - ▁LIGHT - CK - ▁DOOR - ▁MOMENT - RO - ▁WORLD - ▁THINGS - ▁HOME - UL - ▁THING - LA - ▁WHY - ▁MOTHER - ▁ALWAYS - ▁FAR - FUL - ▁WATER - CE - IVE - UR - ▁HEARD - ▁SOMETHING - ▁SEEMED - I - LO - ▁BECAUSE - OL - ▁END - ▁TOLD - ▁CON - ▁YES - ▁GOING - ▁GOT - RA - IR - ▁WOMAN - ▁GOD - EST - TED - ▁FIND - ▁KNEW - ▁SOON - ▁EACH - ▁SIDE - H - TON - MENT - ▁OH - NE - Z - LING - ▁AGAINST - TER - ▁NAME - ▁MISS - ▁QUITE - ▁WANT - ▁YEARS - ▁FEW - ▁BETTER - ENT - ▁HALF - ▁DONE - ▁ALSO - ▁BEGAN - ▁HAVING - ▁ENOUGH - IS - ▁LADY - ▁WHOLE - LESS - ▁BOTH - ▁SEEN - ▁SET - ▁WHITE - ▁COURSE - IES - ▁VOICE - ▁CALLED - ▁D - ▁EX - ATE - ▁TURNED - ▁GAVE - ▁C - ▁POOR - MAN - UT - NA - ▁DEAR - ISH - ▁GIRL - ▁MORNING - ▁BETWEEN - LED - ▁NOR - IA - ▁AMONG - MA - ▁ - ▁SMALL - ▁REST - ▁WHOM - ▁FELT - ▁HANDS - ▁MYSELF - ▁HIGH - ▁M - ▁HOWEVER - ▁HERSELF - ▁P - CO - ▁STOOD - ID - ▁KIND - ▁HUNDRED - AS - ▁ROUND - ▁ALMOST - TY - ▁SINCE - ▁G - AM - ▁LA - SE - ▁BOY - ▁MA - ▁PERHAPS - ▁WORDS - ATED - ▁HO - X - ▁MO - ▁SAT - ▁REPLIED - ▁FOUR - ▁ANYTHING - ▁TILL - ▁UNTIL - ▁BLACK - TION - ▁CRIED - RU - TE - ▁FACT - ▁HELP - ▁NEXT - ▁LOOKING - ▁DOES - ▁FRIEND - ▁LAY - ANCE - ▁POWER - ▁BROUGHT - VER - ▁FIRE - ▁KEEP - PO - FF - ▁COUNTRY - ▁SEA - ▁WORD - ▁CAR - ▁DAYS - ▁TOGETHER - ▁IMP - ▁REASON - KE - ▁INDEED - TING - ▁MATTER - ▁FULL - ▁TEN - TIC - ▁LAND - ▁RATHER - ▁AIR - ▁HOPE - ▁DA - ▁OPEN - ▁FEET - ▁EN - ▁FIVE - ▁POINT - ▁CO - OM - ▁LARGE - ▁B - ▁CL - ME - ▁GONE - ▁CHILD - INE - GG - ▁BEST - ▁DIS - UM - ▁HARD - ▁LORD - OUS - ▁WIFE - ▁SURE - ▁FORM - DE - ▁DEATH - ANT - ▁NATURE - ▁BA - ▁CARE - ▁BELIEVE - PP - ▁NEAR - ▁RO - ▁RED - ▁WAR - IE - ▁SPEAK - ▁FEAR - ▁CASE - ▁TAKEN - ▁ALONG - ▁CANNOT - ▁HEAR - ▁THEMSELVES - CI - ▁PRESENT - AD - ▁MASTER - ▁SON - ▁THUS - ▁LI - ▁LESS - ▁SUN - ▁TRUE - IM - IOUS - ▁THOUSAND - ▁MONEY - ▁W - ▁BEHIND - ▁CHILDREN - ▁DOCTOR - AC - ▁TWENTY - ▁WISH - ▁SOUND - ▁WHOSE - ▁LEAVE - ▁ANSWERED - ▁THOU - ▁DUR - ▁HA - ▁CERTAIN - ▁PO - ▁PASSED - GE - TO - ▁ARM - ▁LO - ▁STATE - ▁ALONE - TA - ▁SHOW - ▁NEED - ▁LIVE - ND - ▁DEAD - ENCE - ▁STRONG - ▁PRE - ▁TI - ▁GROUND - SH - TI - ▁SHORT - IAN - UN - ▁PRO - ▁HORSE - MI - ▁PRINCE - ARD - ▁FELL - ▁ORDER - ▁CALL - AT - ▁GIVEN - ▁DARK - ▁THEREFORE - ▁CLOSE - ▁BODY - ▁OTHERS - ▁SENT - ▁SECOND - ▁OFTEN - ▁CA - ▁MANNER - MO - NI - ▁BRING - ▁QUESTION - ▁HOUR - ▁BO - AGE - ▁ST - ▁TURN - ▁TABLE - ▁GENERAL - ▁EARTH - ▁BED - ▁REALLY - ▁SIX - 'NO' - IST - ▁BECOME - ▁USE - ▁READ - ▁SE - ▁VI - ▁COMING - ▁EVERYTHING - ▁EM - ▁ABOVE - ▁EVENING - ▁BEAUTIFUL - ▁FEEL - ▁RAN - ▁LEAST - ▁LAW - ▁ALREADY - ▁MEAN - ▁ROSE - WARD - ▁ITSELF - ▁SOUL - ▁SUDDENLY - ▁AROUND - RED - ▁ANSWER - ICAL - ▁RA - ▁WIND - ▁FINE - ▁WON - ▁WHETHER - ▁KNOWN - BER - NG - ▁TA - ▁CAPTAIN - ▁EYE - ▁PERSON - ▁WOMEN - ▁SORT - ▁ASK - ▁BROTHER - ▁USED - ▁HELD - ▁BIG - ▁RETURNED - ▁STRANGE - ▁BU - ▁PER - ▁FREE - ▁EITHER - ▁WITHIN - ▁DOUBT - 
▁YEAR - ▁CLEAR - ▁SIGHT - ▁GRA - ▁LOST - ▁KEPT - ▁F - PE - ▁BAR - ▁TOWN - ▁SLEEP - ARY - ▁HAIR - ▁FRIENDS - ▁DREAM - ▁FELLOW - PER - ▁DEEP - QUE - ▁BECAME - ▁REAL - ▁PAST - ▁MAKING - RING - ▁COMP - ▁ACT - ▁BAD - HO - STER - ▁YE - ▁MEANS - ▁RUN - MEN - ▁DAUGHTER - ▁SENSE - ▁CITY - ▁SOMETIMES - ▁TOWARDS - ▁ROAD - ▁SP - ▁LU - ▁READY - ▁FOOT - ▁COLD - ▁SA - ▁LETTER - ▁ELSE - ▁MAR - ▁STA - BE - ▁TRUTH - ▁LE - BO - ▁BUSINESS - CHE - ▁JOHN - ▁SUBJECT - ▁COURT - ▁IDEA - ILY - ▁RIVER - ATING - ▁FAMILY - HE - ▁DIDN - ▁GLAD - ▁SEVERAL - IAL - ▁UNDERSTAND - ▁SC - ▁POSSIBLE - ▁DIFFERENT - ▁RETURN - ▁ARMS - ▁LOW - ▁HOLD - ▁TALK - ▁RU - ▁WINDOW - ▁INTEREST - ▁SISTER - SON - ▁SH - ▁BLOOD - ▁SAYS - ▁CAP - ▁DI - ▁HUMAN - ▁CAUSE - NCE - ▁THANK - ▁LATE - GO - ▁CUT - ▁ACROSS - ▁STORY - NT - ▁COUNT - ▁ABLE - DY - LEY - ▁NUMBER - ▁STAND - ▁CHURCH - ▁THY - ▁SUPPOSE - LES - BLE - OP - ▁EFFECT - BY - ▁K - ▁NA - ▁SPOKE - ▁MET - ▁GREEN - ▁HUSBAND - ▁RESPECT - ▁PA - ▁FOLLOWED - ▁REMEMBER - ▁LONGER - ▁AGE - ▁TAKING - ▁LINE - ▁SEEM - ▁HAPPY - LAND - EM - ▁STAY - ▁PLAY - ▁COMMON - ▁GA - ▁BOOK - ▁TIMES - ▁OBJECT - ▁SEVEN - QUI - DO - UND - ▁FL - ▁PRETTY - ▁FAIR - WAY - ▁WOOD - ▁REACHED - ▁APPEARED - ▁SWEET - ▁FALL - BA - ▁PASS - ▁SIGN - ▁TREE - IONS - ▁GARDEN - ▁ILL - ▁ART - ▁REMAIN - ▁OPENED - ▁BRIGHT - ▁STREET - ▁TROUBLE - ▁PAIN - ▁CONTINUED - ▁SCHOOL - OUR - ▁CARRIED - ▁SAYING - HA - ▁CHANGE - ▁FOLLOW - ▁GOLD - ▁SW - ▁FEELING - ▁COMMAND - ▁BEAR - ▁CERTAINLY - ▁BLUE - ▁NE - CA - ▁WILD - ▁ACCOUNT - ▁OUGHT - UD - ▁T - ▁BREATH - ▁WANTED - ▁RI - ▁HEAVEN - ▁PURPOSE - ▁CHARACTER - ▁RICH - ▁PE - ▁DRESS - OS - FA - ▁TH - ▁ENGLISH - ▁CHANCE - ▁SHIP - ▁VIEW - ▁TOWARD - AK - ▁JOY - ▁JA - ▁HAR - ▁NEITHER - ▁FORCE - ▁UNCLE - DER - ▁PLAN - ▁PRINCESS - DI - ▁CHIEF - ▁HAT - ▁LIVED - ▁AB - ▁VISIT - ▁MOR - TEN - ▁WALL - UC - ▁MINE - ▁PLEASURE - ▁SMILE - ▁FRONT - ▁HU - ▁DEAL - OW - ▁FURTHER - GED - ▁TRIED - DA - VA - ▁NONE - ▁ENTERED - ▁QUEEN - ▁PAY - ▁EL - ▁EXCEPT - ▁SHA - ▁FORWARD - ▁EIGHT - ▁ADDED - ▁PUBLIC - ▁EIGHTEEN - ▁STAR - ▁HAPPENED - ▁LED - ▁WALKED - ▁ALTHOUGH - ▁LATER - ▁SPIRIT - ▁WALK - ▁BIT - ▁MEET - LIN - ▁FI - LT - ▁MOUTH - ▁WAIT - ▁HOURS - ▁LIVING - ▁YOURSELF - ▁FAST - ▁CHA - ▁HALL - ▁BEYOND - ▁BOAT - ▁SECRET - ENS - ▁CHAIR - RN - ▁RECEIVED - ▁CAT - RESS - ▁DESIRE - ▁GENTLEMAN - UGH - ▁LAID - EVER - ▁OCCASION - ▁WONDER - ▁GU - ▁PARTY - DEN - ▁FISH - ▁SEND - ▁NEARLY - ▁TRY - CON - ▁SEEMS - RS - ▁BELL - ▁BRA - ▁SILENCE - IG - ▁GUARD - ▁DIE - ▁DOING - ▁TU - ▁COR - ▁EARLY - ▁BANK - ▁FIGURE - IF - ▁ENGLAND - ▁MARY - ▁AFRAID - LER - ▁FO - ▁WATCH - ▁FA - ▁VA - ▁GRE - ▁AUNT - PED - ▁SERVICE - ▁JE - ▁PEN - ▁MINUTES - ▁PAN - ▁TREES - NED - ▁GLASS - ▁TONE - ▁PLEASE - ▁FORTH - ▁CROSS - ▁EXCLAIMED - ▁DREW - ▁EAT - ▁AH - ▁GRAVE - ▁CUR - PA - URE - CENT - ▁MILES - ▁SOFT - ▁AGO - ▁POSITION - ▁WARM - ▁LENGTH - ▁NECESSARY - ▁THINKING - ▁PICTURE - ▁PI - SHIP - IBLE - ▁HEAVY - ▁ATTENTION - ▁DOG - ABLY - ▁STANDING - ▁NATURAL - ▁APPEAR - OV - ▁CAUGHT - VO - ISM - ▁SPRING - ▁EXPERIENCE - ▁PAT - OT - ▁STOPPED - ▁REGARD - ▁HARDLY - ▁SELF - ▁STRENGTH - ▁GREW - ▁KNIGHT - ▁OPINION - ▁WIDE - ▁INSTEAD - ▁SOUTH - ▁TRANS - ▁CORNER - ▁LEARN - ▁ISLAND - ▁MI - ▁THIRD - ▁STE - ▁STRAIGHT - ▁TEA - ▁BOUND - ▁SEEING - ▁JU - ▁DINNER - ▁BEAUTY - ▁PEACE - AH - ▁REP - ▁SILENT - ▁CRE - ALLY - RIC - ▁STEP - ▁VER - ▁JO - GER - ▁SITTING - ▁THIRTY - ▁SAVE - ENED - ▁GLANCE - ▁REACH - ▁ACTION - ▁SAL - ▁SAD - ▁STONE - ITIES - ▁FRENCH - ▁STRUCK - ▁PAPER - ▁WHATEVER - ▁SUB - ▁DISTANCE - ▁WRONG - ▁KNOWLEDGE - ▁SAFE - ▁SNOW - ▁MUSIC - ▁FIFTY - RON - ▁ATTEMPT - ▁GOVERNMENT - TU 
- ▁CROWD - ▁BESIDES - ▁LOVED - ▁BOX - ▁DIRECTION - ▁TRAIN - ▁NORTH - ▁THICK - ▁GETTING - AV - ▁FLOOR - ▁COMPANY - ▁BLOW - ▁PLAIN - TRO - ▁BESIDE - ▁ROCK - ▁IMMEDIATELY - FI - ▁SHADOW - ▁SIT - ORS - ILE - ▁DRINK - ▁SPOT - ▁DANGER - ▁AL - ▁SAINT - ▁SLOWLY - ▁PALACE - IER - ▁RESULT - ▁PETER - ▁FOREST - ▁BELONG - ▁SU - ▁PAR - RIS - ▁TEARS - ▁APPEARANCE - ▁GATE - BU - ITION - ▁QUICKLY - ▁QUIET - ▁LONDON - ▁START - ▁BROWN - TRA - KIN - ▁CONSIDER - ▁BATTLE - ▁ANNE - ▁PIECE - ▁DIED - ▁SUCCESS - ▁LIPS - ▁FILLED - ▁FORGET - ▁POST - IFIED - ▁MARGARET - ▁FOOD - HAM - ▁PLEASANT - ▁FE - ▁EXPRESSION - ▁POCKET - ▁FRESH - ▁WEAR - TRI - ▁BROKEN - ▁LAUGHED - GING - ▁FOLLOWING - WN - IP - ▁TOUCH - ▁YOUTH - ATIVE - ▁LEG - ▁WEEK - ▁REMAINED - ▁EASY - NER - RK - ▁ENTER - ▁FIGHT - ▁PLACED - ▁TRAVEL - ▁SIMPLE - ▁GIRLS - ▁WAITING - ▁STOP - ▁WAVE - AU - ▁WISE - ▁CAMP - TURE - UB - ▁VE - ▁OFFICE - ▁GRAND - ▁FIT - ▁JUDGE - UP - MENTS - ▁QUICK - HI - ▁FLO - RIES - VAL - ▁COMFORT - ▁PARTICULAR - ▁STARTED - ▁SUIT - ▁NI - ▁PALE - ▁IMPOSSIBLE - ▁HOT - ▁CONVERSATION - ▁SCENE - ▁BOYS - ▁WIN - ▁BRE - ▁SOCIETY - ▁OUTSIDE - ▁WRITE - ▁EFFORT - ▁TALKING - ▁FORTUNE - ▁NINE - ▁WA - ▁SINGLE - ▁RULE - ▁PORT - ▁WINTER - ▁CAST - ▁CRA - ▁HAPPEN - ▁CRO - ▁SHUT - NING - ▁GUN - ▁NOBLE - ▁BEGIN - ▁PATH - ▁SKY - ▁WONDERFUL - ▁SUDDEN - ▁ARMY - ▁CHE - ▁WORTH - ▁MOUNTAIN - ▁MIN - AG - ▁FLU - ▁GRACE - ▁CHAPTER - ▁BELOW - ▁RING - ▁TURNING - ▁IRON - ▁TOP - ▁AFTERNOON - ORY - ▁EVIL - ▁TRUST - ▁BOW - ▁TRI - ▁SAIL - ▁CONTENT - ▁HORSES - ITE - ▁SILVER - AP - ▁LAD - ▁RUNNING - ▁HILL - ▁BEGINNING - ▁MAD - ▁HABIT - GRA - ▁CLOTHES - ▁MORROW - ▁CRY - ▁FASHION - ▁PRESENCE - ▁Z - FE - ▁ARRIVED - ▁QUARTER - ▁PERFECT - ▁WO - ▁TRA - ▁USUAL - ▁NECK - ▁MARRIED - ▁SEAT - ▁WI - ▁GAR - ▁SAND - ▁SHORE - ▁GIVING - NY - ▁PROBABLY - ▁MINUTE - ▁EXPECT - ▁DU - ▁SHOT - ▁INSTANT - ▁DEGREE - ▁COLOR - ▁WEST - RT - ▁MARCH - ▁BIRD - ▁SHOWED - ▁GREATER - ▁SERIOUS - ▁CARRY - ▁COVERED - ▁FORMER - ▁LOUD - ▁MOVED - ▁MASS - ▁SEEK - ▁CHO - GEN - ▁ROMAN - IB - ▁MOON - ▁BOARD - ▁STREAM - ▁EASILY - ▁WISHED - ▁SEARCH - ▁COULDN - ▁MONTHS - ▁SICK - LIE - ▁DUTY - ▁TWELVE - ▁FAINT - ▁STRANGER - ▁SURPRISE - ▁KILL - ▁LEAVING - ▁JOURNEY - ▁SCARCELY - ▁RAISED - ▁SPEAKING - ▁TERRIBLE - ▁TOM - ▁FIELD - ▁GAME - ▁QUA - ▁PROMISE - ▁LIE - ▁CONDITION - ▁TRO - ▁PERSONAL - ▁TALL - ▁STICK - ▁THREW - ▁MARRY - ▁VAN - ▁BURN - ▁ACCORDING - ▁RISE - ▁ATTACK - ▁SWORD - ▁GUESS - ▁THOUGHTS - ▁THIN - ▁THROW - ▁CALM - SIDE - ▁VILLAGE - ▁DEN - ▁ANXIOUS - ▁MER - GI - ▁EXPECTED - ▁BALL - ▁ESPECIALLY - ▁CHARGE - ▁MEASURE - ISE - ▁NICE - ▁TRYING - ▁ALLOW - ▁SHARP - ▁BREAD - ▁HONOUR - ▁HONOR - ▁ENTIRELY - ▁BILL - ▁BRI - ▁WRITTEN - ▁AR - ▁BROKE - ▁KILLED - ▁MARK - ▁VEN - ▁LADIES - ▁LEARNED - ▁FLOWERS - PLE - ▁FORTY - ▁OFFER - ▁HAPPINESS - ▁PRAY - ▁CLASS - ▁FER - ▁PRINCIPLE - GU - ▁BOOKS - ▁SHAPE - ▁SUMMER - ▁JACK - ▁DRAW - ▁GOLDEN - ▁DECIDED - ▁LEAD - ▁UNLESS - ▁HARM - ▁LISTEN - HER - ▁SHOOK - ▁INFLUENCE - ▁PERFECTLY - ▁MARRIAGE - ▁BROAD - ▁ESCAPE - ▁STATES - ▁MIDDLE - ▁PLANT - ▁MIL - ▁MOVEMENT - ▁NOISE - ▁ENEMY - ▁HISTORY - ▁BREAK - ROUS - ▁UNDERSTOOD - ▁LATTER - FER - ▁COMES - ▁MERELY - ▁SIMPLY - WI - ▁IMAGINE - ▁LOWER - ▁CONDUCT - ▁BORN - WA - ▁YARD - ▁KA - ▁CLOSED - ▁NOTE - GA - ▁STRA - RAN - ▁EXIST - EV - ▁SPEECH - ▁BITTER - JO - ▁MAKES - ▁GRASS - ▁REPLY - ▁CHANGED - ▁MON - ▁LYING - ▁DANCE - ▁FINALLY - ▁AMERICAN - ▁ENJOY - ▁CONTAIN - ▁MEANT - USE - ▁OBSERVED - THER - ▁LAUGH - ▁AFTERWARDS - ▁BEAT - ▁RACE - ▁EQUAL - ▁RAIN - PS - ▁STEPS - ▁BENEATH - ▁TAIL - ▁TASTE - IO - EY - ▁CHAR - ▁GE - GN - TIN - ▁GROW - ▁TE - IANS - 
▁MOVE - ▁REPEATED - ▁DRIVE - TUR - ▁SI - CLOCK - ▁BRAVE - ▁MADAME - ▁LOT - ▁CASTLE - ▁HI - AND - ▁FUTURE - ▁RELATION - ▁SORRY - ▁HEALTH - ▁DICK - ▁R - ▁BUILDING - ▁EDGE - ▁BLESS - ▁SPITE - WE - ▁MIS - ▁PRISONER - ▁ALLOWED - ▁PH - ▁CATCH - MER - ETH - ▁COAT - ▁COMPLETE - ▁WOULDN - ▁CREATURE - ▁YELLOW - ▁IMPORTANT - ▁ADD - ▁PASSING - ▁DARKNESS - ▁CARRIAGE - ▁MILL - ▁FIFTEEN - NCY - ▁HUNG - ▁OB - ▁PLEASED - ▁SPREAD - ▁CURIOUS - ▁WORSE - ▁CIRCUMSTANCES - ▁GI - LAR - ▁CAL - ▁HY - ▁MERE - ▁JANE - ▁EAST - BI - ▁CUP - ▁BLIND - ▁PASSION - ▁DISCOVERED - ▁NOTICE - ▁REPORT - ▁SPACE - ▁PRESENTLY - ▁SORROW - ▁PACK - ▁DIN - CY - ▁DRY - ▁ANCIENT - ▁DRESSED - ▁COVER - ▁VO - ▁EXISTENCE - ▁EXACTLY - ▁BEAST - ▁PROPER - ▁DROPPED - ▁CLEAN - ▁COLOUR - ▁HOST - ▁CHAMBER - ▁FAITH - LET - ▁DETERMINED - ▁PRIEST - ▁STORM - ▁SKIN - ▁DARE - ▁PERSONS - ▁PICK - ▁NARROW - ▁SUPPORT - ▁PRIVATE - ▁SMILED - ▁COUSIN - ▁DRAWING - ▁ATTEND - ▁COOK - ▁PREVENT - ▁VARIOUS - ▁BLA - ▁FIXED - ▁WEAK - THE - ▁HOLE - ▁BOTTOM - ▁NOBODY - ADE - ▁LEGS - ITCH - ▁INDIVIDUAL - ▁EARS - LIKE - ▁ADVANTAGE - ▁FRANCE - ▁BON - ▁WINE - ▁LIVES - OD - ▁WALLS - ▁TIRED - ▁SHOP - ▁ANIMAL - ▁CRU - ▁WROTE - ▁ROYAL - ▁CONSIDERED - ▁MORAL - ▁COMPANION - ▁LOSE - ▁ISN - ▁BAG - ▁LAKE - ▁INTER - ▁COM - ▁LETTERS - ▁LUCK - ▁EAR - ▁GERMAN - ▁PET - ▁SAKE - ▁DROP - ▁PAID - ▁BREAKFAST - ▁LABOR - ▁DESERT - ▁DECLARED - ▁HUM - ▁STUDY - ▁INSTANCE - ONE - ▁SOMEWHAT - ▁CLOTH - ▁SPECIAL - ▁COLONEL - ▁SONG - ▁MAIN - ▁VALUE - ▁PROUD - ▁EXPRESS - ▁NATION - ▁HANDSOME - ▁CONFESS - ▁PU - ▁PASSAGE - ▁PERIOD - ▁CUSTOM - ▁HURT - ▁SHOULDER - ▁CHRIST - ZA - ▁RECEIVE - ▁DIFFICULT - ▁DEPEND - ▁MEETING - ▁CHI - ▁GEN - LIGHT - ▁BELIEVED - ▁SOCIAL - ▁DIFFICULTY - ▁GREATEST - ▁DRAWN - ▁GRANT - ▁BIRDS - ▁ANGRY - ▁HEAT - UFF - ▁DUE - ▁PLACES - ▁SIN - ▁COURAGE - ▁EVIDENTLY - ▁GENTLE - ▁CRUEL - ▁GEORGE - ▁GRI - ▁SERVANT - ▁U - ▁PURE - OOK - ▁KNOWS - ▁KNOWING - LF - ▁WRITING - ▁REMEMBERED - ▁CU - ▁HOLDING - ▁TENDER - ▁QUI - ▁BURST - ▁SURELY - IGN - ▁VALLEY - ▁FU - ▁BUTTER - ▁SPOKEN - ▁STORE - ▁DISC - ▁CHRISTIAN - ▁PARIS - ▁HENRY - ▁FINISHED - ▁PROVE - ▁FOOL - ▁SOLDIERS - ▁LANGUAGE - ▁INSIDE - ▁BAN - ▁FALLEN - ROW - ▁MAL - ▁BABY - ▁SITUATION - ▁WATCHED - ANS - ▁RUIN - ▁GENTLEMEN - ▁FRO - ▁FANCY - ▁ACCEPT - ▁SEASON - ▁OURSELVES - ▁SAN - ▁SPEED - IZED - ▁COOL - ▁SERVE - ▁VESSEL - ▁WILLIAM - ▁OBLIGED - ▁GROUP - FORM - ▁GOES - UOUS - ▁LEAVES - ▁PECULIAR - ▁NEWS - ▁VAIN - ▁EVERYBODY - ▁PIN - UG - ▁FORGOTTEN - ▁FRA - GAN - ▁CAREFULLY - ▁FLASH - UCH - ▁FUR - ▁MURDER - ▁DELIGHT - ▁WAITED - ▁RENDER - ▁PROPERTY - ▁NOTICED - ▁ROLL - ▁KNOCK - ▁EARNEST - KI - ▁HONEST - ▁PROMISED - ▁BAL - AW - ▁WALKING - ANG - ▁SQUARE - ▁QUIETLY - ▁CLOUD - WOOD - ▁FORMED - ▁HIGHER - ▁BUILT - ▁FATE - ▁TEACH - MY - ▁FALSE - ▁YORK - ▁DUST - ▁CLIMB - ▁FOND - ▁GROWN - ▁DESCEND - ▁RAG - ▁FRUIT - ▁GENERALLY - ▁OFFERED - ▁ER - ▁NURSE - POSE - ▁SPENT - ▁JOIN - ▁STATION - ▁MEANING - ▁SMOKE - HOOD - ▁ROUGH - JU - ▁LIKELY - ▁SURFACE - ▁KE - ▁MONTH - ▁POSSESSION - ▁TONGUE - ▁DUKE - ▁NOSE - ▁LAUGHING - ▁WEATHER - ▁WHISPERED - ▁SYSTEM - ▁LAWS - DDLE - ▁TOUCHED - ▁TRADE - LD - ▁SURPRISED - RIN - ▁ARCH - ▁WEALTH - FOR - ▁TEMPER - ▁FRANK - ▁GAL - ▁BARE - ▁OPPORTUNITY - ▁CLAIM - ▁ANIMALS - ▁REV - ▁COST - ▁WASH - ZE - ▁CORN - ▁OPPOSITE - ▁POLICE - ▁IDEAS - LON - ▁KEY - ▁READING - ▁COLLECT - CHED - ▁H - ▁CROWN - ▁TAR - ▁SWIFT - ▁SHOULDERS - ▁ICE - ▁GRAY - ▁SHARE - ▁PREPARED - ▁GRO - ▁UND - ▁TER - ▁EMPTY - CING - ▁SMILING - ▁AVOID - ▁DIFFERENCE - ▁EXPLAIN - ▁POUR - ▁ATTRACT - ▁OPENING - ▁WHEEL - ▁MATERIAL - ▁BREAST - ▁SUFFERING - ▁DISTINCT - ▁BOOT - 
▁ROW - ▁FINGERS - HAN - ▁ALTOGETHER - ▁FAT - ▁PAPA - ▁BRAIN - ▁ASLEEP - ▁GREY - ▁SUM - ▁GAS - ▁WINDOWS - ▁ALIVE - ▁PROCEED - ▁FLOWER - ▁LEAP - ▁PUR - ▁PIECES - ▁ALTER - ▁MEMORY - IENT - ▁FILL - ▁CLO - ▁THROWN - ▁KINGDOM - ▁RODE - IUS - ▁MAID - ▁DIM - ▁BAND - ▁VIRTUE - ▁DISH - ▁GUEST - ▁LOSS - ▁CAUSED - ▁MOTION - ▁POT - ▁MILLION - ▁FAULT - ▁LOVELY - ▁HERO - PPING - ▁UNITED - ▁SPI - SOME - BRA - ▁MOUNTAINS - ▁NU - ▁SATISFIED - ▁DOLLARS - ▁LOVER - ▁CONCEAL - ▁VAST - ▁PULL - ▁HATH - ▁RUSH - ▁J - ▁DESPAIR - EX - ▁HEIGHT - ▁CE - ▁BENT - ▁PITY - ▁RISING - ATH - ▁PRIDE - ▁HURRY - KA - ▁SETTLED - ▁JUSTICE - ▁LIFTED - PEN - ▁SOLDIER - ▁FINDING - ▁REMARK - ▁REGULAR - ▁STRUGGLE - ▁MACHINE - ▁SING - ▁HURRIED - ▁SUFFICIENT - ▁REPRESENT - ▁DOUBLE - ▁ALARM - ▁SUPPER - ▁DREADFUL - ▁FORE - ATOR - ▁STOCK - ▁TIN - ▁EXAMPLE - ▁ROOF - ▁FLOW - ▁SUPPOSED - ▁PRESERV - ▁L - ▁LISTENED - OC - ▁STO - ▁SECURE - ▁FRIGHTENED - ▁DISTURB - ▁EMOTION - ▁SERVANTS - ▁YO - ▁BUY - ▁FORCED - ▁KITCHEN - ▁TERROR - ▁STAIRS - ▁SIXTY - KER - ▁ORDINARY - ▁DIRECTLY - ▁HEADS - ▁METHOD - ▁FORGIVE - ▁AWFUL - ▁REFLECT - ▁GREATLY - ▁TALKED - ▁RIDE - STONE - ▁FAVOUR - ▁WELCOME - ▁SEIZED - OU - ▁CONTROL - ▁ORDERED - ▁ANGEL - ▁USUALLY - ▁POET - ▁BOLD - LINE - ▁ADVENTURE - ▁WATCHING - ▁FOLK - ▁MISTRESS - IZE - ▁GROWING - ▁CAVE - ▁EVIDENCE - ▁FINGER - ▁SEVENTEEN - ▁MOVING - EOUS - ▁DOESN - ▁COW - ▁TYPE - ▁BOIL - ▁TALE - ▁DELIVER - ▁FARM - ▁MONSIEUR - ▁GATHERED - ▁FEELINGS - ▁RATE - ▁REMARKED - ▁PUTTING - ▁MAT - ▁CONTRARY - ▁CRIME - ▁PLA - ▁COL - ▁NEARER - TES - ▁CIVIL - ▁SHAME - ▁LOOSE - ▁DISCOVER - ▁FLAT - ▁TWICE - ▁FAIL - VIS - ▁UNC - EA - ▁EUROPE - ▁PATIENT - ▁UNTO - ▁SUFFER - ▁PAIR - ▁TREASURE - OSE - ▁EAGER - ▁FLY - ▁N - ▁VAL - ▁DAN - ▁SALT - ▁BORE - BBE - ▁ARTHUR - ▁AFFAIRS - ▁SLOW - ▁CONSIST - ▁DEVIL - LAN - ▁AFFECTION - ▁ENGAGED - ▁KISS - ▁YA - ▁OFFICER - IFICATION - ▁LAMP - ▁PARTS - HEN - ▁MILK - ▁PROCESS - ▁GIFT - ▁PULLED - ▁HID - ▁RAY - ▁EXCELLENT - ▁IMPRESSION - ▁AUTHORITY - ▁PROVED - ▁TELLING - TTE - ▁TOWER - ▁CONSEQUENCE - ▁FAVOR - ▁FLEW - ▁CHARLES - ISTS - ▁ADDRESS - ▁FAMILIAR - ▁LIMIT - ▁CONFIDENCE - ▁RARE - ▁WEEKS - ▁WOODS - ▁INTENTION - ▁DIRECT - ▁PERFORM - ▁SOLEMN - ▁DISTANT - ▁IMAGE - ▁PRESIDENT - ▁FIRM - ▁INDIAN - ▁RANK - ▁LIKED - ▁AGREE - ▁HOUSES - ▁WIL - ▁MATTERS - ▁PRISON - ▁MODE - ▁MAJOR - ▁WORKING - ▁SLIP - ▁WEIGHT - ▁AWARE - ▁BUSY - ▁LOOKS - ▁WOUND - ▁THOR - ▁BATH - ▁EXERCISE - ▁SIMILAR - ▁WORE - ▁AMOUNT - ▁QUESTIONS - ▁VIOLENT - ▁EXCUSE - ▁ASIDE - ▁TUR - ▁DULL - OF - ▁EMPEROR - ▁NEVERTHELESS - ▁SHOUT - ▁EXPLAINED - ▁SIZE - ▁ACCOMPLISH - FORD - CAN - ▁MISTAKE - ▁INSTANTLY - ▁SMOOTH - ▁STRIKE - ▁BOB - ISED - ▁HORROR - ▁SCIENCE - ▁PROTEST - ▁MANAGE - ▁OBEY - ▁NECESSITY - ▁SPLENDID - ▁PRESS - ▁INTERESTING - ▁RELIGION - ▁UNKNOWN - ▁FIERCE - ▁DISAPPEARED - ▁HOLY - ▁HATE - ▁PLAYED - ▁LIN - ▁NATURALLY - ▁DROVE - ▁LOUIS - TIES - ▁BRAND - INESS - RIE - ▁SHOOT - ▁CONSENT - ▁SEATED - ▁LINES - GUE - ▁AGREED - ▁CIRCLE - ▁STIR - ▁STREETS - ▁TASK - ▁RID - ▁PRODUCED - ▁ACCIDENT - ▁WITNESS - ▁LIBERTY - ▁DETAIL - ▁MINISTER - ▁POWERFUL - ▁SAVAGE - ▁SIXTEEN - ▁PRETEND - ▁COAST - ▁SQU - ▁UTTER - ▁NAMED - ▁CLEVER - ▁ADMIT - ▁COUPLE - ▁WICKED - ▁MESSAGE - ▁TEMPLE - ▁STONES - ▁YESTERDAY - ▁HILLS - DAY - ▁SLIGHT - ▁DIAMOND - ▁POSSIBLY - ▁AFFAIR - ▁ORIGINAL - ▁HEARING - ▁WORTHY - ▁SELL - NEY - ICK - ▁COTTAGE - ▁SACRIFICE - ▁PROGRESS - ▁SHOCK - ▁DESIGN - ▁SOUGHT - ▁PIT - ▁SUNDAY - ▁OTHERWISE - ▁CABIN - ▁PRAYER - ▁DWELL - ▁GAIN - ▁BRIDGE - ▁PARTICULARLY - ▁YIELD - ▁TREAT - RIGHT - ▁OAK - ▁ROPE - WIN - ▁ORDERS - ▁SUSPECT - ▁EDWARD - AB - ▁ELEVEN - 
▁TEETH - ▁OCCURRED - DDING - ▁AMERICA - ▁FALLING - ▁LION - ▁DEPART - ▁KEEPING - ▁DEMAND - ▁PAUSED - ▁CEASED - INA - ▁FUN - ▁CHEER - ▁PARDON - ▁NATIVE - LUS - LOW - ▁DOGS - ▁REQUIRED - ILITY - ▁ELECT - ▁ENTERTAIN - ITUDE - ▁HUGE - ▁CARRYING - ▁BLU - ▁INSIST - ▁SATISFACTION - ▁HUNT - ▁COUNTENANCE - ▁UPPER - ▁MAIDEN - ▁FAILED - ▁JAMES - ▁FOREIGN - ▁GATHER - ▁TEST - BOARD - ▁TERMS - ▁SILK - ▁BEG - ▁BROTHERS - ▁PAGE - ▁KNEES - ▁SHOWN - ▁PROFESSOR - ▁MIGHTY - ▁DEFI - ▁CHARM - ▁REQUIRE - ▁LOG - MORE - ▁PROOF - ▁POSSESSED - ▁SOFTLY - ▁UNFORTUNATE - ▁PRICE - ▁SEVERE - ▁SINGING - ▁STAGE - ▁FREEDOM - ▁SHOUTED - ▁FARTHER - ▁MAJESTY - ▁PREVIOUS - ▁GUIDE - ▁MATCH - ▁CHEST - ▁INTENDED - ▁BI - ▁EXCITEMENT - ▁OFFICERS - ▁SUR - ▁SHAKE - ▁SENTIMENT - ▁GENTLY - ▁SUCCEEDED - ▁MENTION - ▁LOCK - ▁ACQUAINTANCE - ▁IMAGINATION - ▁PHYSICAL - ▁LEADING - ▁SLAVE - ▁CART - ▁POINTED - ▁STEAM - ▁SHADE - ▁PIPE - ▁BASE - ▁INVENT - ▁ALAS - ▁WORKED - ▁REGRET - ▁BUR - ▁FAITHFUL - ▁MENTIONED - ▁RECORD - ▁COMPLAIN - ▁SUPERIOR - ▁BAY - ▁PAL - EMENT - UE - ▁SEVENTY - ▁HOTEL - ▁SHEEP - ▁MEAL - ▁ADVICE - ▁HIDDEN - ▁DEMANDED - ▁CONSCIOUS - ▁BROW - ▁POSSESS - ▁FOURTH - ▁EVENTS - ▁FRI - ▁PRAISE - ▁ADVANCED - ▁RESOLVED - ▁STUFF - ▁CHEERFUL - ▁BIRTH - ▁GRIEF - ▁AFFORD - ▁FAIRY - ▁WAKE - ▁SIDES - ▁SUBSTANCE - ▁ARTICLE - ▁LEVEL - ▁MIST - ▁JOINED - ▁PRACTICAL - ▁CLEARLY - ▁TRACE - ▁AWAKE - ▁OBSERVE - ▁BASKET - ▁LACK - VILLE - ▁SPIRITS - ▁EXCITED - ▁ABANDON - ▁SHINING - ▁FULLY - ▁CALLING - ▁CONSIDERABLE - ▁SPRANG - ▁MILE - ▁DOZEN - ▁PEA - ▁DANGEROUS - ▁WIT - ▁JEW - ▁POUNDS - ▁FOX - ▁INFORMATION - ▁LIES - ▁DECK - NNY - ▁PAUL - ▁STARS - ▁ANGER - ▁SETTLE - ▁WILLING - ▁ADAM - ▁FACES - ▁SMITH - ▁IMPORTANCE - ▁STRAIN - WAR - ▁SAM - ▁FEATHER - ▁SERVED - ▁AUTHOR - ▁PERCEIVED - ▁FLAME - ▁DIVINE - ▁TRAIL - ▁ANYBODY - ▁SIGH - ▁DELICATE - KY - ▁FOLD - ▁HAVEN - ▁DESIRED - ▁CURIOSITY - ▁PRACTICE - ▁CONSIDERATION - ▁ABSOLUTELY - ▁CITIZEN - ▁BOTTLE - ▁INTERESTED - ▁MEAT - ▁OCCUPIED - ▁CHOOSE - ▁THROAT - ETTE - ▁CANDLE - ▁DAWN - ▁PROTECT - ▁SENTENCE - IED - ▁ROCKS - ▁PORTION - ▁APPARENTLY - ▁PRESENTED - ▁TIGHT - ▁ACTUALLY - ▁DYING - ▁HAM - ▁DAILY - ▁SUFFERED - ▁POLITICAL - ▁BODIES - ▁MODERN - ▁COMPLETELY - ▁SOONER - TAN - ▁PROP - ▁ADVANCE - ▁REFUSED - ▁FARMER - ▁POLITE - ▁THUNDER - ▁BRIEF - ▁ELSIE - ▁SAILOR - ▁SUGGESTED - ▁PLATE - ▁AID - ▁FLESH - ▁WEEP - ▁BUCK - ▁ANTI - ▁OCEAN - ▁SPEND - WELL - ▁ODD - ▁GOVERNOR - ▁ENTRANCE - ▁SUSPICION - ▁STEPPED - ▁RAPIDLY - ▁CHECK - ▁HIDE - ▁FLIGHT - ▁CLUB - ▁ENTIRE - ▁INDIANS - ASH - ▁CAPITAL - ▁MAMMA - HAR - ▁CORRECT - ▁CRACK - ▁SENSATION - ▁WORST - ▁PACE - ▁MIDST - ▁AUGUST - ▁PROPORTION - ▁INNOCENT - LINESS - ▁REGARDED - ▁DRIVEN - ORD - ▁HASTE - ▁EDUCATION - ▁EMPLOY - ▁TRULY - ▁INSTRUMENT - ▁MAG - ▁FRAME - ▁FOOLISH - ▁TAUGHT - ▁HANG - ▁ARGUMENT - ▁NINETEEN - ▁ELDER - ▁NAY - ▁NEEDED - ▁NEIGHBOR - ▁INSTRUCT - ▁PAPERS - ▁REWARD - ▁EQUALLY - ▁FIELDS - ▁DIG - HIN - ▁CONDITIONS - JA - ▁SPAR - ▁REQUEST - ▁WORN - ▁REMARKABLE - ▁LOAD - ▁WORSHIP - ▁PARK - ▁KI - ▁INTERRUPTED - ▁SKILL - ▁TERM - LAC - ▁CRITIC - ▁DISTRESS - ▁BELIEF - ▁STERN - IGHT - ▁TRACK - ▁HUNTING - ▁JEWEL - ▁GRADUALLY - ▁GLOW - ▁RUSHED - ▁MENTAL - ▁VISITOR - ▁PICKED - ▁BEHOLD - ▁EXPRESSED - ▁RUB - ▁SKI - ARTAGNAN - ▁MOREOVER - ▁OPERATION - ▁CAREFUL - ▁KEEN - ▁ASSERT - ▁WANDER - ▁ENEMIES - ▁MYSTERIOUS - ▁DEPTH - ▁PREFER - ▁CROSSED - ▁CHARMING - ▁DREAD - ▁FLOUR - ▁ROBIN - ▁TRE - ▁RELIEF - ▁INQUIRED - ▁APPLE - ▁HENCE - ▁WINGS - ▁CHOICE - ▁JUD - OO - ▁SPECIES - ▁DELIGHTED - IUM - ▁RAPID - ▁APPEAL - ▁FAMOUS - ▁USEFUL - ▁HELEN - ▁NEWSPAPER - ▁PLENTY - ▁BEARING - 
▁NERVOUS - ▁PARA - ▁URGE - ▁ROAR - ▁WOUNDED - ▁CHAIN - ▁PRODUCE - ▁REFLECTION - ▁MERCHANT - ▁QUARREL - ▁GLORY - ▁BEGUN - ▁BARON - CUS - ▁QUEER - ▁MIX - ▁GAZE - ▁WHISPER - ▁BURIED - ▁DIV - ▁CARD - ▁FREQUENTLY - ▁TIP - ▁KNEE - ▁REGION - ▁ROOT - ▁LEST - ▁JEALOUS - CTOR - ▁SAVED - ▁ASKING - ▁TRIP - QUA - ▁UNION - HY - ▁COMPANIONS - ▁SHIPS - ▁HALE - ▁APPROACHED - ▁HARRY - ▁DRUNK - ▁ARRIVAL - ▁SLEPT - ▁FURNISH - HEAD - ▁PIG - ▁ABSENCE - ▁PHIL - ▁HEAP - ▁SHOES - ▁CONSCIOUSNESS - ▁KINDLY - ▁EVIDENT - ▁SCAR - ▁DETERMIN - ▁GRASP - ▁STEAL - ▁OWE - ▁KNIFE - ▁PRECIOUS - ▁ELEMENT - ▁PROCEEDED - ▁FEVER - ▁LEADER - ▁RISK - ▁EASE - ▁GRIM - ▁MOUNT - ▁MEANWHILE - ▁CENTURY - OON - ▁JUDGMENT - ▁AROSE - ▁VISION - ▁SPARE - ▁EXTREME - ▁CONSTANT - ▁OBSERVATION - ▁THRUST - ▁DELAY - ▁CENT - ▁INCLUD - ▁LIFT - ▁ADMIRE - ▁ISSUE - ▁FRIENDSHIP - ▁LESSON - ▁PRINCIPAL - ▁MOURN - ▁ACCEPTED - ▁BURNING - ▁CAPABLE - ▁EXTRAORDINARY - ▁SANG - ▁REMOVED - ▁HOPED - ▁HORN - ▁ALICE - ▁MUD - ▁APARTMENT - ▁FIGHTING - ▁BLAME - ▁TREMBLING - ▁SOMEBODY - ▁ANYONE - ▁BRIDE - ▁READER - ▁ROB - ▁EVERYWHERE - ▁LABOUR - ▁RECALL - ▁BULL - ▁HIT - ▁COUNCIL - ▁POPULAR - ▁CHAP - ▁TRIAL - ▁DUN - ▁WISHES - ▁BRILLIANT - ▁ASSURED - ▁FORGOT - ▁CONTINUE - ▁ACKNOWLEDG - ▁RETREAT - ▁INCREASED - ▁CONTEMPT - ▁GRANDFATHER - ▁SYMPATHY - ▁GHOST - ▁STRETCHED - ▁CREATURES - ▁CAB - ▁HIND - ▁PLAYING - ▁MISERABLE - ▁MEMBERS - ▁KINDNESS - ▁HIGHEST - ▁PRIM - ▁KISSED - ▁DESERVE - ▁HUT - ▁BEGGED - ▁EIGHTY - ▁CLOSELY - ▁WONDERED - ▁MILITARY - ▁REMIND - ▁ACCORDINGLY - ▁LARGER - ▁MAINTAIN - ▁ENGINE - ▁MOTIVE - ▁DESTROY - ▁STRIP - ▁HANS - ▁AHEAD - ▁INFINITE - ▁PROMPT - ▁INFORMED - TTLE - ▁PEER - ▁PRESSED - ▁TRAP - ▁SOMEWHERE - ▁BOUGHT - ▁VISIBLE - ▁ASHAMED - ▁TEAR - ▁NEIGHBOUR - ▁CONSTITUTION - ▁INTELLIGENCE - ▁PROFESSION - ▁HUNGRY - RIDGE - ▁SMELL - ▁STORIES - ▁LISTENING - ▁APPROACH - ▁STRING - ▁EXPLANATION - ▁IMMENSE - ▁RELIGIOUS - ▁THROUGHOUT - ▁HOLLOW - ▁AWAIT - ▁FLYING - ▁SCREAM - ▁ACTIVE - ▁RUM - ▁PRODUCT - ▁UNHAPPY - ▁VAGUE - ARIES - ▁ELIZABETH - ▁STUPID - ▁DIGNITY - ▁ISABEL - GAR - ▁BRO - ▁PITCH - ▁COMRADE - ▁STIFF - ▁RECKON - ▁SOLD - ▁SPARK - ▁STRO - ▁CRYING - ▁MAGIC - ▁REPEAT - PORT - ▁MARKED - ▁COMFORTABLE - ▁PROJECT - ▁BECOMING - ▁PARENTS - ▁SHELTER - ▁STOLE - ▁HINT - ▁NEST - ▁TRICK - ▁THOROUGHLY - ▁HOSPITAL - ▁WEAPON - ▁ROME - ▁STYLE - ▁ADMITTED - ▁SAFETY - FIELD - ▁UNDERSTANDING - ▁TREMBLE - ▁PRINT - ▁SLAVES - ▁WEARY - ▁ARTIST - ▁CREDIT - BURG - ▁CONCLUSION - ▁SELDOM - ▁UNUSUAL - ▁CLOUDS - ▁UNABLE - ▁GAY - ▁HANGING - ▁SCR - ▁BOWED - ▁DAVID - ▁VOL - ▁PUSHED - ▁ESCAPED - MOND - ▁WARN - ▁BETRAY - ▁EGGS - ▁PLAINLY - ▁EXHIBIT - ▁DISPLAY - ▁MEMBER - ▁GRIN - ▁PROSPECT - ▁BRUSH - ▁BID - ▁SUCCESSFUL - ▁EXTENT - ▁PERSUADE - ▁MID - ▁MOOD - ▁ARRANGED - ▁UNIVERSAL - ▁JIM - ▁SIGNAL - ▁WHILST - ▁PHILIP - ▁WOLF - RATE - ▁EAGERLY - ▁BILLY - ▁RETURNING - ▁CONSCIENCE - ▁FORTUNATE - ▁FEMALE - ▁GLEAM - ▁HASTILY - ▁PROVIDED - ▁OBTAIN - ▁INSTINCT - ▁CONCERNED - ▁CONCERNING - ▁SOMEHOW - ▁PINK - ▁RAGE - ▁ACCUSTOMED - ▁UNCONSCIOUS - ▁ADVISE - ▁BRANCHES - ▁TINY - ▁REFUSE - ▁BISHOP - ▁SUPPLY - ▁PEASANT - ▁LAWYER - ▁WASTE - ▁CONNECTION - ▁DEVELOP - ▁CORRESPOND - ▁PLUM - ▁NODDED - ▁SLIPPED - ▁EU - ▁CONSTANTLY - CUM - MMED - ▁FAIRLY - HOUSE - ▁KIT - ▁RANG - ▁FEATURES - ▁PAUSE - ▁PAINFUL - ▁JOE - ▁WHENCE - ▁LAUGHTER - ▁COACH - ▁CHRISTMAS - ▁EATING - ▁WHOLLY - ▁APART - ▁SUPER - ▁REVOLUTION - ▁LONELY - ▁CHEEKS - ▁THRONE - ▁CREW - ▁ATTAIN - ▁ESTABLISHED - TIME - ▁DASH - ▁FRIENDLY - ▁OPERA - ▁EARL - ▁EXHAUST - ▁CLIFF - ▁REVEAL - ▁ADOPT - ▁CENTRE - ▁MERRY - ▁SYLVIA - ▁IDEAL - ▁MISFORTUNE - ▁FEAST - 
▁ARAB - ▁NUT - ▁FETCH - ▁FOUGHT - ▁PILE - ▁SETTING - ▁SOURCE - ▁PERSIST - ▁MERCY - ▁BARK - ▁LUC - ▁DEEPLY - ▁COMPARE - ▁ATTITUDE - ▁ENDURE - ▁DELIGHTFUL - ▁BEARD - ▁PATIENCE - ▁LOCAL - ▁UTTERED - ▁VICTORY - ▁TREATED - ▁SEPARATE - ▁WAG - ▁DRAGG - ▁TITLE - ▁TROOPS - ▁TRIUMPH - ▁REAR - ▁GAINED - ▁SINK - ▁DEFEND - ▁TIED - ▁FLED - ▁DARED - ▁INCREASE - ▁POND - ▁CONQUER - ▁FOREHEAD - ▁FAN - ▁ANXIETY - ▁ENCOUNTER - ▁SEX - ▁HALT - ▁SANK - ▁CHEEK - ▁HUMBLE - ▁WRITER - ▁EMPLOYED - ▁DISTINGUISHED - ▁RAISE - ▁WHIP - ▁GIANT - ▁RANGE - ▁OBTAINED - ▁FLAG - ▁MAC - ▁JUMPED - ▁DISCOVERY - ▁NATIONAL - ▁COMMISSION - ▁POSITIVE - ▁LOVING - ▁EXACT - ▁MURMURED - ▁GAZED - ▁REFER - ▁COLLEGE - ▁ENCOURAGE - ▁NOVEL - ▁CLOCK - ▁MORTAL - ▁ROLLED - ▁RAT - IZING - ▁GUILTY - ▁VICTOR - WORTH - ▁PRA - ▁APPROACHING - ▁RELATIVE - ▁ESTATE - ▁UGLY - ▁METAL - ▁ROBERT - ▁TENT - ▁ADMIRATION - ▁FOURTEEN - ▁BARBAR - ▁WITCH - ELLA - ▁CAKE - ▁SHONE - ▁MANAGED - ▁VOLUME - ▁GREEK - ▁DANCING - ▁WRETCHED - ▁CONDEMN - ▁MAGNIFICENT - ▁CONSULT - J - ▁ORGAN - ▁FLEET - ▁ARRANGEMENT - ▁INCIDENT - ▁MISERY - ▁ARROW - ▁STROKE - ▁ASSIST - ▁BUILD - ▁SUCCEED - ▁DESPERATE - ▁WIDOW - UDE - ▁MARKET - ▁WISDOM - ▁PRECISE - ▁CURRENT - ▁SPOIL - ▁BADE - ▁WOODEN - ▁RESIST - ▁OBVIOUS - ▁SENSIBLE - FALL - ▁ADDRESSED - ▁GIL - ▁COUNSEL - ▁PURCHASE - ▁SELECT - ▁USELESS - ▁STARED - ▁ARREST - ▁POISON - ▁FIN - ▁SWALLOW - ▁BLOCK - ▁SLID - ▁NINETY - ▁SPORT - ▁PROVIDE - ▁ANNA - ▁LAMB - ▁INTERVAL - ▁JUMP - ▁DESCRIBED - ▁STRIKING - ▁PROVISION - ▁PROPOSED - ▁MELANCHOLY - ▁WARRIOR - ▁SUGGEST - ▁DEPARTURE - ▁BURDEN - ▁LIMB - ▁TROUBLED - ▁MEADOW - ▁SACRED - ▁SOLID - ▁TRU - ▁LUCY - ▁RECOVER - ▁ENERGY - ▁POWDER - ▁RESUMED - ▁INTENSE - ▁BRITISH - ▁STRAW - ▁AGREEABLE - ▁EVERYONE - ▁CONCERN - ▁VOYAGE - ▁SOUTHERN - ▁BOSOM - ▁UTTERLY - ▁FEED - ▁ESSENTIAL - ▁CONFINE - ▁HOUSEHOLD - ▁EXTREMELY - ▁WONDERING - ▁LIST - ▁PINE - PHA - ▁EXPERIMENT - ▁JOSEPH - ▁MYSTERY - ▁RESTORE - ▁BLUSH - FOLD - ▁CHOSEN - ▁INTELLECT - ▁CURTAIN - OLOGY - ▁MOUNTED - ▁LAP - ▁EPI - ▁PUNISH - ▁WEDDING - ▁RECOGNIZED - ▁DRIFT - ▁PREPARATION - ▁RESOLUTION - ▁OPPRESS - ▁FIX - ▁VICTIM - OGRAPH - ▁SUMMON - ▁JULIA - ▁FLOOD - ▁WAL - ULATION - ▁SLIGHTLY - ▁LODGE - ▁WIRE - ▁CONFUSION - ▁UNEXPECTED - ▁CONCEIVE - ▁PRIZE - ▁JESUS - ▁ADDITION - ▁RUDE - ▁FATAL - ▁CARELESS - ▁PATCH - ▁KO - ▁CATHERINE - ▁PARLIAMENT - ▁PROFOUND - ▁ALOUD - ▁RELIEVE - ▁PUSH - ABILITY - ▁ACCOMPANIED - ▁SOVEREIGN - ▁SINGULAR - ▁ECHO - ▁COMPOSED - ▁SHAKING - ATORY - ▁ASSISTANCE - ▁TEACHER - ▁HORRIBLE - ▁STRICT - ▁VERSE - ▁PUNISHMENT - ▁GOWN - ▁MISTAKEN - ▁VARI - ▁SWEPT - ▁GESTURE - ▁BUSH - ▁STEEL - ▁AFFECTED - ▁DIRECTED - ▁SURROUNDED - ▁ABSURD - ▁SUGAR - ▁SCRAP - ▁IMMEDIATE - ▁SADDLE - ▁TY - ▁ARISE - ▁SIGHED - ▁EXCHANGE - ▁IMPATIENT - ▁SNAP - ▁EMBRACE - ▁DISEASE - ▁PROFIT - ▁RIDING - ▁RECOVERED - ▁GOVERN - ▁STRETCH - ▁CONVINCED - ▁LEANING - ▁DOMESTIC - ▁COMPLEX - ▁MANIFEST - ▁INDULGE - ▁GENIUS - ▁AGENT - ▁VEIL - ▁DESCRIPTION - ▁INCLINED - ▁DECEIVE - ▁DARLING - ▁REIGN - HU - ▁ENORMOUS - ▁RESTRAIN - ▁DUTIES - BURY - TTERED - ▁POLE - ▁ENABLE - ▁EXCEPTION - ▁INTIMATE - ▁COUNTESS - ▁TRIBE - ▁HANDKERCHIEF - ▁MIDNIGHT - ▁PROBLEM - ▁TRAMP - ▁OIL - CAST - ▁CRUSH - ▁DISCUSS - ▁RAM - ▁TROT - ▁UNRE - ▁WHIRL - ▁LOCKED - ▁HORIZON - ▁OFFICIAL - ▁SCHEME - ▁DROWN - ▁PIERRE - ▁PERMITTED - ▁CONNECTED - ▁ASSURE - ▁COCK - ▁UTMOST - ▁DEVOTED - ▁RELI - ▁SUFFICIENTLY - ▁INTELLECTUAL - ▁CARPET - ▁OBJECTION - ▁AFTERWARD - ▁REALITY - ▁NEGRO - ▁RETAIN - ▁ASCEND - ▁CEASE - ▁KATE - ▁MARVEL - KO - ▁BOND - MOST - ▁COAL - GATE - ▁IGNORANT - ▁BREAKING - ▁TWIN - ▁ASTONISHMENT - ▁COFFEE - 
▁JAR - ▁CITIES - ▁ORIGIN - ▁EXECUT - ▁FINAL - ▁INHABITANTS - ▁STABLE - ▁CHIN - ▁PARTIES - ▁PLUNGE - ▁GENEROUS - ▁DESCRIBE - ▁ANNOUNCED - ▁MERIT - ▁REVERE - ▁ERE - ACIOUS - ZI - ▁DISAPPOINT - ▁SUGGESTION - ▁DOUBTLESS - ▁TRUNK - ▁STAMP - ▁JOB - ▁APPOINTED - ▁DIVIDED - ▁ACQUAINTED - CHI - ▁ABSOLUTE - ▁FEARFUL - ▁PRIVILEGE - ▁CRAFT - ▁STEEP - ▁HUNTER - ▁FORBID - ▁MODEST - ▁ENDEAVOUR - ▁SWEEP - ▁BEHELD - ▁ABSORB - ▁CONSTRUCT - ▁EMPIRE - ▁EXPEDITION - ▁ERECT - ▁OFFEND - ▁INTEND - ▁PERMIT - ▁DESTROYED - ▁CONTRACT - ▁THIRST - ▁WAGON - ▁EVA - ▁GLOOM - ▁ATMOSPHERE - ▁RESERVE - ▁VOTE - ▁GER - ▁NONSENSE - ▁PREVAIL - ▁QUALITY - ▁CLASP - ▁CONCLUDED - ▁RAP - ▁KATY - ▁ETERNAL - ▁MUTTERED - ▁NEGLECT - ▁SQUIRE - ▁CREEP - LOCK - ▁ELECTRIC - ▁HAY - ▁EXPENSE - ▁SCORN - ▁RETIRED - ▁STOUT - ▁MURMUR - ▁SHARPLY - ▁DISTRICT - ▁LEAF - ▁FAILURE - WICK - ▁JEAN - ▁NUMEROUS - ▁INFANT - ▁REALIZED - ▁TRAVELLER - ▁HUNGER - ▁JUNE - ▁MUN - ▁RECOMMEND - ▁CREP - ZZLE - ▁RICHARD - WORK - ▁MONTE - ▁PREACH - ▁PALM - AVI - ▁ANYWHERE - ▁DISPOSITION - ▁MIRROR - ▁VENTURE - ▁POUND - ▁CIGAR - ▁INVITED - ▁BENCH - ▁PROTECTION - ▁BENEFIT - ▁THOMAS - ▁CLERK - ▁REPROACH - ▁UNIFORM - ▁GENERATION - ▁SEAL - ▁COMPASS - ▁WARNING - ▁EXTENDED - ▁DIFFICULTIES - ▁MAYBE - ▁GROAN - ▁AFFECT - ▁COMB - ▁EARN - ▁WESTERN - ▁IDLE - ▁SCORE - ▁TAP - ▁ASTONISHED - ▁INTRODUCED - ▁LEISURE - ▁LIEUTENANT - ▁VIOLENCE - ▁FIRMLY - ▁MONSTER - ▁UR - ▁PROPERLY - ▁TWIST - ▁PIRATE - ▁ROBBER - ▁BATTER - ▁WEPT - ▁LEANED - ▁FOG - ▁ORNAMENT - ▁ANDREW - ▁BUSHES - ▁REPUBLIC - ▁CONFIDENT - ▁LEAN - ▁DART - ▁STOOP - ▁CURL - ▁COUNTER - ▁NORTHERN - ▁PEARL - ▁NEAREST - ▁FRANCIS - ▁WANDERING - ▁FREQUENT - ▁STARTLED - ▁STATEMENT - ▁OCCUR - ▁BLOOM - ▁NERVE - ▁INSPECT - ▁INDUCE - ▁FLATTER - ▁DATE - ▁AMBITION - ▁SLOPE - ▁MALE - ▁MADAM - ▁MONK - ▁RENT - ▁CONFIRM - ▁INVESTIGAT - ▁RABBIT - ▁REGIMENT - ▁SUBMIT - ▁SPELL - ▁FURIOUS - ▁RAIL - ▁BESTOW - ▁RALPH - ▁SCATTERED - ▁COMPELLED - ▁THREAD - ▁CHILL - ▁DENY - ▁PRONOUNC - ▁MANKIND - ▁CATTLE - ▁EXECUTION - ▁REBEL - ▁SUPREME - ▁VALUABLE - ▁LIKEWISE - ▁CONVEY - ▁TIDE - ▁GLOOMY - ▁COIN - ▁ACTUAL - ▁TAX - ▁PROVINCE - ▁GRATEFUL - ▁SPIRITUAL - ▁VANISHED - ▁DIANA - ▁HAUNT - ▁DRAGON - ▁CRAWL - ▁CHINA - ▁GRATITUDE - ▁NEAT - ▁FINISH - ▁INTENT - ▁FRIGHT - ▁EMBARRASS - ▁THIRTEEN - ▁RUTH - ▁SLIGHTEST - ▁DEVELOPMENT - ▁INTERVIEW - ▁SPECTACLE - ▁BROOK - VIE - ▁WEAKNESS - ▁AUDIENCE - ▁CONSEQUENTLY - ▁ABROAD - ▁ASPECT - ▁PAINTED - ▁RELEASE - ▁INSULT - ▁SOOTH - ▁DISAPPOINTMENT - ▁EMERG - ▁BRIG - ▁ESTEEM - ▁INVITATION - ▁PASSENGER - ▁PUBLISH - ▁PIANO - ▁IRISH - ▁DESK - ▁BEATEN - ▁FIFTH - ▁IMPULSE - ▁SWEAR - ▁EATEN - ▁PURPLE - ▁COMMITTED - ▁COUNTRIES - ▁PERCEIVE - ISON - ▁CELEBRAT - ▁GRANDMOTHER - ▁SHUDDER - ▁SUNSHINE - ▁SPANISH - ▁HITHERTO - ▁MARILLA - ▁SNAKE - ▁MOCK - ▁INTERFERE - ▁WALTER - ▁AMID - ▁MARBLE - ▁MISSION - TERIOR - ▁DRIVING - ▁FURNITURE - ▁STEADY - ▁CIRCUMSTANCE - ▁INTERPRET - ▁ENCHANT - ▁ERROR - ▁CONVICTION - ▁HELPLESS - ▁MEDICINE - ▁QUALITIES - ▁ITALIAN - ▁HASTENED - ▁OCCASIONALLY - ▁PURSUED - ▁HESITATED - ▁INDEPENDENT - ▁OLIVER - ▁LINGER - UX - ▁EXAMINED - ▁REPENT - ▁PHYSICIAN - ▁CHASE - ▁BELOVED - ▁ATTACHED - ▁FLORENCE - ▁HONEY - ▁MOUSE - ▁CRIES - ▁BAKE - ▁POEM - ▁DESTRUCTION - ▁FULFIL - ▁MESSENGER - ▁TRISTRAM - ▁FANCIED - ▁EXCESS - ▁CURSE - ▁CHU - ▁QUANTITY - ▁THORNTON - ▁CREATED - ▁CONTINUALLY - ▁LIGHTNING - ▁BORNE - ▁TOTAL - ▁DISPOSED - ▁RIFLE - ▁POLLY - ▁GOAT - ▁BACKWARD - ▁VIRGINIA - ▁KICK - ▁PERIL - ▁QUO - ▁GLORIOUS - ▁MULTITUDE - ▁LEATHER - ▁ABSENT - ▁DEMON - ▁DEBT - ▁TORTURE - ▁ACCORD - ▁MATE - ▁CATHOLIC - ▁PILL - ▁LIBRARY - ▁PURSUIT - 
▁SHIRT - ▁DEAREST - ▁COLLAR - ▁BEACH - ▁ROBE - ▁DECLARE - ▁BRANCH - ▁TEMPT - ▁STEADILY - ▁DISGUST - ▁SILLY - ▁ARRIVE - ▁DRANK - ▁LEVI - ▁COMMUNICAT - ▁RACHEL - ▁WASHINGTON - ▁RESIGN - ▁MEANTIME - ▁LACE - ▁ENGAGEMENT - ▁QUIVER - ▁SEPARATED - ▁DISCUSSION - ▁VENTURED - ▁SURROUNDING - ▁POLISH - ▁NAIL - ▁SWELL - ▁JOKE - ▁LINCOLN - ▁STUDENT - ▁GLITTER - ▁RUSSIAN - ▁READILY - ▁CHRIS - ▁POVERTY - ▁DISGRACE - ▁CHEESE - ▁HEAVILY - ▁SCALE - ▁STAFF - ▁ENTREAT - ▁FAREWELL - ▁LUNCH - ▁PEEP - ▁MULE - ▁SOMEONE - ▁DISAPPEAR - ▁DECISION - ▁PISTOL - ▁PUN - ▁SPUR - ▁ASSUMED - ▁EXTEND - ▁ENTHUSIASM - ▁DEFINITE - ▁UNDERTAKE - ▁COMMITTEE - ▁SIMON - ▁FENCE - ▁APPLIED - ▁RELATED - ▁VICE - ▁UNPLEASANT - ▁PROBABLE - ▁PROCURE - ▁FROWN - ▁CLOAK - ▁HUMANITY - ▁FAMILIES - ▁PHILOSOPHER - ▁DWARF - ▁OVERCOME - ▁DEFEAT - ▁FASTENED - ▁MARSH - ▁CLASSES - ▁TOMB - ▁GRACIOUS - ▁REMOTE - ▁CELL - ▁SHRIEK - ▁RESCUE - ▁POOL - ▁ORGANIZ - ▁CHOSE - ▁CUTTING - ▁COWARD - ▁BORDER - ▁DIRTY - ▁MONKEY - ▁HOOK - ▁CHUCK - ▁EMILY - ▁JEST - ▁PLAC - ▁WEIGH - ▁ASSOCIATE - ▁GLIMPSE - ▁STUCK - ▁BOLT - ▁MURDERER - ▁PONY - ▁DISTINGUISH - ▁INSTITUTION - ▁CUNNING - ▁COMPLIMENT - ▁APPETITE - ▁REPUTATION - ▁FEEBLE - ▁KIN - ▁SERIES - ▁GRACEFUL - ▁PLATFORM - ▁BREEZE - ▁PHRASE - ▁CLAY - MONT - ▁RATTL - ▁OPPOSITION - ▁LANE - ▁BOAST - ▁GROWTH - ▁INCLINATION - ▁BEHAVE - ▁SUSAN - ▁DISTINCTION - ▁DISLIKE - ▁NICHOLAS - ▁SATISFY - ▁DRAMA - ▁ELBOW - ▁GAZING - ▁CONSUM - ▁SPIN - ▁OATH - ▁CHANNEL - ▁CHARACTERISTIC - ▁SPEAR - ▁SLAIN - ▁SAUCE - ▁FROG - ▁CONCEPTION - ▁TIMID - ▁ZEAL - ▁APPARENT - SHIRE - ▁CENTER - ▁VARIETY - ▁DUSK - ▁APT - ▁COLUMN - ▁REVENGE - ▁RIVAL - ▁IMITAT - ▁PASSIONATE - ▁SELFISH - ▁NORMAN - ▁REPAIR - ▁THRILL - ▁TREATMENT - ▁ROSA - ▁MARTIN - ▁INDIFFERENT - ▁THITHER - ▁GALLANT - ▁PEPPER - ▁RECOLLECT - ▁VINE - ▁SCARCE - ▁SHIELD - ▁MINGLED - CLOSE - ▁HARSH - ▁BRICK - ▁HUMOR - ▁MISCHIEF - ▁TREMENDOUS - ▁FUNCTION - ▁SMART - ▁SULTAN - ▁DISMISS - ▁THREATENED - ▁CHEAP - ▁FLOCK - ▁ENDEAVOR - ▁WHISK - ▁ITALY - ▁WAIST - ▁FLUTTER - ▁SMOKING - ▁MONARCH - ▁AFRICA - ▁ACCUSE - ▁HERBERT - ▁REFRESH - ▁REJOICE - ▁PILLOW - ▁EXPECTATION - ▁POETRY - ▁HOPELESS - ▁PERISH - ▁PHILOSOPHY - ▁WHISTLE - ▁BERNARD - ▁LAMENT - ▁IMPROVE - ▁SUP - ▁PERPLEX - ▁FOUNTAIN - ▁LEAGUE - ▁DESPISE - ▁IGNORANCE - ▁REFERENCE - ▁DUCK - ▁GROVE - ▁PURSE - ▁PARTNER - ▁PROPHET - ▁SHIVER - ▁NEIGHBOURHOOD - ▁REPRESENTATIVE - SAIL - ▁WIP - ▁ACQUIRED - ▁CHIMNEY - ▁DOCTRINE - ▁MAXIM - ▁ANGLE - ▁MAJORITY - ▁AUTUMN - ▁CONFUSED - ▁CRISTO - ▁ACHIEVE - ▁DISGUISE - ▁REDUCED - ▁EARLIER - ▁THEATRE - ▁DECIDE - MINATED - OLOGICAL - ▁OCCUPATION - ▁VIGOROUS - ▁CONTINENT - ▁DECLINE - ▁COMMUNITY - ▁MOTIONLESS - ▁HATRED - ▁COMMUNICATION - ▁BOWL - ▁COMMENT - ▁APPROVE - ▁CEREMONY - ▁CRIMINAL - ▁SCIENTIFIC - ▁DUCHESS - ▁VIVID - ▁SHIFT - ▁AVAIL - ▁DAMP - ▁JOHNSON - ▁SLENDER - ▁CONTRAST - ▁AMUSEMENT - ▁PLOT - ▁LYN - ▁ASSOCIATION - ▁SNATCH - ▁UNCERTAIN - ▁PRESSURE - ▁PERCH - ▁APPLY - ▁PLANET - ▁NOTWITHSTANDING - ▁SWUNG - ▁STIRRED - ▁ATTENDANT - ▁ENJOYMENT - ▁WORRY - ▁ALBERT - ▁NAKED - ▁TALENT - ▁MARIAN - ▁REFORM - ▁DELIBERATE - ▁INTELLIGENT - ▁SENSITIVE - ▁YONDER - ▁PUPIL - ▁FRIGHTFUL - ▁DOUBTFUL - ▁STANDARD - ▁MAGISTRATE - ▁SHEPHERD - ▁STOMACH - ▁DEPOSIT - ▁RENEW - ▁HEDGE - ▁FRANCS - ▁POSSIBILITY - ▁RESEMBLE - ▁FATIGUE - ▁PORTRAIT - ▁FAVORITE - ▁CREAM - ▁BURG - ▁SECRETARY - ▁DIVERS - ▁ACTIVITY - ▁SPECULAT - ▁HUMOUR - ▁FITTED - ▁EXTERNAL - ▁CETERA - ▁WRAPPED - ▁WHIT - ▁FRED - ▁EXAMINATION - ▁LODGING - ▁OWING - ▁JAW - ▁CROW - ▁BALANCE - ▁PUFF - ▁TENDERNESS - ▁PORTHOS - ▁ANCHOR - ▁INTERRUPT - ▁NECESSARILY - ▁PERPETUAL - ▁AGONY - 
▁POPE - ▁SCHOLAR - ▁SCOTLAND - ▁SUPPRESS - ▁WRATH - ▁WRECK - ▁EXCEED - ▁PERFECTION - ▁INDIA - ▁TRADITION - ▁SECTION - ▁EASTERN - ▁DOORWAY - ▁WIVES - ▁CONVENTION - ▁ANNOUNC - ▁EGYPT - ▁CONTRADICT - ▁SCRATCH - ▁CENTRAL - ▁GLOVE - ▁WAX - ▁PREPARE - ▁ACCOMPANY - ▁INCREASING - ▁LIBERAL - ▁RAISING - ▁ORANGE - ▁SHOE - ▁ATTRIBUTE - ▁LITERATURE - ▁PUZZLED - ▁WITHDRAW - ▁WHITHER - ▁HAWK - ▁MOONLIGHT - ▁EXAMINE - ▁HAPPILY - ▁PRECEDE - ▁DETECTIVE - ▁INCHES - ▁SOLITARY - ▁DUTCH - ▁NAPOLEON - ▁UNEASY - ▁CARDINAL - ▁BLEW - ▁FOWL - ▁DECORAT - ▁CHILDHOOD - ▁TORMENT - ▁LOSING - ▁PERMISSION - ▁BLANK - ▁UPSTAIRS - ▁CAPACITY - ▁TRIFLE - ▁FOLLY - ▁RECOGNIZE - ▁REMOVE - ▁VENGEANCE - ▁ENTERPRISE - ▁BEDROOM - ▁ANYHOW - ▁INQUIRY - ▁ASHES - ▁DRAG - ▁HUSH - ▁AWKWARD - ▁SATURDAY - ▁GENUINE - ▁SURVIV - ▁SKIRT - ▁AFFECTIONATE - ▁TANG - ▁MUTUAL - ▁DISPUTE - ▁EAGLE - ▁INCOME - ▁BIND - ▁FAME - ▁IMPROVEMENT - ROVING - ▁DIFFER - ▁AWOKE - ▁SLEEVE - ▁SOLITUDE - ▁FAVOURITE - JI - ▁DETECT - ▁COMPREHEND - ▁PREPARING - ▁SERPENT - ▁SUMMIT - ▁KNOT - ▁KNIT - ▁COPY - ▁STOPPING - ▁FADED - ▁HIDEOUS - ▁JULIE - STEAD - ▁SHINE - ▁CONFLICT - ▁PROPOSITION - ▁REFUGE - ▁GALLERY - ▁BUNDLE - ▁AXE - ▁SLAVERY - ▁MASK - ▁ALYOSHA - ▁LADDER - ▁DEPARTMENT - ▁DISCHARGE - ▁DEPRESS - ▁GALLOP - ▁SCARLET - ▁KITTY - ▁RECEIVING - ▁SURRENDER - ▁SUSTAIN - ▁TWILIGHT - ▁CONGRESS - ▁IRELAND - ▁FUNNY - ▁LEND - ▁CONSTITUTE - ▁FUNERAL - ▁CRYSTAL - ▁SPAIN - ▁EXCEEDINGLY - ▁DAMN - ▁COMMUN - ▁CIVILIZATION - ▁PREJUDICE - ▁PORCH - ▁ASSISTANT - ▁INDUSTRY - ▁TUMBLE - ▁DEFENCE - ▁HITHER - ▁SMOT - ▁COLONI - ▁AMAZEMENT - ▁MARGUERITE - ▁MIRACLE - ▁INHERIT - ▁BEGGAR - ▁ENVELOPE - ▁INDIGNATION - ▁NATASHA - ▁PROPOSAL - ▁FRAGMENT - ▁ROUSED - ▁ROAST - ENCIES - ▁COMMENCED - ▁RESOURCE - ▁POPULATION - ▁QUOTH - ▁PURSUE - ▁EDUCAT - ▁AFFLICT - ▁CONTACT - ▁CRIMSON - ▁DIVISION - ▁DISORDER - ▁COPPER - ▁SOLICIT - ▁MODERATE - ▁DRUM - ▁SWIM - ▁SALUTE - ▁ASSUME - ▁MUSCLE - ▁OVERWHELM - ▁SHAKESPEARE - ▁STRUGGLING - ▁TRANQUIL - ▁CHICKEN - ▁TREAD - ▁CLAW - ▁BIBLE - ▁RIDGE - ▁THREAT - ▁VELVET - ▁EXPOSED - ▁IDIOT - ▁BARREL - ▁PENNY - ▁TEMPTATION - ▁DANGLARS - ▁CENTURIES - ▁DISTRIBUT - ▁REJECT - ▁RETORTED - ▁CONCENTRAT - ▁CORDIAL - ▁MOTOR - ▁CANNON - KEEP - ▁WRETCH - ▁ASSURANCE - ▁THIEF - ▁SURVEY - ▁VITAL - ▁RAILWAY - ▁JACKSON - ▁CRASH - ▁GROWL - ▁COMBAT - ▁RECOLLECTION - ▁SECURITY - ▁JACOB - ▁CLUTCH - ▁BLANKET - ▁NANCY - ▁CELLAR - ▁CONVENIENT - ▁INDIGNANT - ▁COARSE - ▁WORM - ▁SCREEN - ▁TRANSPORT - ▁BULLET - ▁APPRECIATE - ▁DEVOTION - ▁INVISIBLE - ▁DRIED - ▁MIXTURE - ▁CANDID - ▁PERFORMANCE - ▁RIPE - ▁EXQUISITE - ▁BARGAIN - ▁TOBACCO - ▁LOYAL - ▁MOULD - ▁ATTENTIVE - ▁DOROTHY - ▁BRUTE - ▁ESTABLISHMENT - ▁ABILITY - ▁INHABIT - ▁OBSCURE - ▁BORROW - ▁ESSENCE - ▁DISMAY - ▁FLEE - ▁BLADE - ▁PLUCK - ▁COFFIN - ▁SUNSET - ▁STEPHEN - ▁ECONOMIC - ▁HOLIDAY - ▁MECHANICAL - ▁COTTON - ▁AWAKENED - ▁SEIZE - ▁RIDICULOUS - ▁SANCHO - ▁HESITATION - ▁CORPSE - ▁SAVING - HOLD - FOOT - ▁ELDEST - ▁DESPITE - ▁EDITH - ▁CHERISH - ▁RESISTANCE - ▁WILSON - ▁ARGUE - ▁INQUIRE - ▁APPREHENSION - ▁AVENUE - ▁DRAKE - ▁PROPOSE - HURST - ▁INFERIOR - ▁STAIRCASE - ▁WHEREFORE - ▁CARLYLE - ▁COUCH - ▁ROUTE - ▁POLITICS - ▁TOMORROW - ▁THRONG - ▁NAUGHT - ▁SUNLIGHT - ▁INDIFFERENCE - ▁OBEDIENCE - ▁RECEPTION - ▁VEGETABLE - ▁IMPERFECT - ▁RESIDENCE - ▁TURKEY - ▁VIOLET - ▁SARAH - ▁ALTAR - ▁GRIEVE - ▁JERK - ▁ENSU - ▁MAGICIAN - ▁BLOSSOM - ▁LANTERN - ▁RESOLUTE - ▁THOUGHTFULLY - ▁FORTNIGHT - ▁TRUMPET - ▁VALJEAN - ▁UNWILLING - ▁LECTURE - ▁WHEREUPON - ▁HOLLAND - ▁CHANGING - ▁CREEK - ▁SLICE - ▁NORMAL - ▁ANNIE - ▁ACCENT - ▁FREDERICK - ▁DISAGREEABLE - ▁RUBBED - ▁DUMB - 
▁ESTABLISH - ▁IMPORT - ▁AFFIRM - ▁MATTHEW - ▁BRISK - ▁CONVERT - ▁BENDING - ▁IVAN - ▁MADEMOISELLE - ▁MICHAEL - ▁EASIER - ▁JONES - ▁FACING - ▁EXCELLENCY - ▁LITERARY - ▁GOSSIP - ▁DEVOUR - ▁STAGGER - ▁PENCIL - ▁AVERAGE - ▁HAMMER - ▁TRIUMPHANT - ▁PREFERRED - ▁APPLICATION - ▁OCCUPY - ▁AUTHORITIES - BURN - ▁ASCERTAIN - ▁CORRIDOR - ▁DELICIOUS - ▁PRACTISE - ▁UNIVERSE - ▁SHILLING - ▁CONTEST - ▁ASHORE - ▁COMMIT - ▁ADMINISTRATION - ▁STUDIED - ▁RIGID - ▁ADORN - ▁ELSEWHERE - ▁INNOCENCE - ▁JOURNAL - ▁LANDSCAPE - ▁TELEGRAPH - ▁ANGRILY - ▁CAMPAIGN - ▁UNJUST - ▁CHALLENGE - ▁TORRENT - ▁RELATE - ▁ASSEMBLED - ▁IMPRESSED - ▁CANOE - ▁CONCLUD - ▁QUIXOTE - ▁SATISFACTORY - ▁NIECE - ▁DEAF - ▁RAFT - ▁JIMMY - ▁GLID - ▁REGULAT - ▁CHATTER - ▁GLACIER - ▁ENVY - ▁STATUE - ▁BOSTON - ▁RICHMOND - ▁DENIED - ▁FANNY - ▁SOLOMON - ▁VULGAR - ▁STALK - ▁REPLACE - ▁SPOON - ▁BASIN - ▁FEATURE - ▁CONVICT - ▁ARCHITECT - ▁ADMIRAL - ▁RIBBON - ▁PERMANENT - ▁APRIL - ▁JOLLY - ▁NEIGHBORHOOD - ▁IMPART - BOROUGH - CAMP - ▁HORRID - ▁IMMORTAL - ▁PRUDENCE - ▁SPANIARD - ▁SUPPOSING - ▁TELEPHONE - ▁TEMPERATURE - ▁PENETRATE - ▁OYSTER - ▁APPOINTMENT - ▁EGYPTIAN - ▁DWELT - ▁NEPHEW - ▁RAILROAD - ▁SEPTEMBER - ▁DEVICE - ▁WHEAT - ▁GILBERT - ▁ELEGANT - ▁ADVERTISE - ▁RATIONAL - ▁TURTLE - ▁BROOD - ▁ASSEMBLY - ▁CULTIVATE - ▁EDITOR - ▁SPECIMEN - ▁UNDOUBTEDLY - ▁WHALE - ▁DROPPING - ▁BALLOON - ▁MEDICAL - COMB - ▁COMPOSITION - ▁FOOTSTEPS - ▁LAUNCELOT - ▁DISCOURSE - ▁ERRAND - ▁CONVERSE - ▁ADVANCING - ▁DOWNSTAIRS - ▁TUMULT - ▁CORRUPT - ▁SUFFICE - ▁ANGUISH - ▁SHAGGY - ▁RETIRE - ▁TIMBER - ▁BLAZE - ▁ABSTRACT - ▁EMBROIDER - ▁PHOTOGRAPH - ▁PROSPERITY - ▁TERRIBLY - ▁TERRITORY - ▁THRESHOLD - ▁PAVEMENT - ▁INJURED - ▁LIMP - ▁AGITATION - ▁RASCAL - ▁PRESUME - ▁OBSERVING - ▁OBSTACLE - ▁SIMPLICITY - ▁SLUMBER - ▁SUPPLIED - ▁COMBINATION - ▁DRAIN - ▁WILDERNESS - ▁BELIEVING - ▁VILLAIN - ▁RECKLESS - ▁INJURY - ▁CLAPP - ▁FRIDAY - ▁HERCULES - ▁KENNEDY - ▁SYMPTOM - ▁SLEDGE - ▁CEILING - ▁LEMON - ▁PLAGUE - ▁MONDAY - ▁CANVAS - ▁IMPATIENCE - ▁UNCOMFORTABLE - ▁ACCESS - ▁FROZEN - ▁SENATOR - ▁FRANZ - ▁SWIMMING - ▁BARRIER - ▁ADJUST - ▁COMPARISON - ▁PROCLAIM - ▁WRINKL - ▁OVERLOOK - ▁MITYA - ▁GUILT - ▁PERCEPTION - ▁PRECAUTION - ▁SPECTATOR - ▁SURPRISING - ▁DISTRACT - ▁DISDAIN - ▁BONNET - ▁MAGNET - ▁PROFESS - ▁CONFOUND - ▁NARRATIVE - ▁STRUCTURE - ▁SKETCH - ▁ULTIMATE - ▁GLOBE - ▁INSECT - FICIENCY - ▁ORCHARD - ▁AMIABLE - ▁DESCENT - ▁INDEPENDENCE - ▁MANUFACTURE - ▁SPRINKLE - ▁NIGHTINGALE - ▁CUSHION - ▁EMINENT - ▁SCOTT - ▁ARRAY - ▁COSETTE - ▁WAVING - ▁EXTRACT - ▁IRREGULAR - ▁PERSECUT - ▁DERIVED - ▁WITHDREW - ▁CAUTION - ▁SUSPICIOUS - ▁MEMORIES - ▁NOWHERE - ▁SUBTLE - ▁THOROUGH - Q - ▁APPROPRIATE - ▁SLAUGHTER - ▁YOURSELVES - ▁THUMB - ▁TWAS - ▁ABODE - ▁BIDDING - ▁CONSPICUOUS - ▁REBECCA - ▁SERGEANT - ▁APRON - ▁ANTICIPATE - ▁DISCIPLINE - ▁GLANCING - ▁PILGRIM - ▁SULLEN - ▁CONTRIBUTE - ▁PRAIRIE - ▁CARVED - ▁COMMERCE - ▁EXCLAMATION - ▁MUSCULAR - ▁NOVEMBER - ▁PHENOMENA - ▁SYMBOL - ▁UMBRELLA - ▁DIMINISH - ▁PARLOUR - ▁THREATENING - ▁STUMP - ▁EXTENSIVE - ▁PLEASING - ▁REMEMBRANCE - ▁COMBINED - ▁SHERIFF - ▁SHAFT - ▁LAURA - ▁INTERCOURSE - ▁STRICKEN - ▁SUPPLIES - ▁LANDLORD - ▁SHRINK - ▁PRICK - ▁CAESAR - ▁DRUG - ▁BEWILDERED - ▁NAUTILUS - ▁BRUTAL - ▁COMMERCIAL - ▁MAGGIE - ▁SPHERE - ▁VIRGIN - ▁BRETHREN - ▁DESTINY - ▁POLICY - ▁TERRIFIED - ▁HOUSEKEEPER - ▁CRAZY - ▁ARDENT - ▁DISCERN - ▁WRAP - ▁MARQUIS - ▁RUSSIA - MOUTH - ▁BRITAIN - ▁HARBOUR - ▁CONCERT - ▁DONKEY - ▁DAMAGE - ▁SLIM - ABOUT - ▁LUXURY - ▁MONSTROUS - ▁TENDENCY - ▁PARADISE - ▁CULTURE - ▁JULIUS - ▁RAOUL - ▁REMEDY - ▁DECAY - ▁SCOLD - ▁SPLIT - ▁ASSAULT - ▁DECEMBER - 
▁MOSCOW - ▁EXPLORE - ▁TROUSERS - ▁WRIST - PIECE - ▁MUSKET - ▁VALENTINE - ▁TYRANT - ▁ABRAHAM - ▁MEDIUM - ▁ARTIFICIAL - ▁FACULTY - ▁OBLIGATION - ▁RESEMBLANCE - ▁INQUIRIES - ▁DETAIN - ▁SWARM - ▁PLEDGE - ▁ADMIRABLE - ▁DEFECT - ▁SUPERINTEND - ▁PATRIOT - ▁CLUNG - ▁DISMAL - ▁RECIT - ▁IGNOR - ▁AMELIA - ▁JUSTIFY - ▁ELEPHANT - ▁ESTIMATE - ▁KNELT - ▁SERVING - ▁WHIM - ▁SHRILL - ▁STUDIO - ▁TEXT - ▁ALEXANDER - ▁WROUGHT - ▁ABUNDANT - ▁SITUATED - ▁REGAIN - ▁FIERY - ▁SNEER - ▁SWEAT - ▁GLARE - ▁NIGH - ▁ESCORT - ▁INEVITABLE - ▁PSMITH - ▁RELUCTANT - ▁PRECEDING - ▁RESORT - ▁OUTRAGE - ▁AMBASSADOR - ▁CONSOLATION - ▁RECOGNITION - ▁REMORSE - ▁BEHALF - ▁FORMIDABLE - ▁GRAVITY - ▁DIVIDE - ▁CONFRONT - ▁GIGANTIC - ▁OCTOBER - ▁FLANK - ▁SLEW - ▁CLARA - ▁FILM - ▁BULK - ▁POMP - ▁ELEANOR - ▁EMPHASIS - ▁JAPANESE - ▁CAVALRY - ▁EXCLUSIVE - ▁PERFUME - ▁BRONZE - ▁FEDERAL - ▁LIQUID - ▁RUBBING - ▁OVEN - DOLPH - ▁CONVULS - ▁DEPRIVED - ▁RESPONSIBILITY - ▁SIGNIFICANT - ▁WAISTCOAT - ▁CLUSTER - ▁MARTHA - ▁REVERSE - ▁ATTORNEY - ▁DROOP - ▁SKILFUL - ▁HABITUAL - ▁PUMP - ▁INTERVEN - ▁OWL - ▁CONJECTURE - ▁FANTASTIC - ▁RESPONSIBLE - ▁DESTINED - ▁DOCUMENT - ▁THEREUPON - ▁GODDESS - ▁PACIFIC - ▁WARRANT - ▁COSTUME - ▁BRIDLE - ▁CALIFORNIA - ▁DEMOCRATIC - ▁EUSTACE - ▁SQUIRREL - ▁UNCOMMON - ▁MARVELLOUS - ▁PLOUGH - ▁TRAGEDY - ▁VAULT - ▁HESITATE - ▁REFRAIN - ▁ADMIRING - ▁CORPORAL - ▁ENTITLED - ▁SHREWD - ▁SQUEEZ - ▁ACCURATE - ▁TEMPEST - ▁MONUMENT - ▁SIEGE - ▁CHINESE - ▁RAVEN - ▁LOUNG - ▁ASSASSIN - ▁INFLICT - ▁AGITATED - ▁DESIRABLE - ▁EARLIEST - ▁LAUNCH - ▁PILOT - ▁PULSE - ▁MUTE - LEIGH - ▁LIQUOR - ▁SCARECROW - ▁SKULL - ▁DESOLATE - ▁SUBLIME - ▁SERENE - ▁RECESS - ▁WAKING - ▁CHARLOTTE - ▁CIRCULAR - ▁INJUSTICE - ▁PINOCCHIO - ▁PRISCILLA - ▁THYSELF - ▁OCCURRENCE - ▁CASUAL - ▁FRANTIC - ▁LEGEND - ▁FERTIL - ▁BACKGROUND - ▁DELICACY - ▁ESTRALLA - ▁MANUSCRIPT - ▁RESPONSE - ▁UNIVERSITY - ▁WOLVES - ▁SCANDAL - ▁STUMBLE - ▁HOARSE - ▁BODILY - ▁CONVENT - ▁EXAMINING - ▁INCAPABLE - ▁PERCEIVING - ▁PHILADELPHIA - ▁SUBSEQUENT - ▁THIEVES - ▁ACCUMULAT - ▁DAMSEL - ▁SCOTCH - ▁UNDERNEATH - ▁NOBILITY - ▁SMASH - ▁REVOLT - ▁ENGAGE - ▁CATHEDRAL - ▁CHAMPION - ▁DESPATCH - ▁ETERNITY - ▁JANUARY - ▁PLEADED - ▁PROBABILITY - ▁JIMMIE - ▁PARALLEL - ▁FISHERMAN - ▁JERRY - ▁SWORE - ▁DRAUGHT - ▁OPPONENT - ▁PRIMITIVE - ▁SIGNIFICANCE - ▁SUBSTANTIAL - ▁AMAZED - ▁DUNBAR - ▁COMMEND - ▁CONTEMPLATE - ▁TESTIMONY - ▁IMPERIAL - ▁ADAPT - ▁JUICE - ▁CALAMIT - CULAR - ▁CHATEAU - ▁PHOENIX - ▁PRUDENT - ▁SOLUTION - ▁VILLEFORT - ▁REACTION - ▁RELAX - ▁YU - ▁PROHIBIT - ▁DISTRUST - ▁PLUNDER - ▁WELFARE - ▁NAVIGAT - ▁PARLOR - ▁LAZY - ▁DETACH - OMETER - ▁PRIV - ▁DISCOURAGE - ▁OBSTINATE - ▁REJOICING - ▁SERMON - ▁VEHICLE - ▁FANCIES - ▁ENLIGHTEN - ▁ACUTE - ▁ILLUSION - ▁ANTHEA - ▁MARTIAN - ▁EXCITE - ▁GENEROSITY - OLOGIST - ▁AMAZING - ▁UNWORTHY - ▁INTERNAL - ▁INCENSE - ▁VIBRAT - ▁ADHERE - ROACH - ▁FEBRUARY - ▁MEXICAN - ▁POTATOES - ▁INCESSANT - ▁INTERPOSED - ▁PARCEL - ▁VEXED - ▁PROMOTE - MIDST - ▁ARISTOCRAT - ▁CYRIL - ▁EMBARK - ▁ABUNDANCE - ▁LITERALLY - ▁SURGEON - ▁TERRACE - ▁ATLANTIC - ▁MARTYR - ▁SPECK - ▁SENATE - ▁LOAF - ▁ADMINISTER - ▁APPREHEND - ▁SUBDUED - ▁TEMPORARY - ▁DOMINION - ▁ELABORATE - ▁DIGNIFIED - ▁ELIZA - ▁SPLASH - ▁CONSEIL - ▁DEXTER - ▁UNSEEN - ▁TRAGIC - VOCATION - ▁GRATIFY - ▁BACHELOR - ▁DEFENSE - ▁EXCURSION - ▁FACULTIES - ▁PROPRIETOR - ▁SYMPATHETIC - ▁UNNECESSARY - ▁RADIANT - ▁VACANT - ▁OUNCE - ▁SCREW - ▁PHENOMENON - ▁PROMINENT - ▁WORRIED - ▁STUDIES - ▁CLIMATE - ▁KEITH - ▁ARAMIS - ▁BLISS - ▁CONTINUAL - ▁SURPASS - ▁HEBREW - ▁IDENTITY - ▁PROVOKE - ▁TEMPERAMENT - ▁CHARIOT - ▁HARBOR - ▁NINTH - ▁PRIOR - ▁DESIROUS 
- ▁JERUSALEM - ▁UNDERTAKING - ▁EDISON - ▁MIRTH - ▁SCOUT - ▁APPARATUS - ▁ILLUSTRATION - ▁INTELLIGIBLE - ▁INVARIABLY - ▁PIERCED - ▁REVIEW - ▁FLICKER - ▁HAZARD - ▁REVELATION - ▁DIXON - ▁EXCITING - ▁GOSPEL - ▁CONSTANCE - ▁OVERTAKE - ▁GUINEA - ▁ALADDIN - ▁CHICAGO - ▁TULLIVER - ▁HAMILTON - ▁GARRISON - ▁DISCIPLE - ▁INTENSITY - ▁TRAITOR - ▁CHANCELLOR - ▁PROVERB - ▁DAGGER - ▁FORESEE - ▁CONFIDE - ▁GLIMMER - ▁CHAUVELIN - ▁ILLUSTRATE - ▁VOLUNTEER - ▁JUNGLE - ▁STREAK - ▁SUNRISE - ▁DISSOLV - ▁QUEST - ▁AWHILE - ▁FELICITY - ▁LEGISLATURE - ▁LEONORA - ▁MAGAZINE - ▁PITIFUL - ▁COLONY - ▁SHAWL - ▁ARRIVING - ▁FUNDAMENTAL - ▁CARPENTER - ▁OVERFLOW - ▁EXPAND - ▁HARVEST - ▁FEMININE - ▁INNUMERABLE - ▁SCRAMBLE - ▁TWENTIETH - ▁TRIFLING - ▁GHASTL - ▁CONQUEST - ▁DANIEL - ▁FACILIT - ▁FORSAKE - ▁BEHAVIOUR - ▁GORGEOUS - ▁PRODUCING - ▁HAPPIER - ▁PROMISING - ▁RAINBOW - ▁INSTINCTIVELY - ▁DECREE - ▁EYEBROWS - ▁IRRESISTIBLE - ▁PHARAOH - ▁SCROOGE - ▁UNNATURAL - ▁CRUMBS - ▁REFINED - ▁DREARY - ▁TRENCH - ▁CONVINCE - ▁FRINGE - ▁EXTREMITY - ▁INTIMACY - ▁SCOUNDREL - ▁SUFFRAGE - ▁UNEASINESS - ▁BARRICADE - ▁CIRCULAT - ▁SAMUEL - ▁BRUCE - ▁DARCY - <sos/eos> input_size: null init: null model_conf: transducer_weight: 1.0 auxiliary_ctc_weight: 0.3 report_cer: true report_wer: true encoder_conf: main_conf: pos_wise_layer_type: linear pos_wise_act_type: swish pos_enc_layer_type: rel_pos conv_mod_act_type: swish input_conf: block_type: conv2d dropout_rate_pos_enc: 0.1 dim_output: 512 dim_conv: 512 body_conf: - block_type: conformer dim_linear: 2048 dim_hidden: 512 heads: 8 dropout_rate: 0.1 dropout_rate_pos_enc: 0.1 dropout_rate_pos_wise: 0.1 dropout_rate_att: 0.1 normalize_before: true macaron_style: true conv_mod_kernel: 31 num_blocks: 12 joint_network_conf: dim_joint_space: 640 use_preprocessor: true token_type: bpe bpemodel: data/en_token_list/bpe_unigram5000/bpe.model non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: default frontend_conf: n_fft: 512 hop_length: 160 fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz decoder: rnn decoder_conf: rnn_type: lstm num_layers: 1 dim_embedding: 512 dim_hidden: 512 dropout: 0.1 dropout_embed: 0.2 required: - output_dir - token_list version: '202204' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, 
archivePrefix={arXiv}, primaryClass={cs.CL} } ```
234c0d2c43a155e73218dc1afe5f9aae
edvinkxs/finetuning-sentiment-model-3000-samples
edvinkxs
distilbert
24
12
transformers
0
text-classification
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,055
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3498 - Accuracy: 0.8867 - F1: 0.8903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.1
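The card stops short of a usage example, so here is a minimal inference sketch using the Hugging Face Transformers `pipeline` API. It assumes the fine-tuned weights and tokenizer load directly from this repository; the label names come from the checkpoint's own `config.id2label` mapping, which is not documented above, so treat the printed labels as something to verify rather than a given.

```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint for sentiment classification on movie reviews.
classifier = pipeline(
    "sentiment-analysis",
    model="edvinkxs/finetuning-sentiment-model-3000-samples",
)

reviews = [
    "A surprisingly touching film with terrific performances.",
    "The plot was a mess and the pacing dragged badly.",
]
for review, prediction in zip(reviews, classifier(reviews)):
    print(f"{prediction['label']} ({prediction['score']:.3f}): {review}")
```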
73a959fae90930d23c3f21df18e1a1ba
mwrob/distilbert-base-uncased-sexist
mwrob
distilbert
10
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
938
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-sexist This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.11.0
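Since usage is left undocumented here as well, the following sketch shows one plausible way to query the classifier with the lower-level Transformers API. The label-to-class mapping is not described in this card (the training dataset is unknown), so the code prints whatever names are stored in the checkpoint's `config.id2label` rather than assuming which id means "sexist".

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mwrob/distilbert-base-uncased-sexist"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Women do not belong in engineering."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
# Label names come from the uploaded config; the card itself does not document them.
for idx, prob in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(prob, 3))
```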
3a8091b123e3d6e7d15715794d06ff6c
ekojs/satdata-sentiment-tuned
ekojs
roberta
11
3
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,493
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# satdata-sentiment-tuned

This model is a fine-tuned version of [w11wo/indonesian-roberta-base-sentiment-classifier](https://huggingface.co/w11wo/indonesian-roberta-base-sentiment-classifier) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2700
- F1: 0.9310

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 38   | 0.2717          | 0.9273 |
| No log        | 2.0   | 76   | 0.2709          | 0.9273 |
| No log        | 3.0   | 114  | 0.2704          | 0.9310 |
| No log        | 4.0   | 152  | 0.2701          | 0.9310 |
| No log        | 5.0   | 190  | 0.2700          | 0.9310 |

### Framework versions

- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
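No inference example is provided above. The sketch below assumes the model keeps the Indonesian-language, three-class sentiment labels of its base checkpoint, `w11wo/indonesian-roberta-base-sentiment-classifier`; check the uploaded `config.json` before relying on the label names.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ekojs/satdata-sentiment-tuned",
    top_k=None,  # return scores for every label, not just the top one
)

# Indonesian example sentences, since the base checkpoint is an Indonesian sentiment classifier.
texts = [
    "Pelayanannya sangat memuaskan, terima kasih!",
    "Aplikasinya sering error dan lambat sekali.",
]
for text, scores in zip(texts, classifier(texts)):
    print(text, "->", scores)
```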
26e95ca2b8adb98270ed885f89aecee7
google/ul2
google
t5
14
1,663
transformers
86
text2text-generation
true
false
false
apache-2.0
['en']
['c4']
null
2
1
1
0
6
3
3
[]
false
true
true
11,874
false
# Introduction

UL2 is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes.

![model image](https://raw.githubusercontent.com/google-research/google-research/master/ul2/figs/ul2.png)

**Abstract**

Existing pre-trained models are generally geared towards a particular class of problems. To date, there seems to be still no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes with pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks ranging from language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding and information retrieval. Our model also achieve strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.

For more information, please take a look at the original paper.

Paper: [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1)

Authors: *Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler*

# Training

The checkpoint was iteratively pre-trained on C4 and fine-tuned on a variety of datasets.

## PreTraining

The model is pretrained on the C4 corpus. For pretraining, the model is trained on a total of 1 trillion tokens on C4 (2 million steps) with a batch size of 1024. The sequence length is set to 512/512 for inputs and targets. Dropout is set to 0 during pretraining. Pre-training took slightly more than one month for about 1 trillion tokens. The model has 32 encoder layers and 32 decoder layers, `dmodel` of 4096 and `df` of 16384. The dimension of each head is 256 for a total of 16 heads. Our model uses a model parallelism of 8. The same SentencePiece tokenizer as T5, with a vocab size of 32000, is used (click [here](https://huggingface.co/docs/transformers/v4.20.0/en/model_doc/t5#transformers.T5Tokenizer) for more information about the T5 tokenizer).

UL-20B can be interpreted as a model that is quite similar to T5 but trained with a different objective and slightly different scaling knobs.
UL-20B was trained using the [Jax](https://github.com/google/jax) and [T5X](https://github.com/google-research/t5x) infrastructure.

The training objective during pretraining is a mixture of different denoising strategies that are explained in the following:

## Mixture of Denoisers

To quote the paper:

> We conjecture that a strong universal model has to be exposed to solving diverse set of problems
> during pre-training. Given that pre-training is done using self-supervision, we argue that such diversity
> should be injected to the objective of the model, otherwise the model might suffer from lack a certain
> ability, like long-coherent text generation.
> Motivated by this, as well as current class of objective functions, we define three main paradigms that
> are used during pre-training:

- **R-Denoiser**: The regular denoising is the standard span corruption introduced in [T5](https://huggingface.co/docs/transformers/v4.20.0/en/model_doc/t5) that uses a range of 2 to 5 tokens as the span length, which masks about 15% of input tokens. These spans are short and potentially useful to acquire knowledge instead of learning to generate fluent text.

- **S-Denoiser**: A specific case of denoising where we observe a strict sequential order when framing the inputs-to-targets task, i.e., prefix language modeling. To do so, we simply partition the input sequence into two sub-sequences of tokens as context and target such that the targets do not rely on future information. This is unlike standard span corruption where there could be a target token with earlier position than a context token. Note that similar to the Prefix-LM setup, the context (prefix) retains a bidirectional receptive field. We note that S-Denoising with very short memory or no memory is in similar spirit to standard causal language modeling.

- **X-Denoiser**: An extreme version of denoising where the model must recover a large part of the input, given a small to moderate part of it. This simulates a situation where a model needs to generate long target from a memory with relatively limited information. To do so, we opt to include examples with aggressive denoising where approximately 50% of the input sequence is masked. This is by increasing the span length and/or corruption rate. We consider a pre-training task to be extreme if it has a long span (e.g., ≥ 12 tokens) or has a large corruption rate (e.g., ≥ 30%). X-denoising is motivated by being an interpolation between regular span corruption and language model like objectives.

See the following diagram for a more visual explanation:

![mixture-of-denoisers](https://raw.githubusercontent.com/google-research/google-research/master/ul2/figs/mod.png)

**Important**: For more details, please see section 3.1.2 of the [paper](https://arxiv.org/pdf/2205.05131v1.pdf).

## Fine-tuning

The model was continuously fine-tuned after N pretraining steps where N is typically from 50k to 100k. In other words, after each Nk steps of pretraining, the model is finetuned on each downstream task. See section 5.2.2 of the [paper](https://arxiv.org/pdf/2205.05131v1.pdf) to get an overview of all datasets that were used for fine-tuning.

As the model is continuously finetuned, finetuning is stopped on a task once it has reached state-of-the-art to save compute. In total, the model was trained for 2.65 million steps.

**Important**: For more details, please see sections 5.2.1 and 5.2.2 of the [paper](https://arxiv.org/pdf/2205.05131v1.pdf).
## Contribution This model was contributed by [Daniel Hesslow](https://huggingface.co/Seledorn). ## Examples The following shows how one can predict masked passages using the different denoising strategies. Given the size of the model the following examples need to be run on at least a 40GB A100 GPU. ### S-Denoising For *S-Denoising*, please make sure to prompt the text with the prefix `[S2S]` as shown below. ```python from transformers import T5ForConditionalGeneration, AutoTokenizer import torch model = T5ForConditionalGeneration.from_pretrained("google/ul2", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("google/ul2") input_string = "[S2S] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man with a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere <extra_id_0>" inputs = tokenizer(input_string, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(inputs, max_length=200) print(tokenizer.decode(outputs[0])) # -> <pad>. Dudley was a very good boy, but he was also very stupid.</s> ``` ### R-Denoising For *R-Denoising*, please make sure to prompt the text with the prefix `[NLU]` as shown below. ```python from transformers import T5ForConditionalGeneration, AutoTokenizer import torch model = T5ForConditionalGeneration.from_pretrained("google/ul2", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("google/ul2") input_string = "[NLU] Mr. Dursley was the director of a firm called <extra_id_0>, which made <extra_id_1>. He was a big, solid man with a bald head. Mrs. Dursley was thin and <extra_id_2> of neck, which came in very useful as she spent so much of her time <extra_id_3>. The Dursleys had a small son called Dudley and <extra_id_4>" inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda") outputs = model.generate(inputs, max_length=200) print(tokenizer.decode(outputs[0])) # -> "<pad><extra_id_0> Burrows<extra_id_1> brooms for witches and wizards<extra_id_2> had a lot<extra_id_3> scolding Dudley<extra_id_4> a daughter called Petunia. Dudley was a nasty, spoiled little boy who was always getting into trouble. He was very fond of his pet rat, Scabbers.<extra_id_5> Burrows<extra_id_3> screaming at him<extra_id_4> a daughter called Petunia</s> " ``` ### X-Denoising For *X-Denoising*, please make sure to prompt the text with the prefix `[NLG]` as shown below. ```python from transformers import T5ForConditionalGeneration, AutoTokenizer import torch model = T5ForConditionalGeneration.from_pretrained("google/ul2", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("google/ul2") input_string = "[NLG] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man wiht a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere. 
<extra_id_0>" model.cuda() inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda") outputs = model.generate(inputs, max_length=200) print(tokenizer.decode(outputs[0])) # -> "<pad><extra_id_0> Burrows<extra_id_1> a lot of money from the manufacture of a product called '' Burrows'''s ''<extra_id_2> had a lot<extra_id_3> looking down people's throats<extra_id_4> a daughter called Petunia. Dudley was a very stupid boy who was always getting into trouble. He was a big, fat, ugly boy who was always getting into trouble. He was a big, fat, ugly boy who was always getting into trouble. He was a big, fat, ugly boy who was always getting into trouble. He was a big, fat, ugly boy who was always getting into trouble. He was a big, fat, ugly boy who was always getting into trouble. He was a big, fat, ugly boy who was always getting into trouble. He was a big, fat, ugly boy who was always getting into trouble. He was a big, fat," ```
ddb9e8758a0dd67ddf11f341dcd6be1c
sd-dreambooth-library/true-guweiz-style
sd-dreambooth-library
null
24
3
diffusers
3
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
3
3
0
0
0
0
0
['text-to-image']
false
true
true
1,886
false
### True-GUWEIZ-Style on Stable Diffusion via Dreambooth

trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

#### Model by Allenbv

This is the Stable Diffusion model fine-tuned on the True-GUWEIZ-Style concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **truegwz**

You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).

You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Sample pictures of this concept (prompt template: `a truegwz paint of {}`):

![descarga 0](https://huggingface.co/sd-dreambooth-library/true-guweiz-style/resolve/main/concept_images/descarga_(13).png)
![descarga 1](https://huggingface.co/sd-dreambooth-library/true-guweiz-style/resolve/main/concept_images/descarga_(12).png)
![descarga 2](https://huggingface.co/sd-dreambooth-library/true-guweiz-style/resolve/main/concept_images/descarga_(10).png)
![descarga 3](https://huggingface.co/sd-dreambooth-library/true-guweiz-style/resolve/main/concept_images/descarga_(9).png)
![descarga 4](https://huggingface.co/sd-dreambooth-library/true-guweiz-style/resolve/main/concept_images/descarga_(8).png)
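For completeness, here is a minimal `diffusers` sketch for sampling from this checkpoint outside the linked notebooks. It assumes the repository contains a full Stable Diffusion pipeline (as sd-dreambooth-library uploads usually do) and a CUDA GPU with float16 support; the prompt wording beyond the **truegwz** token is only an illustration.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/true-guweiz-style",
    torch_dtype=torch.float16,
).to("cuda")

# "truegwz" is the instance token learned during DreamBooth training.
prompt = "a truegwz paint of a lone traveler on a misty mountain pass"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("truegwz_sample.png")
```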
3feee417779f2eb2cb8f5ea3178c088a
sd-concepts-library/thegeneral
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,025
false
### thegeneral on Stable Diffusion This is the `<bobknight>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<bobknight> 0](https://huggingface.co/sd-concepts-library/thegeneral/resolve/main/concept_images/0.jpeg) ![<bobknight> 1](https://huggingface.co/sd-concepts-library/thegeneral/resolve/main/concept_images/3.jpeg) ![<bobknight> 2](https://huggingface.co/sd-concepts-library/thegeneral/resolve/main/concept_images/2.jpeg) ![<bobknight> 3](https://huggingface.co/sd-concepts-library/thegeneral/resolve/main/concept_images/1.jpeg)
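As a hedged complement to the notebooks linked above, the sketch below loads the concept into a `diffusers` pipeline with `load_textual_inversion` (available in recent `diffusers` releases). The base model, dtype, and prompt are assumptions, not part of the original card.

```python
# Minimal sketch: load the <bobknight> embedding into a Stable Diffusion pipeline.
# Base model and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/thegeneral")

# The learned token <bobknight> can now be used like any ordinary word.
image = pipe("a portrait photo of <bobknight> giving a speech").images[0]
image.save("bobknight.png")
```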
ce1a854d5de901707662ab9f0d67475b
Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA
Helsinki-NLP
marian
10
7
transformers
1
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
1,108
false
### opus-mt-SCANDINAVIA-SCANDINAVIA * source languages: da,fo,is,no,nb,nn,sv * target languages: da,fo,is,no,nb,nn,sv * OPUS readme: [da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.da.sv | 69.2 | 0.811 |
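Because the target side is multilingual, the `>>id<<` token has to be prepended to the source sentence. The sketch below shows one way to do this with the `transformers` Marian classes; the example sentence and the chosen target ID (`>>is<<`) are illustrative, and the resulting translation is not taken from the card.

```python
# Minimal sketch: Swedish -> Icelandic with this multilingual Scandinavian model.
# The ">>is<<" prefix selects the target language.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>is<< Det här är bara ett exempel."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```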
deac144913fe1791436d28a56d7514e6
Helsinki-NLP/opus-mt-de-no
Helsinki-NLP
marian
11
128
transformers
0
translation
true
true
false
apache-2.0
['de', 'no']
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
2,112
false
### deu-nor * source group: German * target group: Norwegian * OPUS readme: [deu-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-nor/README.md) * model: transformer-align * source language(s): deu * target language(s): nno nob * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.deu.nor | 33.2 | 0.554 | ### System Info: - hf_name: deu-nor - source_languages: deu - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['de', 'no'] - src_constituents: {'deu'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-nor/opus-2020-06-17.test.txt - src_alpha3: deu - tgt_alpha3: nor - short_pair: de-no - chrF2_score: 0.5539999999999999 - bleu: 33.2 - brevity_penalty: 0.956 - ref_len: 32928.0 - src_name: German - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: de - tgt_alpha2: no - prefer_old: False - long_pair: deu-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
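The same `>>id<<` convention applies here, since the target side covers both Bokmål (`nob`) and Nynorsk (`nno`). A small sketch using the high-level `translation` pipeline follows; the German sentence is illustrative and the printed output is not claimed to be the model's exact translation.

```python
# Minimal sketch: German -> Norwegian Bokmål via the translation pipeline.
# The ">>nob<<" prefix selects Bokmål; use ">>nno<<" for Nynorsk.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-no")
result = translator(">>nob<< Wo ist die nächste Bushaltestelle?")
print(result[0]["translation_text"])
```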
f28ce616dcdb7da47cfc462789ba1a77
google/t5-efficient-base-nh8
google
t5
12
48
transformers
0
text2text-generation
true
true
true
apache-2.0
['en']
['c4']
null
0
0
0
0
0
0
0
['deep-narrow']
false
true
true
6,248
false
# T5-Efficient-BASE-NH8 (Deep-Narrow version) T5-Efficient-BASE-NH8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. ## Details model architecture This model checkpoint - **t5-efficient-base-nh8** - is of model type **Base** with the following variations: - **nh** is **8** It has **194.62** million parameters and thus requires *ca.* **778.48 MB** of memory in full precision (*fp32*) or **389.24 MB** of memory in half precision (*fp16* or *bf16*). 
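The quoted parameter count can be sanity-checked after loading the checkpoint. The short sketch below is illustrative only; it simply sums the parameters of the loaded `transformers` model, and the exact number may differ slightly from the rounded figure above.

```python
# Minimal sketch: load the pretrained-only checkpoint and count its parameters.
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base-nh8")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")  # expected to be roughly 194-195M
```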
A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|

whereas the following abbreviations are used:

| Abbreviation | Definition |
| ----| ---- |
| nl | Number of transformer blocks (depth) |
| dm | Dimension of embedding vector (output vector of transformers block) |
| kv | Dimension of key/value projection matrix |
| nh | Number of attention heads |
| ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
| el | Number of transformer blocks in the encoder (encoder depth) |
| dl | Number of transformer blocks in the decoder (decoder depth) |
| sh | Signifies that attention heads are shared |
| skv | Signifies that key-values projection matrices are tied |

If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*.

## Pre-Training

The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective.

## Fine-Tuning

**Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks.
You can follow one of the following examples on how to fine-tune the model:

*PyTorch*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization)
- [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.

*TensorFlow*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.

*JAX/Flax*:
- [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization)
- [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model.

## Downstream Performance

TODO: Add table if available

## Computational Complexity

TODO: Add table if available

## More information

We strongly recommend that the reader go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint.
As explained in the following [issue](https://github.com/google-research/google-research/issues/986#issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers, as they are probably of limited practical usage and lack a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported in the future.
5f255325c0ee48ea39cbb382c2ba9377
sd-concepts-library/yoshi
sd-concepts-library
null
10
0
null
1
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,078
false
### Yoshi on Stable Diffusion This is the `<yoshi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<yoshi> 0](https://huggingface.co/sd-concepts-library/yoshi/resolve/main/concept_images/2.jpeg) ![<yoshi> 1](https://huggingface.co/sd-concepts-library/yoshi/resolve/main/concept_images/3.jpeg) ![<yoshi> 2](https://huggingface.co/sd-concepts-library/yoshi/resolve/main/concept_images/1.jpeg) ![<yoshi> 3](https://huggingface.co/sd-concepts-library/yoshi/resolve/main/concept_images/4.jpeg) ![<yoshi> 4](https://huggingface.co/sd-concepts-library/yoshi/resolve/main/concept_images/0.jpeg)
f89e4f0ef03ead0a2a55093dcaf09252
gabrielsgaspar/bert-base-uncased-emotions-augmented
gabrielsgaspar
bert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,752
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-emotions-augmented This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9815 - Accuracy: 0.7539 - F1: 0.7506 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8475 | 1.0 | 819 | 0.6336 | 0.7655 | 0.7651 | | 0.5594 | 2.0 | 1638 | 0.6109 | 0.7695 | 0.7680 | | 0.4596 | 3.0 | 2457 | 0.6528 | 0.7601 | 0.7556 | | 0.3663 | 4.0 | 3276 | 0.6992 | 0.7631 | 0.7612 | | 0.2809 | 5.0 | 4095 | 0.7773 | 0.7571 | 0.7542 | | 0.2142 | 6.0 | 4914 | 0.8879 | 0.7541 | 0.7504 | | 0.1671 | 7.0 | 5733 | 0.9476 | 0.7552 | 0.7517 | | 0.1416 | 8.0 | 6552 | 0.9815 | 0.7539 | 0.7506 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
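For completeness, a minimal inference sketch is given below. The card does not document the emotion label names or the training dataset, so the example sentence is illustrative and the returned labels may need to be mapped to emotion names separately.

```python
# Minimal sketch: run the fine-tuned classifier with the text-classification pipeline.
# Label names are not documented in the card, so outputs may appear as LABEL_0, LABEL_1, ...
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gabrielsgaspar/bert-base-uncased-emotions-augmented",
)
print(classifier("I can't believe how wonderful today turned out to be!"))
```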
73a14b9a56d011bdc16c4d09c3ab7e58