modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
willcai/wav2vec2_common_voice_accents_scotland | willcai | 2022-03-23T11:15:11Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-22T19:55:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_scotland
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_scotland
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2752
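A minimal inference sketch using the `transformers` ASR pipeline (the audio file name is a placeholder; wav2vec2 models expect 16 kHz mono audio):
```python
from transformers import pipeline

# load the fine-tuned checkpoint for automatic speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="willcai/wav2vec2_common_voice_accents_scotland",
)

# "sample.wav" is a placeholder for a 16 kHz mono recording
print(asr("sample.wav")["text"])
```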
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7171 | 1.28 | 400 | 1.1618 |
| 0.4391 | 2.56 | 800 | 0.2422 |
| 0.2259 | 3.83 | 1200 | 0.2071 |
| 0.1813 | 5.11 | 1600 | 0.2126 |
| 0.1531 | 6.39 | 2000 | 0.2010 |
| 0.1383 | 7.67 | 2400 | 0.2004 |
| 0.13 | 8.95 | 2800 | 0.2069 |
| 0.1193 | 10.22 | 3200 | 0.2081 |
| 0.1124 | 11.5 | 3600 | 0.2051 |
| 0.1023 | 12.78 | 4000 | 0.2175 |
| 0.097 | 14.06 | 4400 | 0.2261 |
| 0.0863 | 15.34 | 4800 | 0.2301 |
| 0.0823 | 16.61 | 5200 | 0.2334 |
| 0.079 | 17.89 | 5600 | 0.2252 |
| 0.0743 | 19.17 | 6000 | 0.2393 |
| 0.0696 | 20.45 | 6400 | 0.2481 |
| 0.0644 | 21.73 | 6800 | 0.2416 |
| 0.064 | 23.0 | 7200 | 0.2449 |
| 0.0584 | 24.28 | 7600 | 0.2660 |
| 0.0544 | 25.56 | 8000 | 0.2630 |
| 0.0523 | 26.84 | 8400 | 0.2677 |
| 0.0494 | 28.12 | 8800 | 0.2730 |
| 0.0462 | 29.39 | 9200 | 0.2752 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
willcai/wav2vec2_common_voice_accents_us | willcai | 2022-03-23T11:03:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-22T18:14:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_us
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_us
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.549 | 1.28 | 400 | 0.8521 |
| 0.4066 | 2.56 | 800 | 0.2407 |
| 0.2262 | 3.83 | 1200 | 0.2070 |
| 0.1828 | 5.11 | 1600 | 0.2134 |
| 0.1565 | 6.39 | 2000 | 0.2060 |
| 0.1448 | 7.67 | 2400 | 0.2100 |
| 0.1333 | 8.95 | 2800 | 0.2036 |
| 0.121 | 10.22 | 3200 | 0.2192 |
| 0.1146 | 11.5 | 3600 | 0.2154 |
| 0.1108 | 12.78 | 4000 | 0.2223 |
| 0.1017 | 14.06 | 4400 | 0.2331 |
| 0.094 | 15.34 | 4800 | 0.2257 |
| 0.0896 | 16.61 | 5200 | 0.2229 |
| 0.0825 | 17.89 | 5600 | 0.2229 |
| 0.0777 | 19.17 | 6000 | 0.2417 |
| 0.0719 | 20.45 | 6400 | 0.2433 |
| 0.0659 | 21.73 | 6800 | 0.2447 |
| 0.0651 | 23.0 | 7200 | 0.2446 |
| 0.0587 | 24.28 | 7600 | 0.2542 |
| 0.056 | 25.56 | 8000 | 0.2587 |
| 0.0521 | 26.84 | 8400 | 0.2640 |
| 0.0494 | 28.12 | 8800 | 0.2753 |
| 0.0465 | 29.39 | 9200 | 0.2722 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Newt007/multi-class-attacks | Newt007 | 2022-03-23T10:30:59Z | 0 | 0 | null | [
"license:afl-3.0",
"region:us"
] | null | 2022-03-23T10:28:31Z | ---
license: afl-3.0
---
Language: Python 3.7

Libraries:
- keras==2.0.2
- tensorflow==2.4.1 |
Daniele/italian-spellchecker | Daniele | 2022-03-23T10:19:19Z | 35 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"seq2seq",
"it",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-21T14:33:20Z | ---
language:
- it
tags:
- seq2seq
license: mit
---
# Italian Contextual Spellchecker
The model is a fine-tuned version of [IT5](https://huggingface.co/models?search=it5)[1], adapted specifically to perform spellchecking as a sequence-to-sequence task.
### USAGE
The input sequence must have the structure <b>seq: <i>your text</i>.</b> Omitting the <i>seq</i> token at the beginning or the final punctuation mark may degrade performance.
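A minimal usage sketch with the `transformers` seq2seq API (the example sentence and generation settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Daniele/italian-spellchecker")
model = AutoModelForSeq2SeqLM.from_pretrained("Daniele/italian-spellchecker")

# the input must start with "seq: " and end with a punctuation mark;
# "ezempio" is a deliberate misspelling of "esempio"
text = "seq: Questo è un ezempio di testo."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|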
pyannote/TestModelForContinuousIntegration | pyannote | 2022-03-23T09:24:42Z | 5 | 0 | pyannote-audio | [
"pyannote-audio",
"pytorch",
"tensorboard",
"pyannote",
"pyannote-audio-model",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-model
license: mit
inference: false
---
## Dummy model used for continuous integration purposes
```bash
$ pyannote-audio-train protocol=Debug.SpeakerDiarization.Debug \
task=VoiceActivityDetection \
task.duration=2. \
model=DebugSegmentation \
trainer.max_epochs=10
```
|
Alvenir/bert-punct-restoration-da | Alvenir | 2022-03-23T09:05:15Z | 17,347 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"punctuation restoration",
"da",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-22T17:33:25Z | ---
language: da
tags:
- bert
- punctuation restoration
license: apache-2.0
datasets:
- custom
---
# Bert Punctuation Restoration Danish
This model performs punctuation restoration for Danish. The method used is token classification, similar to how NER models
are trained.
## Model description
TODO
### How to use
The model requires some additional inference code, hence we created an awesome little pip package for it.
The inference code is based on the `TokenClassificationPipeline` pipeline from huggingface.
First, install the little package by running
```bash
pip install punctfix
```
Then restoration is as simple as the following snippet:
```python
>>> from punctfix import PunctFixer
>>> fixer = PunctFixer(language="da")
>>> example_text = "mit navn det er rasmus og jeg kommer fra firmaet alvenir det er mig som har trænet denne lækre model"
>>> print(fixer.punctuate(example_text))
'Mit navn det er Rasmus og jeg kommer fra firmaet Alvenir. Det er mig som har trænet denne lækre model.'
>>> example_text = "en dag bliver vi sku glade for at vi nu kan sætte punktummer og kommaer i en sætning det fungerer da meget godt ikke"
>>> print(fixer.punctuate(example_text))
'En dag bliver vi sku glade for, at vi nu kan sætte punktummer og kommaer i en sætning. Det fungerer da meget godt, ikke?'
```
## Training data
To Do
## Training procedure
To Do
### Preprocessing
TODO
## Evaluation results
TODO
|
bigmorning/my-gpt-model-3 | bigmorning | 2022-03-23T08:22:22Z | 3 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-23T05:52:35Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-gpt-model-3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-gpt-model-3
This model is a fine-tuned version of [bigmorning/my-gpt-model](https://huggingface.co/bigmorning/my-gpt-model) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.1163
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 5.1163 | 0 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
krishnayogik/distilbert-base-uncased-finetuned-emotion | krishnayogik | 2022-03-23T07:27:09Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-23T07:14:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9247696388302888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
- Accuracy: 0.9245
- F1: 0.9248
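A minimal inference sketch using the `transformers` text-classification pipeline (the example input is illustrative):
```python
from transformers import pipeline

# load the fine-tuned checkpoint for emotion classification
classifier = pipeline(
    "text-classification",
    model="krishnayogik/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy you called!"))
```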
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8359 | 1.0 | 250 | 0.3316 | 0.901 | 0.8967 |
| 0.2584 | 2.0 | 500 | 0.2258 | 0.9245 | 0.9248 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
aaraki/wav2vec2-base-finetuned-ks | aaraki | 2022-03-23T05:55:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-23T04:52:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9949
- Accuracy: 0.6958
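A minimal inference sketch using the `transformers` audio-classification pipeline (the audio file name is a placeholder):
```python
from transformers import pipeline

# load the fine-tuned checkpoint for keyword spotting
clf = pipeline(
    "audio-classification",
    model="aaraki/wav2vec2-base-finetuned-ks",
)

# "sample.wav" is a placeholder for a 16 kHz mono recording
print(clf("sample.wav"))
```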
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0231 | 1.0 | 399 | 0.9949 | 0.6958 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
mimicheng/codeparrot-ds-sample | mimicheng | 2022-03-23T05:30:38Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-22T22:13:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6003
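A minimal generation sketch using the `transformers` text-generation pipeline (the prompt and generation settings are illustrative):
```python
from transformers import pipeline

# load the fine-tuned checkpoint for text generation
gen = pipeline("text-generation", model="mimicheng/codeparrot-ds-sample")

# the prompt is illustrative
print(gen("def fibonacci(n):", max_new_tokens=40)[0]["generated_text"])
```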
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5057 | 0.93 | 5000 | 1.6003 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Axon/resnet50-v1 | Axon | 2022-03-22T23:54:44Z | 0 | 0 | null | [
"Axon",
"Elixir",
"dataset:ImageNet",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04Z | ---
license: apache-2.0
tags:
- Axon
- Elixir
datasets:
- ImageNet
---
# ResNet
This ResNet50 model was translated from the ONNX ResNetv1 model found
at https://github.com/onnx/models/tree/main/vision/classification/resnet into Axon using [AxonOnnx](https://github.com/elixir-nx/axon_onnx).
The following description is copied from the relevant description at the ONNX repository.
## Use cases
These ResNet models perform image classification - they take images as input and classify the major object in the image into a set of pre-defined classes. They are trained on ImageNet dataset which contains images from 1000 classes. ResNet models provide very high accuracies with affordable model sizes. They are ideal for cases when high accuracy of classification is required.
ImageNet trained models are often used as the base layers for a transfer learning approach to training a model in your domain. Transfer learning can significantly reduce the processing necessary to train an accurate model in your domain. This model was published here with the expectation that it would be useful to the Elixir community for transfer learning and other similar approaches.
## Description
Deeper neural networks are more difficult to train. A residual learning framework eases the training of networks that are substantially deeper. The research explicitly reformulates the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It also provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset the residual nets were evaluated with a depth of up to 152 layers, 8× deeper than VGG nets but still having lower complexity.
## Model
ResNet models consist of residual blocks and were introduced to counter the deterioration in accuracy that comes with adding more layers, caused by the network failing to learn the initial layers.
ResNet v1 uses post-activation for the residual blocks.
### Input
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size, and H and W are expected to be at least 224.
Inference was done using a JPEG image.
### Preprocessing
The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. This transformation should preferably happen during preprocessing.
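An illustrative sketch of this arithmetic in Python with NumPy and Pillow (the released model is consumed from Elixir/Axon, so treat this only as a reference for the math):
```python
import numpy as np
from PIL import Image

def preprocess(path):
    # resize to 224x224 (a full pipeline would resize the short side to 256
    # and center-crop to 224; simplified here) and scale pixels to [0, 1]
    img = Image.open(path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    # per-channel normalization with the ImageNet mean and std
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std
    # reorder to (N x 3 x H x W) with a batch of one
    return x.transpose(2, 0, 1)[None]
```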
### Output
The model outputs image scores for each of the 1000 classes of ImageNet.
### Postprocessing
The post-processing involves calculating the softmax probability scores for each class. You can also sort them to report the most probable classes. Check [imagenet_postprocess.py](../imagenet_postprocess.py) for code.
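A sketch of that softmax step in NumPy (the top-5 selection is illustrative):
```python
import numpy as np

def top5(logits):
    # numerically stable softmax over the 1000 ImageNet class scores
    e = np.exp(logits - logits.max())
    probs = e / e.sum()
    # indices and scores of the five most probable classes, highest first
    order = np.argsort(probs)[::-1][:5]
    return order, probs[order]
```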
## Dataset
Dataset used for train and validation: [ImageNet (ILSVRC2012)](http://www.image-net.org/challenges/LSVRC/2012/). Check [imagenet_prep](../imagenet_prep.md) for guidelines on preparing the dataset.
## References
* **ResNetv1**
[Deep residual learning for image recognition](https://arxiv.org/abs/1512.03385)
He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778. 2016.
* **ONNX source model**
[onnx/models vision/classification/resnet resnet50-v1-7.onnx](https://github.com/onnx/models/tree/main/vision/classification/resnet/README)
|
CogComp/roberta-temporal-predictor | CogComp | 2022-03-22T20:15:03Z | 15 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"arxiv:2202.00436",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | ---
license: mit
widget:
- text: "The man turned on the faucet <mask> water flows out."
- text: "The woman received her pension <mask> she retired."
---
# roberta-temporal-predictor
A RoBERTa-base model that is fine-tuned on the [The New York Times Annotated Corpus](https://catalog.ldc.upenn.edu/LDC2008T19)
to predict temporal precedence of two events. This is used as the "temporality prediction" component
in our ROCK framework for reasoning about commonsense causality. See our [paper](https://arxiv.org/abs/2202.00436) for more details.
# Usage
You can directly use this model for filling-mask tasks, as shown in the example widget.
However, for better temporal inference, it is recommended to symmetrize the outputs as
$$
P(E_1 \prec E_2) = \frac{1}{2} (f(E_1,E_2) + f(E_2,E_1))
$$
where ``f(E_1,E_2)`` denotes the predicted probability for ``E_1`` to occur preceding ``E_2``.
For simplicity, we implement the following TempPredictor class that incorporates this symmetrization automatically.
Below is an example usage for the ``TempPredictor`` class:
```python
from transformers import (RobertaForMaskedLM, RobertaTokenizer)
from src.temp_predictor import TempPredictor
TORCH_DEV = "cuda:0" # change as needed
tp_roberta_ft = TempPredictor(
model=RobertaForMaskedLM.from_pretrained("CogComp/roberta-temporal-predictor"),
tokenizer=RobertaTokenizer.from_pretrained("CogComp/roberta-temporal-predictor"),
device=TORCH_DEV
)
E1 = "The man turned on the faucet."
E2 = "Water flows out."
t12 = tp_roberta_ft(E1, E2, top_k=5)
print(f"P('{E1}' before '{E2}'): {t12}")
```
# BibTeX entry and citation info
```bib
@misc{zhang2022causal,
title={Causal Inference Principles for Reasoning about Commonsense Causality},
author={Jiayao Zhang and Hongming Zhang and Dan Roth and Weijie J. Su},
year={2022},
eprint={2202.00436},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
blckwdw61/sysformver1 | blckwdw61 | 2022-03-22T19:46:14Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-22T18:35:28Z | # CES BERT sysform model
Fine-tuned BERT cased model |
msamogh/autonlp-cai-out-of-scope-649919116 | msamogh | 2022-03-22T15:27:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-19T21:40:42Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 2.438401649319185
---
# What do the class labels mean?
- 0: out of scope
- 1: in scope
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919116
- CO2 Emissions (in grams): 2.438401649319185
## Validation Metrics
- Loss: 0.5314930081367493
- Accuracy: 0.7526881720430108
- Precision: 0.8490566037735849
- Recall: 0.75
- AUC: 0.8515151515151514
- F1: 0.7964601769911505
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919116
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919116", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
speeqo/bert-restore-punctuation | speeqo | 2022-03-22T15:01:06Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"punctuation",
"en",
"dataset:yelp_polarity",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-22T14:57:22Z | ---
language:
- en
tags:
- punctuation
license: mit
datasets:
- yelp_polarity
metrics:
- f1
---
# ✨ bert-restore-punctuation
This is a bert-base-uncased model fine-tuned for punctuation restoration on [Yelp Reviews](https://www.tensorflow.org/datasets/catalog/yelp_polarity_reviews).
The model predicts the punctuation and upper-casing of plain, lower-cased text. An example use case is ASR output, or other cases where text has lost punctuation.
This model is intended for direct use as a punctuation restoration model for the general English language. Alternatively, you can use this for further fine-tuning on domain-specific texts for punctuation restoration tasks.
The model restores the following punctuation marks: **[! ? . , - : ; ' ]**
The model also restores the upper-casing of words.
-----------------------------------------------
## 🚋 Usage
**Below is a quick way to get up and running with the model.**
1. First, install the package.
```bash
pip install rpunct
```
2. Sample python code.
```python
from rpunct import RestorePuncts
# The default language is 'english'
rpunct = RestorePuncts()
rpunct.punctuate("""in 2018 cornell researchers built a high-powered detector that in combination with an algorithm-driven process called ptychography set a world record
by tripling the resolution of a state-of-the-art electron microscope as successful as it was that approach had a weakness it only worked with ultrathin samples that were
a few atoms thick anything thicker would cause the electrons to scatter in ways that could not be disentangled now a team again led by david muller the samuel b eckert
professor of engineering has bested its own record by a factor of two with an electron microscope pixel array detector empad that incorporates even more sophisticated
3d reconstruction algorithms the resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves""")
# Outputs the following:
# In 2018, Cornell researchers built a high-powered detector that, in combination with an algorithm-driven process called Ptychography, set a world record by tripling the
# resolution of a state-of-the-art electron microscope. As successful as it was, that approach had a weakness. It only worked with ultrathin samples that were a few atoms
# thick. Anything thicker would cause the electrons to scatter in ways that could not be disentangled. Now, a team again led by David Muller, the Samuel B.
# Eckert Professor of Engineering, has bested its own record by a factor of two with an Electron microscope pixel array detector empad that incorporates even more
# sophisticated 3d reconstruction algorithms. The resolution is so fine-tuned the only blurring that remains is the thermal jiggling of the atoms themselves.
```
**This model works on arbitrarily large text in English language and uses GPU if available.**
-----------------------------------------------
## 📡 Training data
Here is the number of product reviews we used for finetuning the model:
| Language | Number of text samples|
| -------- | ----------------- |
| English | 560,000 |
We found the best convergence around _**3 epochs**_, which is what is presented here and available via download.
-----------------------------------------------
## 🎯 Accuracy
The fine-tuned model obtained the following accuracy on 45,990 held-out text samples:
| Accuracy | Overall F1 | Eval Support |
| -------- | ---------------------- | ------------------- |
| 91% | 90% | 45,990 |
Below is a breakdown of the performance of the model by each label:
| label | precision | recall | f1-score | support|
| --------- | -------------|-------- | ----------|--------|
| **!** | 0.45 | 0.17 | 0.24 | 424
| **!+Upper** | 0.43 | 0.34 | 0.38 | 98
| **'** | 0.60 | 0.27 | 0.37 | 11
| **,** | 0.59 | 0.51 | 0.55 | 1522
| **,+Upper** | 0.52 | 0.50 | 0.51 | 239
| **-** | 0.00 | 0.00 | 0.00 | 18
| **.** | 0.69 | 0.84 | 0.75 | 2488
| **.+Upper** | 0.65 | 0.52 | 0.57 | 274
| **:** | 0.52 | 0.31 | 0.39 | 39
| **:+Upper** | 0.36 | 0.62 | 0.45 | 16
| **;** | 0.00 | 0.00 | 0.00 | 17
| **?** | 0.54 | 0.48 | 0.51 | 46
| **?+Upper** | 0.40 | 0.50 | 0.44 | 4
| **none** | 0.96 | 0.96 | 0.96 |35352
| **Upper** | 0.84 | 0.82 | 0.83 | 5442
-----------------------------------------------
## ☕ Contact
Contact [Daulet Nurmanbetov]([email protected]) for questions, feedback and/or requests for similar models.
----------------------------------------------- |
espnet/YushiUeda_swbd_sentiment_asr_train_asr_conformer_wav2vec2_2 | espnet | 2022-03-22T14:14:26Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:swbd_sentiment",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-22T14:10:53Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- swbd_sentiment
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/YushiUeda_swbd_sentiment_asr_train_asr_conformer_wav2vec2_2`
This model was trained by YushiUeda using the swbd_sentiment recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 17089cb2cf5f1275132163f6327defbcc1b1bc1b
pip install -e .
cd egs2/swbd_sentiment/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_swbd_sentiment_asr_train_asr_conformer_wav2vec2_2
```
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_wav2vec2_2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_wav2vec2_2_raw_en_word
ngpu: 1
seed: 2022
num_workers: 2
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 43183
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 100
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 6000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-05
scheduler: warmuplr
scheduler_conf:
warmup_steps: 5000
token_list:
- <blank>
- <unk>
- i
- and
- the
- you
- that
- it
- a
- Neutral
- to
- uh
- '''s'
- of
- know
- Positive
- they
- in
- we
- '''t'
- have
- but
- so
- was
- like
- Negative
- yeah
- is
- just
- um
- well
- do
- for
- think
- don
- there
- or
- 'on'
- '''re'
- my
- what
- really
- be
- with
- not
- if
- are
- one
- he
- '''ve'
- because
- '''m'
- about
- all
- get
- can
- had
- out
- at
- them
- when
- this
- as
- oh
- lot
- up
- people
- some
- then
- would
- go
- right
- mean
- now
- time
- kind
- got
- going
- good
- she
- things
- more
- were
- from
- something
- been
- 'no'
- see
- me
- too
- an
- your
- much
- little
- guess
- how
- where
- our
- very
- here
- their
- thing
- two
- '''ll'
- other
- did
- years
- work
- even
- has
- any
- way
- probably
- those
- could
- say
- real
- back
- '''d'
- year
- down
- home
- than
- want
- didn
- into
- pretty
- okay
- who
- take
- huh
- school
- said
- make
- over
- kids
- never
- always
- put
- by
- her
- stuff
- went
- doing
- three
- these
- 'yes'
- which
- around
- only
- big
- maybe
- 'off'
- anything
- day
- t
- sure
- actually
- come
- money
- him
- different
- everything
- still
- used
- many
- five
- will
- sort
- nice
- us
- last
- his
- thought
- every
- most
- getting
- first
- feel
- bit
- need
- children
- same
- course
- also
- new
- care
- family
- hum
- long
- through
- before
- use
- done
- should
- house
- old
- let
- does
- car
- being
- problem
- doesn
- four
- seems
- though
- pay
- look
- whole
- great
- husband
- haven
- try
- live
- trying
- ever
- why
- read
- better
- find
- far
- keep
- ago
- sometimes
- watch
- interesting
- quite
- area
- hard
- talking
- else
- another
- part
- bad
- having
- twenty
- whatever
- place
- couple
- usually
- 'true'
- high
- texas
- seen
- fact
- s
- enough
- after
- own
- college
- while
- country
- hundred
- somebody
- few
- either
- times
- week
- away
- gonna
- type
- job
- six
- dollars
- tell
- might
- remember
- again
- came
- give
- started
- start
- ten
- made
- play
- able
- dallas
- enjoy
- working
- once
- c
- someone
- life
- least
- v
- everybody
- since
- fun
- both
- talk
- wouldn
- ones
- news
- anyway
- wasn
- person
- heard
- believe
- am
- th
- buy
- may
- point
- call
- night
- y
- almost
- bye
- isn
- system
- wanted
- called
- took
- state
- wife
- child
- half
- women
- goes
- next
- yet
- especially
- love
- looking
- parents
- gone
- such
- gets
- understand
- together
- movie
- until
- w
- days
- end
- saying
- idea
- saw
- music
- mother
- thirty
- couldn
- makes
- stay
- change
- m
- basically
- wonderful
- problems
- guy
- worked
- spend
- help
- lived
- credit
- whether
- seem
- eight
- n
- best
- world
- run
- hear
- bought
- young
- each
- months
- seven
- places
- supposed
- city
- matter
- coming
- exactly
- d
- small
- summer
- comes
- certain
- company
- less
- thinking
- won
- during
- b
- thousand
- agree
- show
- daughter
- sounds
- myself
- funny
- water
- o
- month
- dog
- fifty
- paper
- gotten
- found
- taking
- today
- certainly
- boy
- friends
- number
- mine
- program
- food
- son
- p
- older
- name
- air
- movies
- government
- moved
- schools
- outside
- deal
- close
- tried
- paying
- eat
- drive
- hours
- nine
- rather
- cars
- crime
- important
- war
- living
- between
- business
- anymore
- reason
- weeks
- public
- vote
- situation
- recently
- nothing
- easy
- sit
- pick
- taxes
- turn
- full
- percent
- making
- friend
- book
- happen
- minutes
- middle
- town
- watching
- paid
- eighty
- tax
- several
- listen
- set
- talked
- north
- takes
- reading
- definitely
- law
- jury
- kinds
- married
- u
- enjoyed
- says
- without
- works
- learn
- everyone
- drug
- major
- side
- cost
- room
- education
- morning
- computer
- involved
- mostly
- aren
- health
- l
- anybody
- along
- amount
- man
- against
- weather
- often
- under
- age
- forty
- insurance
- favorite
- hope
- card
- must
- happened
- lives
- left
- drugs
- expensive
- american
- miles
- yourself
- hour
- already
- plano
- cards
- decided
- large
- difference
- ahead
- fifteen
- camping
- told
- although
- second
- r
- woman
- twelve
- knew
- guys
- cut
- neat
- fish
- mind
- wrong
- unless
- sense
- instead
- leave
- wear
- class
- hand
- top
- walk
- bring
- past
- f
- running
- e
- absolutely
- weekend
- line
- books
- question
- team
- wish
- exercise
- interested
- areas
- baby
- states
- liked
- somewhere
- father
- experience
- phone
- case
- men
- lots
- cat
- society
- taken
- changed
- game
- worth
- seventy
- gun
- h
- wonder
- hit
- group
- service
- kept
- shows
- gosh
- early
- interest
- trouble
- control
- themselves
- ha
- finally
- using
- god
- dad
- cook
- hot
- difficult
- nursing
- front
- terms
- growing
- late
- kid
- looked
- felt
- rain
- teach
- tend
- realize
- weren
- sixty
- except
- needs
- social
- budget
- figure
- recycling
- lake
- wanna
- looks
- wh
- forth
- mom
- concerned
- south
- grew
- topic
- ways
- death
- christmas
- regular
- wait
- imagine
- television
- east
- trees
- check
- fairly
- hate
- general
- catch
- dinner
- built
- ready
- fine
- sister
- story
- playing
- starting
- homes
- office
- awful
- radio
- needed
- companies
- changes
- programs
- fishing
- nineteen
- ask
- tough
- cans
- easier
- yard
- cold
- ought
- street
- later
- door
- wants
- students
- national
- space
- across
- brother
- free
- local
- tha
- level
- happens
- sitting
- newspaper
- move
- countries
- store
- subject
- girl
- beautiful
- turned
- soon
- income
- putting
- church
- university
- dress
- information
- lately
- degree
- york
- vacation
- pollution
- totally
- winter
- america
- ah
- ours
- cats
- spent
- happy
- played
- consider
- cases
- spring
- california
- longer
- teacher
- oil
- send
- lost
- sports
- garden
- teachers
- families
- particular
- buying
- amazing
- likes
- football
- united
- teaching
- hey
- benefits
- brought
- gave
- party
- worry
- throw
- testing
- given
- bunch
- near
- nobody
- community
- driving
- open
- personal
- sell
- force
- chance
- wow
- test
- baseball
- within
- biggest
- quality
- building
- example
- seeing
- power
- afford
- support
- caught
- inside
- plan
- seemed
- ninety
- younger
- learned
- generation
- charge
- punishment
- rest
- dogs
- become
- clean
- short
- privacy
- g
- calls
- plus
- particularly
- decide
- terrible
- twice
- fall
- extra
- period
- choice
- hold
- ended
- hadn
- main
- guilty
- depends
- save
- excellent
- price
- strange
- feeling
- size
- trial
- military
- boys
- per
- bet
- judge
- parts
- noticed
- anywhere
- fan
- head
- center
- glad
- clothes
- rate
- stop
- eleven
- white
- stand
- suppose
- guns
- grade
- watched
- bigger
- scary
- issue
- special
- dollar
- green
- its
- jobs
- means
- black
- worse
- knows
- plastic
- low
- spending
- picked
- golf
- gas
- single
- neighborhood
- necessarily
- alone
- cooking
- newspapers
- pull
- fast
- completely
- road
- student
- crimes
- houses
- paint
- medical
- learning
- fair
- restaurant
- miss
- lawn
- giving
- washington
- doctor
- word
- killed
- recycle
- light
- cash
- visit
- familiar
- grass
- itself
- season
- chicken
- rid
- president
- stayed
- normally
- whenever
- machine
- graduate
- eighteen
- capital
- shouldn
- virginia
- private
- field
- magazines
- kill
- market
- apartment
- anyone
- waiting
- asked
- classes
- break
- crazy
- helps
- aware
- sunday
- hm
- speak
- term
- sound
- property
- sad
- comfortable
- waste
- channel
- evening
- cover
- heavy
- carry
- everyday
- systems
- gives
- wa
- answer
- higher
- unfortunately
- minute
- future
- serious
- snow
- available
- smaller
- handle
- ground
- behind
- huge
- west
- plant
- allowed
- wind
- peace
- costs
- cause
- serve
- rent
- lucky
- gee
- build
- english
- telling
- lose
- individual
- gardening
- busy
- order
- raised
- basic
- basis
- rock
- training
- happening
- opinion
- heart
- follow
- mainly
- history
- walking
- ye
- average
- towards
- houston
- games
- travel
- decision
- environment
- respect
- list
- hopefully
- grow
- others
- sorry
- san
- taught
- weight
- bags
- hurt
- finding
- attention
- hasn
- computers
- raise
- aerobics
- quick
- shot
- personally
- bedroom
- similar
- loved
- sixties
- park
- helping
- feet
- industry
- write
- generally
- weird
- record
- benefit
- pool
- mail
- pennsylvania
- glass
- notice
- calling
- process
- land
- originally
- richardson
- cities
- afraid
- utah
- entire
- colorado
- ball
- boat
- grandmother
- possible
- folks
- helped
- strong
- keeping
- bill
- keeps
- thank
- camp
- third
- types
- eventually
- obviously
- yesterday
- apparently
- instance
- pet
- central
- club
- flowers
- trash
- trip
- classical
- europe
- changing
- perhaps
- self
- color
- foot
- video
- based
- station
- saturday
- french
- normal
- fire
- '''clock'
- issues
- starts
- piece
- hobby
- quit
- prison
- parent
- oldest
- bush
- coverage
- police
- forget
- girls
- occasionally
- bank
- shape
- beginning
- moving
- sent
- vietnam
- nights
- current
- salary
- himself
- stories
- mountains
- aluminum
- luck
- invasion
- tape
- florida
- bed
- laws
- research
- mess
- hoping
- players
- tired
- thirteen
- magazine
- expect
- sleep
- words
- language
- push
- position
- hobbies
- background
- plants
- inches
- easily
- stopped
- murder
- shoot
- maryland
- hardly
- bills
- attitude
- pro
- civil
- sometime
- human
- wanting
- goodness
- security
- doctors
- kitchen
- somehow
- penalty
- county
- eating
- simply
- die
- bike
- reunion
- project
- typical
- j
- however
- total
- mexico
- base
- economy
- restaurants
- responsibility
- jail
- lower
- died
- tested
- safe
- voting
- elderly
- sh
- listening
- sudden
- numbers
- career
- stick
- born
- wondering
- poor
- painting
- active
- professional
- supposedly
- li
- lady
- reasons
- cool
- sixteen
- yep
- excuse
- horrible
- political
- red
- science
- federal
- besides
- shop
- opportunity
- ride
- planning
- degrees
- writing
- mexican
- engineering
- surprised
- bother
- share
- graduated
- account
- financial
- hands
- activities
- seventies
- step
- thanks
- bag
- role
- england
- limit
- willing
- hospital
- view
- band
- teams
- tonight
- groups
- advantage
- heat
- department
- turns
- tree
- telephone
- became
- brand
- criminal
- blue
- dry
- warm
- weekends
- grown
- stores
- rights
- garbage
- junior
- everywhere
- prices
- metric
- ran
- equipment
- till
- cross
- considered
- track
- moment
- figured
- americans
- met
- worst
- ridiculous
- grocery
- yours
- neighbor
- piano
- sold
- cowboys
- selling
- savings
- grandchildren
- nowadays
- add
- plays
- conversation
- lunch
- straight
- sentence
- floor
- dead
- fourteen
- meet
- ideas
- foods
- israel
- fix
- ourselves
- swimming
- upset
- sign
- sewing
- wood
- recipe
- van
- upon
- standard
- box
- win
- wall
- offer
- products
- otherwise
- pounds
- stations
- ex
- staying
- drop
- body
- carolina
- sales
- meal
- ice
- basketball
- mixed
- careful
- possibly
- sick
- farm
- retired
- compared
- western
- hearing
- finished
- separate
- mentioned
- soviet
- truck
- river
- defense
- oklahoma
- harder
- k
- re
- stuck
- cable
- trade
- favor
- positive
- related
- smoke
- effect
- various
- bottom
- awhile
- kindergarten
- beat
- court
- beach
- baltimore
- choose
- allow
- brown
- hang
- known
- sorts
- bathroom
- scared
- popular
- extremely
- politics
- hair
- policy
- wha
- saint
- covered
- ca
- sisters
- boston
- lakes
- forever
- fight
- downtown
- visa
- sauce
- garage
- lines
- suit
- whereas
- speech
- direction
- animals
- corps
- fit
- majority
- chinese
- dark
- painted
- milk
- concern
- dump
- nature
- safety
- shoes
- star
- questions
- switch
- clear
- trips
- management
- beyond
- depending
- sing
- iraq
- pressure
- cute
- runs
- windows
- salad
- board
- chicago
- population
- legal
- super
- '''all'
- puts
- slow
- pets
- forward
- thousands
- style
- debt
- becoming
- mo
- pop
- violent
- italian
- earlier
- cheap
- weapons
- coast
- austin
- traveling
- passed
- x
- speaking
- points
- prefer
- threat
- further
- master
- table
- broken
- random
- row
- northern
- simple
- appreciate
- district
- train
- continue
- rangers
- pittsburgh
- truth
- value
- quickly
- raising
- pass
- tennis
- flower
- bass
- engine
- becomes
- variety
- jeans
- exciting
- organization
- spread
- sat
- incredible
- somewhat
- loan
- engineer
- doubt
- southern
- monday
- backyard
- forced
- papers
- express
- saving
- owned
- recent
- toward
- fortunate
- liberal
- shopping
- rough
- brothers
- worried
- meals
- scouts
- vacations
- hunting
- lawyers
- wisconsin
- bucks
- act
- voice
- helpful
- wide
- retirement
- cannot
- picture
- picking
- suspect
- spare
- held
- election
- study
- report
- begin
- antonio
- drove
- opposed
- league
- ju
- se
- solution
- closer
- character
- finish
- knowing
- million
- common
- services
- thinks
- player
- violence
- wrote
- highway
- reasonable
- afternoon
- series
- developed
- effort
- christian
- fantastic
- saved
- seventeen
- barbecue
- sun
- conditioning
- ohio
- babies
- arlington
- hole
- visited
- rural
- herself
- knowledge
- kn
- plans
- instruments
- above
- border
- bible
- losing
- china
- events
- leaving
- written
- taste
- friday
- schedule
- anytime
- showed
- aspect
- range
- earth
- rice
- broke
- tent
- excited
- roles
- situations
- rooms
- spot
- laid
- duty
- bottles
- russia
- fighting
- pound
- letter
- convenient
- thi
- storm
- original
- wild
- showing
- percentage
- required
- grandparents
- extent
- economic
- voted
- canada
- trust
- healthy
- dealing
- face
- hired
- discuss
- larger
- pleased
- eye
- constantly
- perfect
- stupid
- square
- mix
- meat
- semester
- necessary
- mandatory
- burning
- fly
- mothers
- aids
- checked
- bedrooms
- fresh
- advice
- tomatoes
- treat
- sale
- ford
- japanese
- burn
- correct
- limited
- sleeping
- actual
- ends
- female
- hundreds
- feelings
- impact
- leaves
- section
- lay
- provide
- planted
- factor
- fill
- rich
- deep
- someplace
- drives
- circumstances
- honda
- jersey
- smoking
- feels
- fifties
- access
- doors
- pattern
- names
- payment
- facilities
- automatic
- boxes
- hi
- pictures
- versus
- ability
- edge
- politicians
- amazed
- boss
- union
- neighbors
- distance
- prime
- article
- mistake
- grades
- bread
- bothers
- jeez
- rented
- fourth
- alcohol
- gulf
- catfish
- license
- shooting
- touch
- asking
- realized
- require
- natural
- expenses
- purchase
- energy
- talks
- colors
- smart
- considering
- lessons
- tremendous
- participate
- ages
- missed
- quiet
- cheaper
- cents
- payments
- iron
- frightening
- forgot
- cheese
- daughters
- lawyer
- creek
- dental
- seat
- humid
- belt
- michigan
- extended
- flat
- driver
- foreign
- stays
- adults
- songs
- due
- wet
- double
- stress
- desert
- drink
- material
- equal
- deterrent
- machines
- eastern
- boring
- apart
- vegetables
- recipes
- unusual
- responsible
- hire
- garland
- ho
- dangerous
- loans
- colleges
- served
- prisons
- recycled
- cousins
- gorgeous
- member
- values
- fell
- fund
- metal
- wolves
- technology
- form
- enjoyable
- entertainment
- successful
- juries
- brings
- likely
- convicted
- appeal
- minimum
- opposite
- sport
- complete
- smell
- gallon
- lord
- employees
- centers
- alive
- blow
- meant
- cutting
- relatives
- bus
- commit
- none
- jus
- holding
- sand
- swing
- courses
- ski
- breed
- heck
- casual
- blood
- admit
- join
- fi
- draw
- upper
- bell
- youngest
- traffic
- protect
- tends
- medicine
- strongly
- committed
- opinions
- brick
- sides
- congress
- gasoline
- regularly
- plenty
- collect
- williams
- tickets
- perspective
- damage
- present
- bowl
- kidding
- employee
- tests
- loves
- round
- nations
- german
- roof
- august
- october
- disney
- pieces
- solid
- knock
- facts
- concept
- specific
- option
- jump
- stage
- block
- items
- murders
- breaks
- dirty
- shirts
- package
- pair
- pants
- data
- opera
- standing
- roll
- count
- action
- physical
- differently
- teenagers
- checks
- replace
- independent
- neither
- tuition
- eyes
- theater
- educational
- bins
- animal
- reports
- senior
- window
- curious
- de
- argument
- june
- date
- extreme
- innocent
- december
- germany
- salt
- et
- cetera
- tomorrow
- educated
- clubs
- bird
- sons
- journal
- visiting
- pulled
- letting
- tech
- fixed
- el
- shorts
- assume
- message
- primarily
- signs
- cuts
- john
- jazz
- balance
- un
- walked
- shirt
- dropped
- latin
- feed
- influence
- wondered
- adult
- aid
- inner
- elementary
- negative
- swim
- projects
- raleigh
- practically
- grand
- nearly
- turning
- cleaning
- fort
- recommend
- ate
- skiing
- rules
- yellow
- cruise
- impressed
- address
- labor
- dish
- highly
- repair
- prior
- fee
- terribly
- experiences
- lead
- accept
- mart
- immediately
- portion
- nicer
- seafood
- fault
- disease
- truly
- wearing
- male
- dances
- closed
- product
- expected
- caused
- tapes
- relaxing
- culture
- technical
- criminals
- sentencing
- summertime
- indiana
- killing
- encourage
- housing
- practice
- ups
- stitch
- compare
- sentenced
- freedom
- belong
- purpose
- throwing
- crafts
- pushing
- sweet
- decent
- sew
- campus
- carpet
- channels
- repairs
- preschool
- please
- minnesota
- activity
- naturally
- cooked
- quarterback
- wise
- satisfied
- cadillac
- streets
- businesses
- honest
- automatically
- routine
- coach
- arm
- driven
- dishes
- mornings
- contact
- mall
- deficit
- humidity
- location
- fortunately
- atmosphere
- corporate
- meeting
- improvement
- engineers
- network
- dressed
- mcdonald
- spanish
- catholic
- organizations
- hill
- model
- fifth
- elected
- articles
- expecting
- seriously
- volunteer
- handy
- riding
- threw
- ooh
- trend
- ba
- arts
- thursday
- uncle
- relationship
- members
- throughout
- buffalo
- solve
- pain
- auto
- cholesterol
- planned
- prepared
- presented
- staff
- choices
- march
- filled
- overall
- discipline
- justice
- weights
- mile
- unit
- bringing
- beef
- camped
- wal
- mow
- microwave
- weapon
- inch
- rule
- traveled
- subscribe
- proper
- di
- classic
- software
- pays
- complex
- missing
- shepherd
- pleasure
- st
- cream
- expense
- automobile
- hers
- orleans
- king
- philosophy
- singing
- eighties
- enjoys
- democratic
- significant
- chore
- ev
- combination
- patterns
- disappointed
- republican
- media
- pre
- sesame
- fixing
- seconds
- passing
- daily
- trek
- signed
- raining
- accident
- scale
- interests
- route
- ma
- whoever
- reach
- judges
- evidence
- european
- seasons
- supporting
- dirt
- loose
- france
- cancer
- planting
- iowa
- increase
- hospitals
- maintain
- odd
- pregnant
- math
- press
- agency
- shrimp
- beer
- key
- puppy
- sending
- hardest
- tr
- wi
- return
- corner
- suits
- dakota
- al
- immediate
- possibility
- hooked
- song
- stadium
- frame
- dig
- navy
- comedy
- annual
- fear
- island
- exercising
- fancy
- fat
- enjoying
- motivated
- design
- affect
- investment
- recall
- co
- luxury
- trim
- flexible
- international
- furniture
- potatoes
- wou
- fellow
- breakfast
- bath
- trucks
- uses
- onto
- beans
- apple
- alabama
- records
- musical
- tie
- setting
- offs
- michael
- bugs
- freeze
- anyhow
- properly
- underneath
- dining
- aside
- quarter
- kentucky
- skills
- parole
- parks
- nation
- complain
- wine
- summers
- fans
- golden
- unanimous
- shift
- warranty
- plastics
- rates
- rains
- charged
- lincoln
- decisions
- checking
- gray
- laugh
- hills
- commercial
- recognize
- quote
- receive
- recording
- illegal
- generations
- advance
- motor
- outdoor
- lab
- honestly
- rap
- oriented
- match
- art
- fiction
- manage
- flip
- appropriate
- strict
- mad
- mental
- hung
- adds
- mileage
- bicycle
- thoroughly
- elections
- deserve
- indian
- according
- latest
- bu
- ta
- vehicle
- holidays
- july
- junk
- emergency
- convinced
- graduating
- kick
- including
- teenage
- ceiling
- valley
- victim
- ocean
- hell
- steel
- rainy
- noise
- marvelous
- drunk
- studying
- mountain
- hood
- greatest
- facility
- generate
- desk
- improve
- tells
- sex
- results
- si
- manager
- goal
- teenager
- concert
- copy
- africa
- paycheck
- woods
- lubbock
- sentences
- prevent
- impossible
- split
- faster
- speed
- thin
- chose
- monthly
- stands
- turkey
- repeat
- japan
- financially
- lights
- page
- pulling
- explain
- potential
- rape
- wash
- minor
- thrown
- professor
- pan
- vegetable
- fried
- onions
- roommate
- effects
- wire
- shame
- individuals
- sweat
- scene
- yards
- whose
- thoughts
- draft
- useful
- welfare
- organized
- communities
- realistic
- directly
- print
- printer
- purchased
- aunt
- prepare
- millions
- challenge
- twins
- badly
- thick
- pure
- bar
- roads
- missouri
- tall
- library
- added
- sam
- marriage
- gardens
- lesser
- views
- understanding
- prove
- deer
- delicious
- containers
- depend
- denver
- favorites
- tear
- site
- code
- winds
- parties
- relatively
- opened
- falling
- fascinating
- forties
- options
- sharing
- attached
- owner
- version
- modern
- standpoint
- eaten
- fully
- neck
- trials
- knee
- uncomfortable
- temperature
- chemical
- processing
- fruit
- lovely
- bothered
- pot
- causes
- rea
- diet
- theory
- conflict
- earn
- disagree
- exposed
- administration
- breaking
- buildings
- fence
- shocked
- retire
- wedding
- ch
- dust
- acid
- pushed
- blame
- contract
- carried
- nurse
- overseas
- texan
- fuel
- whe
- vehicles
- increased
- necessity
- plate
- hitting
- reduce
- blocks
- hide
- silly
- length
- writer
- film
- development
- refrigerator
- engines
- louis
- relate
- citizens
- dorm
- began
- hawaii
- january
- wheel
- gourmet
- shots
- bushes
- theirs
- outrageous
- sea
- hook
- conscious
- videos
- mastercard
- suburb
- chevy
- tiny
- mowing
- bulbs
- flag
- detroit
- brakes
- charges
- retriever
- towns
- contribute
- arms
- slacks
- definite
- difficulty
- produce
- cultures
- cou
- discovered
- whatnot
- philadelphia
- ou
- electronic
- strictly
- tendency
- mister
- regard
- con
- approach
- friendly
- handled
- governor
- louisiana
- urban
- develop
- pardon
- construction
- classroom
- personality
- currently
- tour
- apply
- memory
- francisco
- affected
- complicated
- risk
- shock
- roses
- movement
- tied
- teaches
- nuts
- halfway
- softball
- masters
- causing
- cake
- unbelievable
- cast
- characters
- actor
- association
- wallpaper
- habit
- blowing
- expert
- screen
- bake
- dessert
- tents
- minneapolis
- tin
- wars
- steps
- structure
- motivation
- buddy
- minds
- wound
- coat
- holes
- covers
- shell
- tries
- undergraduate
- springs
- banks
- kuwait
- kansas
- established
- dozen
- steak
- following
- massachusetts
- jewish
- affects
- hotel
- sight
- tight
- birthday
- statement
- weeds
- consumer
- understood
- tastes
- cartoons
- apartments
- cares
- settled
- september
- letters
- atlanta
- newer
- guarantee
- citizen
- occasion
- attorneys
- tom
- levels
- sweaters
- tires
- direct
- wagon
- remarkable
- result
- shower
- hello
- commercials
- cassette
- forms
- standards
- james
- native
- falls
- comment
- peers
- wore
- pleasant
- mid
- region
- essentially
- differences
- fitness
- symphony
- finger
- ad
- sounded
- joined
- trained
- toyota
- motors
- aspects
- candidate
- votes
- hunt
- electronics
- charging
- registered
- ed
- electric
- bite
- gifts
- manufacturing
- farmers
- participating
- legislation
- los
- angeles
- ticket
- survive
- catching
- eliminate
- ryan
- luckily
- teeth
- ill
- hated
- offices
- file
- hassle
- universal
- entertain
- roast
- traditional
- entertaining
- crisis
- officer
- saudi
- participated
- profession
- gue
- soap
- johnson
- task
- dumb
- gain
- broad
- surgery
- dressing
- condition
- tex
- grill
- camper
- note
- managed
- increasing
- rained
- parking
- wake
- mistakes
- pitch
- cucumbers
- prescription
- shut
- forgotten
- conditions
- rehabilitation
- gold
- waited
- substitute
- lift
- crowd
- gym
- tools
- divorced
- practical
- avoid
- spray
- seats
- severe
- litter
- trunk
- programming
- soft
- discover
- cs
- zero
- firm
- army
- post
- rarely
- virtually
- suddenly
- relative
- technically
- frustrating
- nursery
- checkbook
- rolls
- colored
- division
- jack
- districts
- guitar
- leaders
- permanent
- puerto
- su
- ultimately
- race
- biking
- statistics
- accepted
- hussein
- steal
- shown
- menu
- pension
- youth
- pride
- create
- knit
- walks
- guide
- fry
- til
- requirements
- reporting
- networks
- chain
- soil
- jumped
- hysterical
- target
- wasting
- horse
- buses
- dear
- butter
- thanksgiving
- instrument
- cared
- unemployment
- switchboard
- vice
- morals
- focus
- beds
- wednesday
- george
- principal
- non
- scores
- grandfather
- qualified
- burned
- courts
- cousin
- proud
- ham
- hits
- literally
- transferred
- institution
- debts
- collection
- weed
- cigarettes
- homework
- corruption
- clarion
- purposes
- improved
- applied
- closet
- corn
- tomato
- lasagna
- pickup
- collecting
- immigration
- sooner
- resources
- largest
- hurting
- soccer
- treated
- shore
- bored
- abuse
- mayor
- continental
- professionals
- verdict
- carrying
- button
- drinking
- dying
- reliable
- transportation
- subjects
- fees
- unfortunate
- evenings
- craft
- scout
- languages
- scratch
- sears
- thirties
- solutions
- sherman
- stack
- funds
- skirt
- fed
- correctly
- listened
- clothing
- serving
- supervisor
- mark
- materials
- lewisville
- below
- chemicals
- era
- incentive
- coffee
- offered
- interior
- determine
- sets
- alternative
- instructor
- dance
- saddam
- discussion
- joke
- boating
- fabulous
- ship
- funding
- groceries
- entirely
- sitter
- communications
- democrat
- cafeteria
- corporation
- squash
- peppers
- nor
- pour
- flour
- waco
- controls
- argentina
- flying
- coal
- nuclear
- february
- saturdays
- phoenix
- electrical
- wage
- laying
- effective
- robin
- wealthy
- hampshire
- concerns
- hall
- figures
- rochester
- agreement
- pages
- bitty
- cowboy
- dealers
- features
- argue
- commitment
- hanging
- policeman
- critical
- user
- dried
- strip
- pie
- balls
- eggs
- among
- lifting
- phase
- desire
- final
- jogging
- bless
- attack
- taxed
- acres
- april
- oven
- pack
- claim
- gorbachev
- wherever
- troops
- illinois
- industries
- trailer
- grab
- pitching
- nineties
- ranch
- ti
- mortgage
- mill
- sue
- register
- attorney
- alike
- adopted
- tournament
- involvement
- silver
- perfectly
- slightly
- meetings
- primary
- sixth
- employer
- survey
- indoor
- partly
- addition
- nervous
- georgia
- recreation
- internal
- rise
- schooling
- previous
- mood
- stolen
- birds
- director
- named
- mustang
- mystery
- upstairs
- goods
- reunions
- perform
- reality
- hurry
- scattered
- environmental
- limits
- cleaned
- tons
- concrete
- belts
- cabin
- rolling
- review
- invaded
- invade
- obvious
- requires
- typically
- religious
- religion
- opportunities
- intelligent
- peter
- album
- drawing
- trumpet
- stock
- household
- customer
- kay
- cotton
- tennessee
- specifically
- lowest
- moon
- reputation
- honor
- secretary
- rico
- assumed
- realizing
- attitudes
- rat
- vegetarian
- occurred
- practicing
- promote
- adding
- designed
- delivered
- nah
- category
- disk
- exact
- pilot
- costing
- brake
- mercedes
- pr
- abortion
- texans
- moral
- capable
- applications
- beneficial
- flavor
- drain
- reporter
- clock
- aggravating
- politically
- governments
- clearly
- designing
- burden
- laughed
- topics
- chunk
- spots
- streams
- efficient
- slowly
- arkansas
- discussed
- conservative
- flute
- choir
- sugar
- answering
- lists
- babysitter
- impression
- lets
- david
- forces
- thumb
- cop
- creative
- dip
- switched
- pine
- content
- aerobic
- conversations
- touched
- candidates
- legitimate
- assistant
- annoying
- finance
- vietnamese
- husbands
- storms
- pump
- lawns
- patio
- roots
- russian
- plot
- mouth
- amounts
- suffering
- headlines
- hunter
- acre
- ties
- measure
- la
- trout
- guidelines
- bonus
- emotional
- cow
- unique
- providing
- encouraged
- positions
- barely
- criteria
- olds
- tradition
- scares
- workers
- iran
- toys
- tornado
- moves
- ton
- recyclable
- crowded
- ladies
- melt
- crack
- finances
- score
- crawfish
- transmission
- purple
- mavericks
- eve
- babysitting
- committing
- maintenance
- exposure
- cassettes
- socially
- reagan
- soup
- hiking
- athlete
- cheesecake
- grandson
- skunk
- addison
- skied
- realistically
- profit
- emissions
- skirts
- heels
- awards
- silence
- lambs
- whatsoever
- lotus
- offering
- unquote
- forest
- phones
- miniature
- medium
- grandma
- goo
- finishing
- judicial
- penalties
- ki
- hose
- hungry
- success
- monitor
- application
- pink
- depressing
- supper
- bureaucracy
- status
- territory
- mississippi
- exercises
- preference
- peo
- packages
- broadcast
- doctorate
- scholarship
- grows
- lean
- anxious
- core
- voluntary
- minority
- couples
- ears
- crochet
- selected
- voters
- democrats
- authority
- airport
- horror
- fox
- sub
- professors
- legs
- stir
- celery
- eats
- chocolate
- cup
- asleep
- studies
- afterwards
- slip
- lap
- connection
- individually
- dependent
- foundation
- worthwhile
- fields
- freedoms
- giants
- stars
- kittens
- vet
- balanced
- homeless
- birth
- mu
- campaign
- empty
- scenes
- heads
- kicked
- messed
- arabia
- greatly
- bob
- talent
- nurses
- strike
- reached
- dedicated
- suggested
- guard
- basement
- laughing
- communication
- ghost
- abused
- token
- plane
- beating
- former
- films
- fought
- failed
- lesson
- lo
- walls
- sink
- girlfriend
- accused
- hurts
- loud
- gang
- consistent
- stereo
- fa
- struggling
- interview
- employment
- borrowed
- spoiled
- tub
- tea
- mex
- lemon
- bin
- evidently
- grant
- tremendously
- cartons
- opening
- mi
- skin
- seed
- acceptable
- filter
- golly
- sits
- coke
- followed
- basics
- psychology
- operate
- owns
- freezing
- nissan
- te
- accidents
- settle
- leader
- poverty
- dr
- masking
- fiancee
- jugs
- landfill
- heavily
- lie
- trends
- interstate
- competitive
- arguments
- weigh
- competition
- surprising
- temporary
- inclined
- overnight
- priority
- darn
- honey
- roy
- accurate
- rocks
- babysit
- priced
- twin
- le
- ban
- athletes
- lack
- pond
- muscles
- connecticut
- anyways
- pacific
- owners
- freon
- responsibilities
- toxic
- permit
- closely
- pitched
- dresses
- scenery
- kevin
- costner
- greater
- enemy
- granted
- welcome
- define
- advertising
- salesman
- reverse
- ideal
- locked
- directions
- object
- figuring
- frequently
- boot
- therefore
- jails
- murdered
- purdue
- received
- led
- picks
- include
- democracy
- studied
- fond
- climate
- alaska
- sake
- avid
- healthier
- fired
- connected
- stealing
- chances
- humane
- supported
- enjoyment
- penny
- turtles
- encouraging
- ea
- marketing
- garlic
- broccoli
- potato
- suburbs
- formal
- rush
- concentrate
- woodworking
- leaf
- cent
- automobiles
- ozone
- devices
- source
- comedies
- landing
- semi
- agent
- string
- precious
- ugly
- phenomenal
- hilarious
- winning
- doe
- mobile
- farther
- chili
- landscape
- path
- someday
- complaining
- sky
- load
- baked
- stove
- bend
- en
- command
- decides
- attacks
- wished
- ac
- yearly
- weekly
- indeed
- brief
- mike
- dealer
- emergencies
- event
- charlotte
- slapstick
- purely
- included
- unfair
- meaning
- injuries
- vermont
- cornstarch
- egg
- worrying
- wrap
- buff
- advertisements
- plain
- chores
- mention
- allows
- novels
- bases
- billion
- protected
- workout
- cancel
- daddy
- outdoors
- novel
- bruce
- awfully
- constant
- spends
- accent
- deductions
- dealt
- informed
- tournaments
- snake
- penn
- sox
- tho
- root
- rip
- combat
- polls
- sundays
- blank
- frozen
- assistance
- ads
- hiring
- drivers
- recession
- convert
- alternate
- dryer
- lightning
- gr
- chair
- emotionally
- angry
- mature
- treatment
- lousy
- seventh
- ninth
- deck
- printed
- answers
- jumping
- mentality
- popcorn
- shade
- oaks
- reasonably
- budgeting
- controlled
- british
- unreal
- mini
- performance
- tip
- ge
- handgun
- toy
- skip
- armed
- fleas
- redo
- deposit
- goldfish
- childhood
- removed
- surprises
- dodge
- consulting
- sacrifice
- placed
- sailing
- classics
- bottle
- secretaries
- diesel
- liter
- chosen
- boats
- returned
- item
- november
- adoption
- fewer
- pizza
- feature
- nebraska
- cafe
- alzheimer
- agreed
- choosing
- council
- bermuda
- suspense
- satisfaction
- winters
- headed
- murphy
- customers
- habits
- norm
- loss
- bec
- crawl
- exist
- attractive
- wor
- leg
- selection
- prob
- sources
- audience
- styles
- davis
- borrow
- goals
- determined
- accounts
- pat
- vs
- whi
- advantages
- diapers
- pin
- models
- queen
- sticks
- mesquite
- canal
- incredibly
- feeding
- importance
- salvador
- fathers
- regardless
- translation
- frustrated
- bond
- structured
- counting
- factors
- economical
- involves
- radical
- depressed
- universities
- shall
- tank
- jesus
- counselor
- proposal
- allowing
- pocket
- airplane
- gangs
- saints
- consideration
- dolls
- horses
- spouse
- midwest
- fashioned
- screw
- curriculum
- oakland
- candy
- blanket
- backpack
- industrial
- smog
- canyon
- elect
- backed
- bear
- comfort
- economically
- warmer
- sunny
- exhausted
- afternoons
- ranger
- worries
- orange
- physically
- experiment
- famous
- copies
- cardboard
- pa
- demand
- polluted
- tail
- compatible
- wordperfect
- drag
- float
- carter
- presidential
- dug
- israelis
- relations
- arab
- rings
- estate
- salaries
- recognition
- headline
- nowhere
- ratings
- asia
- ei
- lifestyle
- tenth
- preparing
- cookies
- fifteenth
- bait
- experienced
- defendant
- surprise
- cocaine
- reminds
- liquid
- destroy
- century
- admire
- rare
- tuned
- schwartzkopf
- reduced
- cruel
- cheers
- picnic
- accounting
- pace
- jane
- tune
- knees
- holy
- owe
- pepper
- worms
- bricks
- mound
- additional
- flow
- tended
- refuse
- landfills
- stance
- cry
- dumping
- memories
- anyplace
- geared
- arrangements
- depth
- tuesday
- raw
- neighborhoods
- policemen
- net
- located
- trail
- edition
- purchases
- injury
- beliefs
- statements
- sin
- cultural
- shorter
- guilt
- 'false'
- economics
- enormous
- lifetime
- advanced
- adopt
- mechanical
- liters
- dream
- bachelor
- nasty
- scare
- laundry
- strikes
- quilt
- chlorine
- shed
- whom
- ds
- convince
- courtroom
- volleyball
- domestic
- stomach
- concerts
- stepfather
- typewriter
- clouds
- rating
- gifted
- generals
- clip
- screwed
- australia
- maine
- quarters
- chrysler
- oldsmobile
- pistol
- membership
- seldom
- supply
- tornadoes
- hu
- oth
- porch
- persian
- lakers
- tarpley
- seattle
- thrilled
- boards
- brian
- roughly
- paints
- attic
- ceilings
- baths
- pig
- killer
- pros
- paris
- brooks
- dealership
- developing
- islands
- kennedy
- ending
- ratio
- created
- separated
- lasts
- wives
- jean
- spaghetti
- village
- biased
- operating
- enid
- crappie
- employers
- conference
- tuna
- tole
- pollutants
- jones
- handling
- emission
- vary
- initially
- finds
- obligation
- select
- carefully
- barrier
- strangest
- spaniel
- blues
- comparison
- attend
- focused
- ver
- blacks
- jurors
- floors
- spell
- wears
- heel
- wooden
- assistants
- accustomed
- mild
- bands
- bang
- alrighty
- campbell
- tours
- panama
- believes
- corrupt
- cocoa
- interestingly
- makeup
- communism
- etcetera
- historical
- heating
- hispanic
- bilingual
- ultimate
- bicycling
- elsewhere
- scientific
- combine
- ar
- consequences
- gal
- cure
- grader
- corporations
- stitching
- grief
- leading
- graphics
- regards
- rank
- personalities
- mission
- whiz
- voter
- controlling
- believed
- minded
- kyle
- author
- certified
- shelter
- historically
- protecting
- fits
- carrots
- knitting
- professionally
- specialty
- jars
- needlework
- robert
- regarding
- billions
- rental
- nolan
- ruined
- searching
- taco
- mama
- relationships
- exchange
- highways
- handicapped
- scouting
- discouraging
- dropping
- electricity
- stacks
- catalytic
- muffler
- pipe
- error
- compete
- cajun
- haul
- discussing
- kurds
- anti
- orchestra
- needle
- ireland
- investments
- dramatically
- drawback
- raises
- growth
- definition
- guatemala
- receiving
- reported
- aikman
- shoulder
- banking
- highest
- jimmy
- jim
- cardinals
- jamaica
- magic
- convictions
- usage
- hamburgers
- sporting
- muscle
- sophisticated
- element
- occur
- designated
- depression
- covering
- tooth
- filling
- sharp
- strawberry
- relax
- advise
- enter
- throat
- instances
- allowance
- stronger
- debate
- literature
- shelves
- remove
- advertised
- progress
- smith
- richard
- raped
- offense
- detail
- christians
- tore
- accomplish
- released
- loaning
- bright
- intense
- dies
- peas
- steaks
- spicy
- conditioner
- convenience
- drought
- cups
- nee
- russians
- yeltsin
- thirds
- acting
- northwest
- freeway
- curbside
- corpus
- publicized
- mets
- memorial
- onion
- garages
- employed
- lazy
- wrestling
- crab
- loaded
- stationary
- coupons
- ripped
- balances
- convict
- loving
- represent
- judgment
- pork
- wasted
- selecting
- recover
- divide
- civic
- builds
- quicker
- translate
- churches
- slice
- discount
- swear
- nap
- centered
- vitamins
- planes
- contractor
- drastically
- elaborate
- continued
- decline
- uncles
- utilities
- camera
- musicians
- musician
- condominium
- augustine
- tolerant
- southwest
- counselors
- mirrors
- communicate
- worker
- medication
- powerful
- manure
- replaced
- redone
- shotgun
- memphis
- turtle
- supreme
- owning
- cycle
- jay
- airline
- sir
- method
- mayonnaise
- execution
- plea
- mower
- buttons
- campaigns
- log
- quarterbacks
- hamburger
- arizona
- ignore
- bred
- indianapolis
- envelope
- conversion
- hail
- flooding
- spanked
- fluid
- bay
- leather
- italy
- locations
- blew
- extensive
- traded
- transition
- kilometers
- robbing
- kills
- cadillacs
- randomly
- institute
- triangle
- mercury
- volvo
- dan
- leads
- pe
- rome
- attraction
- aunts
- latex
- texoma
- rabbit
- audi
- methodist
- basements
- tee
- clarinet
- walker
- massive
- stroke
- leak
- sites
- deals
- lined
- embarrassed
- slab
- officially
- behavior
- examples
- witness
- wishes
- unlisted
- terminal
- modem
- poodle
- weighs
- paul
- subscription
- chapter
- likewise
- documents
- shoe
- miserable
- jacket
- lax
- varies
- peach
- blows
- disco
- suicide
- bo
- downhill
- profitable
- twenties
- official
- pressures
- image
- monies
- absentee
- senate
- ethnic
- involve
- proven
- offenders
- afghans
- borders
- peaceful
- ab
- blown
- lock
- adequate
- scholarships
- offers
- bat
- injection
- useless
- revolution
- mormon
- enforce
- cosby
- preapproved
- fortune
- messing
- promised
- sum
- frankly
- damn
- gravy
- boil
- remembered
- consuming
- metropolitan
- gift
- seeds
- factories
- layer
- costly
- usual
- cooler
- daytime
- appearance
- sufficient
- balcony
- chasing
- chest
- las
- plumbing
- farming
- becau
- cleaner
- packed
- cried
- lover
- indians
- racial
- occasional
- rivers
- pollute
- locally
- contribution
- presentations
- laser
- represented
- guests
- apples
- hank
- closest
- oak
- missionaries
- rob
- mailing
- ring
- bias
- newsweek
- nicely
- tables
- zone
- faith
- cheapest
- excuses
- fail
- administrator
- baylor
- sued
- emotions
- appeared
- notes
- tying
- nail
- shake
- comp
- entry
- peer
- sore
- sticky
- pudding
- knowledgeable
- haze
- mass
- stressed
- academy
- considerably
- rowlett
- shortly
- nose
- ordered
- crying
- handed
- wages
- input
- praying
- warfare
- accomplished
- woke
- regulation
- equivalent
- bankrupt
- jog
- ell
- ri
- appeals
- extraordinary
- metroplex
- absolute
- conclusion
- accountable
- glory
- pray
- prisoners
- bomb
- destroyed
- testament
- pu
- suggest
- polish
- principle
- gardener
- beets
- behave
- periods
- shrubs
- sprinkler
- fajitas
- describe
- release
- motorcycle
- bound
- styrofoam
- valuable
- tolerate
- attempt
- jordan
- exists
- screaming
- stump
- breathing
- selfish
- dick
- blonde
- maximum
- max
- secret
- holds
- landscaping
- reads
- prevalent
- galveston
- weirdest
- joy
- nationwide
- soda
- coin
- dukakis
- steam
- embarrassing
- plates
- incorporate
- deductible
- machinery
- categories
- funded
- chairs
- recommended
- handicap
- bowling
- meantime
- accord
- tyler
- mosquitoes
- booklet
- coaches
- syria
- dinners
- holiday
- baltic
- priorities
- recognized
- wipe
- longest
- suburban
- delayed
- backgrounds
- varied
- eighth
- den
- coats
- theme
- nicest
- penney
- adjust
- hou
- toilet
- bullet
- rapidly
- capabilities
- hilly
- container
- layoff
- watches
- jewelry
- maker
- infant
- resent
- blade
- watering
- wildlife
- decorating
- fabric
- leadership
- privilege
- exotic
- loop
- seasoning
- chopped
- retiring
- backseat
- par
- leukemia
- ammunition
- barrel
- pontiac
- mazda
- expressway
- administer
- unions
- function
- stopping
- organize
- parenting
- schedules
- slept
- wheels
- resource
- competing
- sees
- careers
- pits
- carpeting
- legislature
- functional
- divorce
- bridge
- transfer
- needlepoint
- cookbook
- breast
- published
- portland
- throws
- counts
- larry
- louisville
- com
- glued
- tube
- slide
- protective
- felony
- dursban
- renting
- rebuild
- london
- shingles
- lea
- stink
- puppies
- schnauzer
- steering
- plugs
- mechanic
- worn
- inflation
- diving
- stretch
- purse
- introduced
- stripped
- occupied
- siamese
- controversy
- buick
- religiously
- allergic
- edges
- sail
- nancy
- biographies
- nonfiction
- thunderstorms
- intend
- educate
- nerve
- recordings
- concentration
- steve
- academic
- freshman
- sophomore
- neutered
- ponds
- disgusting
- narrow
- comparing
- associate
- adjusted
- cottage
- foster
- rake
- outstanding
- appreciated
- malpractice
- thankful
- personnel
- selective
- administrative
- comparable
- pier
- contributing
- cart
- explore
- commits
- affair
- cleveland
- glasses
- downstairs
- details
- backpacking
- blackberries
- alternator
- antilock
- peeves
- chris
- billy
- henry
- smooth
- polluting
- sweats
- fever
- sweater
- wyoming
- filmed
- guts
- respond
- theories
- database
- culturally
- threatened
- tears
- messages
- ear
- bark
- grandpa
- versions
- lee
- wave
- analysis
- gear
- comments
- colorful
- photography
- victims
- resolution
- stiff
- brazil
- minister
- interpret
- hero
- lebanon
- declare
- heritage
- escape
- columbia
- prescriptions
- assumption
- berkeley
- combined
- traditionally
- relaxation
- entering
- regulate
- consciousness
- react
- sexual
- proved
- booze
- cloth
- herald
- instructors
- vested
- consultant
- taxpayer
- lethal
- restricted
- pub
- directed
- frequent
- tempted
- hat
- treadmill
- abilene
- hates
- skinny
- turnout
- bouncing
- wayne
- beforehand
- deserves
- ninja
- expand
- probation
- eliminated
- yogurt
- powder
- boyfriend
- blankets
- alarm
- vacuum
- chop
- strips
- ruin
- knots
- bits
- rogers
- guessing
- addicted
- pitcher
- fingers
- rascal
- whip
- ag
- vegas
- response
- advocate
- donate
- proposed
- emphasis
- transit
- carpool
- map
- sheets
- punch
- calories
- strenuous
- laboratory
- resolve
- serves
- drum
- compact
- tigon
- initial
- moms
- identify
- respected
- vision
- visits
- eagle
- summary
- illustrated
- dial
- extraordinarily
- intelligence
- stages
- troy
- injured
- increases
- joints
- dayton
- mary
- deduct
- administrators
- pressing
- contest
- arguing
- marked
- seek
- gross
- roberts
- mentally
- session
- failing
- occasions
- videotape
- clever
- jerry
- mutant
- warning
- intellectual
- approve
- declared
- hallway
- edging
- pressed
- strawberries
- nieces
- sour
- homemade
- trick
- mixture
- solar
- inspection
- global
- winner
- drawn
- trace
- sympathetic
- managing
- anchors
- sulphur
- chuck
- overcrowded
- stole
- dean
- steven
- bi
- thursdays
- appear
- collapse
- dome
- flex
- stressful
- ok
- paroled
- apt
- patient
- injustice
- farmer
- socialized
- snap
- clay
- wintertime
- beaches
- touching
- curb
- clippings
- flowerbeds
- toes
- buffer
- hardware
- republic
- battle
- heading
- units
- shadow
- yankees
- rounded
- immigrant
- diseases
- caesar
- saves
- nephews
- slowed
- grounds
- snakes
- abilities
- missiles
- nova
- pen
- digging
- drew
- pools
- strung
- port
- sticking
- orioles
- hopes
- ov
- fertilizer
- railroad
- rub
- robberies
- theft
- tourist
- sta
- stood
- eligible
- freshwater
- saltwater
- shark
- fool
- commute
- deciding
- fam
- terrific
- catalogs
- froze
- ethic
- controversial
- crossed
- georgetown
- soy
- hoi
- pasta
- dreams
- painful
- filthy
- innocence
- leaning
- cleared
- feasible
- perception
- lottery
- parochial
- announced
- ll
- gallons
- kindercare
- behavioral
- classrooms
- merchandise
- washer
- refrigerators
- tinker
- supplies
- stimulation
- alert
- furthest
- cease
- reward
- biology
- starter
- prairie
- drill
- johnny
- experiments
- exercised
- paneling
- tougher
- strain
- noisy
- instill
- housework
- gap
- auditor
- dot
- maternity
- butler
- amarillo
- mulch
- actions
- lawsuits
- senators
- anniversary
- bonding
- leisure
- fertilize
- dragging
- decorated
- statewide
- format
- skeptical
- pad
- mode
- justify
- budgets
- seniors
- chief
- efforts
- hispanics
- drastic
- frost
- layoffs
- temperatures
- airlines
- hoses
- safer
- nails
- salads
- clients
- vans
- surely
- pulls
- operation
- sells
- bikes
- unable
- permanently
- slight
- rifle
- impulse
- manual
- handguns
- gauge
- someth
- youngsters
- karate
- hotels
- demanding
- wool
- warnings
- sanctions
- attract
- mysteries
- tenths
- pots
- neglected
- sliced
- leagues
- bulls
- celtics
- struggle
- qualify
- bars
- lucked
- cliff
- cabins
- relaxed
- gates
- oregon
- loads
- crystal
- fumes
- previews
- floating
- reviews
- peaks
- poorer
- matters
- continues
- costa
- geographic
- earthquake
- intrigued
- ain
- albums
- singapore
- proof
- bulb
- spayed
- fr
- skating
- robbery
- sector
- horn
- drafting
- premeditated
- frustration
- radiator
- boundaries
- bureau
- belonged
- nephew
- officers
- serger
- seam
- choral
- dating
- genuine
- requirement
- gradually
- asians
- establish
- effectively
- reel
- ra
- steady
- produces
- switzerland
- calm
- anthony
- suzuki
- plymouth
- sized
- thread
- centimeters
- recorder
- signal
- brands
- resolved
- converted
- dumped
- spur
- trap
- yell
- smarter
- humanities
- amherst
- sheriff
- safely
- completed
- equally
- labs
- foam
- sociology
- entertained
- lobster
- title
- recommendation
- residential
- vicious
- lease
- outer
- honesty
- switching
- freezer
- tollway
- heavier
- bahamas
- sperry
- rollers
- mowed
- cougar
- chi
- crooks
- lips
- remodeled
- cocker
- eigh
- syndrome
- overweight
- titles
- lettuce
- gather
- span
- greenville
- drip
- senator
- dam
- zip
- lexus
- peninsula
- counseling
- grapevine
- parental
- branch
- travels
- atlantic
- screening
- thr
- veterans
- substance
- golfers
- golfer
- manually
- carbon
- disposition
- harrison
- putt
- disability
- marry
- infants
- engaged
- braves
- mums
- provo
- boots
- commercialized
- replacing
- moisture
- assign
- router
- saws
- translators
- alleviate
- acquainted
- caring
- incinerator
- receipt
- scrub
- setup
- hazardous
- wardrobe
- jackets
- blouses
- suspenseful
- graphic
- gary
- monitoring
- hacker
- india
- desirable
- invite
- reaction
- fantasy
- shocking
- recorded
- addresses
- rig
- instructions
- faced
- advances
- paperwork
- tongue
- cha
- accommodate
- motion
- performed
- composer
- horrendous
- beatles
- crop
- applying
- budgeted
- coda
- seminars
- challenging
- righty
- cave
- dragged
- conscientious
- lenient
- warehouse
- managers
- windy
- allergies
- flu
- inordinately
- cinderella
- shoulders
- progressive
- cam
- colonial
- nicaragua
- exception
- translations
- scream
- independence
- cope
- economies
- tropical
- consequently
- difficulties
- plead
- disturbed
- correlation
- movements
- athletic
- stoned
- invested
- coincidence
- analyze
- chip
- miracle
- fif
- kee
- inmates
- external
- civilian
- trapped
- ghetto
- amenities
- clutch
- disposable
- makers
- pursue
- organ
- blast
- pluses
- racquetball
- lobbyists
- republicans
- outskirts
- carpenter
- buck
- predict
- backwards
- wok
- sweets
- ugh
- tablespoon
- singer
- shops
- singers
- stockings
- mirror
- crocheting
- zucchini
- voices
- pockets
- exhaust
- oxides
- victimized
- cynical
- colder
- castle
- listed
- deliberately
- spoken
- adventure
- repeats
- imagination
- viewing
- bench
- catcher
- bull
- corners
- dustin
- hoffman
- kmart
- concerning
- bulk
- accepting
- eerie
- na
- properties
- lying
- sturdy
- logic
- dated
- slick
- separating
- talented
- raiders
- device
- macintosh
- statistical
- sausage
- italians
- canoe
- thrill
- honeymoon
- arabs
- defending
- stability
- pops
- musicals
- sends
- asks
- ringing
- versa
- opens
- offhand
- dana
- envision
- philosophical
- charity
- volunteering
- commentaries
- informal
- commentary
- viewpoint
- independently
- sections
- nope
- firmly
- forcing
- flags
- gathered
- gett
- neil
- jagged
- awakening
- julia
- beside
- initiated
- pole
- kidnapping
- witnesses
- handles
- panel
- refined
- portions
- moments
- accessible
- hollywood
- norman
- assets
- tire
- pursued
- factory
- au
- romance
- fuels
- presentation
- closets
- hips
- rated
- publish
- protestant
- females
- crowds
- poorly
- identified
- buys
- stuffed
- chamber
- brass
- arrest
- productive
- ticks
- earned
- prisoner
- reimbursement
- spiritual
- z
- pronounce
- riskier
- protection
- consistently
- endless
- charles
- rebellion
- pacifist
- curse
- unto
- spirit
- barbara
- bombs
- tearing
- struck
- heaven
- theaters
- northeast
- licensed
- reducing
- peoples
- lithuania
- damaged
- bacon
- worm
- bug
- sprays
- bloom
- rye
- leasing
- nightmare
- beautifully
- washing
- nurseries
- neglect
- mixes
- frying
- guacamole
- disc
- populated
- cooperation
- bundle
- nickel
- rely
- insulation
- powers
- soldiers
- leery
- iraqi
- germans
- safest
- appears
- whoa
- republics
- participation
- reference
- disgusted
- hauling
- permitted
- orientals
- excluded
- stone
- sack
- crush
- fills
- crap
- fisher
- leap
- interact
- publicity
- brooklyn
- idiot
- easter
- vines
- extensively
- fou
- extras
- shootings
- knife
- outcome
- pensacola
- fished
- interviews
- disappointing
- overworked
- speedy
- apathy
- juror
- ann
- appointed
- spite
- ballot
- counter
- appetite
- technician
- complaints
- begins
- reaching
- referred
- influences
- swayed
- award
- slips
- stranded
- bankruptcy
- users
- socialize
- boom
- secondary
- captured
- backward
- intellectually
- bean
- measured
- remind
- bolt
- swung
- dryers
- extension
- hooks
- trinity
- lasting
- hatred
- snack
- altogether
- heal
- restore
- restored
- deeper
- strength
- link
- graders
- noticeable
- lowering
- preferred
- remarkably
- baroque
- barry
- townhouse
- fertilizing
- decade
- slower
- pl
- hop
- creates
- alternatives
- gains
- operated
- forgetting
- detector
- deliberate
- cycling
- legally
- bridges
- prize
- adolescents
- gamut
- slant
- fascinated
- baskets
- glue
- collector
- accountant
- rides
- def
- remote
- professions
- suggesting
- crafty
- remembers
- bears
- identical
- burns
- basket
- believer
- document
- korea
- lasted
- meatballs
- waist
- rear
- stretching
- fold
- kroger
- linoleum
- angle
- wo
- diverse
- buyer
- bullets
- banning
- bargain
- breeding
- humor
- evil
- q
- illness
- peop
- oldsmobiles
- fiance
- bodied
- educating
- showers
- mud
- connect
- bothering
- rebuilding
- kuwaiti
- possibilities
- overcast
- cloudy
- hurricanes
- forecast
- ru
- therapist
- scott
- rugs
- angel
- wheat
- editor
- caretaker
- liking
- kiss
- inevitably
- chat
- unhappy
- comfortably
- litt
- variation
- protest
- fences
- samples
- messy
- affectionate
- disabled
- barking
- production
- kelly
- corvette
- fanatic
- towel
- firing
- coaching
- presents
- burglar
- overcrowding
- lane
- imprisonment
- arrested
- asian
- wrecked
- beauty
- olympics
- conviction
- playground
- garth
- rs
- jam
- literary
- cre
- execute
- cartoon
- nearby
- fundamental
- ribbon
- bobby
- montessori
- sofa
- fetched
- rolled
- sewed
- starters
- crocheted
- liberties
- nintendo
- majoring
- associated
- threatening
- freezes
- traction
- perspectives
- southeast
- carp
- advertise
- pint
- merit
- durham
- meryl
- snowed
- advisors
- terrorism
- sectors
- joint
- terrain
- citizenship
- melted
- ounces
- ounce
- keys
- races
- smokers
- sensible
- bradshaw
- hip
- af
- richmond
- sen
- readily
- consistency
- canned
- enforcement
- contracts
- cons
- differ
- suffer
- tool
- specialist
- flies
- confidence
- esteem
- ironing
- inexpensive
- slots
- buffet
- cuisine
- congressman
- persuaded
- minorities
- stranger
- brush
- coastline
- blind
- cape
- dow
- partially
- calcium
- vast
- abroad
- museum
- physician
- physicians
- redid
- erie
- cooperative
- survival
- har
- exac
- intentionally
- affecting
- urine
- grandkids
- agricultural
- beam
- display
- constitution
- capitol
- ordinary
- babysat
- aggressive
- journalism
- grad
- tia
- olive
- collin
- casserole
- cakes
- operas
- accents
- almo
- oprah
- tiles
- tile
- trillions
- struggled
- tips
- tulsa
- museums
- sailboat
- perch
- styling
- seville
- rotten
- ken
- dentist
- maverick
- medicare
- douglas
- leased
- insane
- madison
- dock
- subdivision
- pouring
- wooded
- departments
- airplanes
- pilots
- premium
- ol
- liberty
- malls
- fossil
- produced
- bumper
- purchasing
- gentleman
- tribe
- wordstar
- rinse
- santa
- broth
- thomas
- addressed
- unconsciously
- enchiladas
- slickers
- rib
- lawry
- housekeeping
- opener
- doll
- sierra
- nuskin
- legend
- ruben
- batteries
- drywall
- disturbing
- relief
- devastating
- confined
- strides
- incineration
- drums
- cement
- leaked
- presently
- semiconductor
- firms
- foremost
- hoods
- sample
- client
- update
- predominantly
- gory
- dancing
- inherent
- harmed
- sneak
- invisible
- obligated
- invariably
- supervisors
- dentists
- chew
- randy
- understandable
- springer
- artist
- stardom
- taylor
- synthesis
- adapt
- pla
- labeled
- label
- attended
- manuals
- stephen
- stimulating
- improvements
- veterinarian
- serial
- wrongly
- preschoolers
- conditioned
- detailed
- unload
- highs
- collar
- identification
- stones
- zoo
- owens
- sandinistas
- greedy
- kings
- roosevelt
- bananas
- tempting
- lessened
- performances
- greek
- plots
- sean
- statehood
- quo
- assuming
- significantly
- woul
- ve
- occurring
- stringent
- troubled
- resistance
- regional
- disastrous
- practices
- alternates
- approved
- believing
- joe
- iraqis
- habitual
- bone
- dope
- threaten
- inventory
- bibs
- tasted
- afghan
- quilts
- riot
- earning
- backup
- christ
- begun
- guaranteed
- beats
- monetary
- ne
- involving
- punishable
- instantly
- hog
- logistics
- joining
- tutor
- doggone
- hats
- remodeling
- allen
- cabinets
- motivate
- inspired
- computerized
- pers
- extremes
- willingness
- excitement
- jacobs
- architect
- lump
- shared
- evaluate
- exclusive
- expanded
- tablespoons
- ginger
- peanuts
- sang
- choirs
- finals
- aggravated
- okra
- ruled
- landmark
- restrictions
- smack
- investing
- drier
- hotter
- orlando
- adventures
- scrap
- battery
- timing
- boeing
- alcoholic
- sullivan
- continuing
- ukraine
- adjustments
- astros
- claws
- declawed
- rushed
- stray
- void
- chase
- messes
- procedures
- underwear
- skill
- politician
- mitch
- caddo
- prizes
- lids
- files
- tra
- questioned
- wolf
- thunder
- howl
- buffaloes
- honduras
- wealth
- contributes
- wider
- soak
- installed
- converter
- authorities
- visible
- ash
- suspected
- agencies
- mouse
- printout
- producing
- unix
- blueberry
- hike
- overly
- baker
- assault
- restraint
- enj
- danny
- couch
- arnold
- ridge
- gene
- clo
- unemployed
- ahold
- dislike
- equality
- mistaken
- aged
- quoted
- harsh
- realizes
- upstate
- expend
- brinkley
- complaint
- slanted
- restricting
- halls
- wheelchair
- supervised
- terry
- monstrous
- drawbacks
- fights
- learns
- fallen
- challenged
- rewarding
- mailed
- snowing
- ni
- wreck
- amongst
- misery
- schwarzenegger
- goofy
- entered
- rationale
- prosecutor
- excused
- bare
- lawsuit
- audio
- teti
- eh
- lacking
- memorable
- wisdom
- succeed
- jokes
- frenchman
- liability
- workmen
- executives
- marijuana
- surface
- lengths
- fondue
- cheddar
- watermelon
- saucepan
- lukewarm
- cookbooks
- collected
- saran
- hollow
- warming
- spa
- bathing
- incur
- institutions
- freshmen
- sinking
- description
- graduates
- nelson
- commerce
- recruiting
- homemaker
- cri
- ankle
- install
- sympathy
- burnt
- episode
- awesome
- scandal
- grasp
- multiple
- fonda
- tolerance
- enforced
- lighter
- enemies
- gentle
- avoided
- approaches
- sheep
- grace
- reserve
- claimed
- abusing
- borrowing
- servants
- stops
- moist
- ass
- kin
- trimmed
- varieties
- experimenting
- mashed
- foo
- barbecued
- barbecues
- marinate
- manages
- sacks
- giant
- pact
- confused
- stepping
- seams
- michener
- blooming
- stewart
- tim
- rebel
- grammar
- yankee
- restriction
- biblical
- paychecks
- request
- stable
- diego
- lush
- ga
- limb
- flooded
- strokes
- animated
- muddy
- sharks
- quantum
- partners
- deedee
- formula
- subtle
- solved
- tow
- bounds
- rooting
- championship
- toronto
- ontario
- cabbage
- cantaloupe
- siding
- twist
- sirens
- reminded
- affluent
- bee
- captain
- tackle
- advancement
- isolated
- destroying
- foggy
- regulating
- cigarette
- linguistics
- canadian
- payless
- cashways
- bucket
- cereal
- maxed
- rally
- richards
- convention
- everytime
- mar
- dairy
- doubts
- pursuing
- flight
- crew
- oops
- misses
- amazingly
- punished
- suited
- flexibility
- rehabilitate
- deduction
- debit
- executive
- requested
- implemented
- disadvantage
- shoddy
- naive
- moscow
- marcos
- shoots
- blessed
- cad
- noon
- formed
- bargains
- circuit
- dissertation
- serviceable
- roughing
- cots
- condo
- poles
- locks
- ob
- hearts
- passover
- seder
- catholics
- attacking
- syrian
- bagels
- affairs
- iranian
- ideals
- dividend
- voluntarily
- devote
- performing
- pipes
- arteriosclerosis
- nonexistent
- torn
- outfits
- prejudice
- invited
- remembering
- remedial
- certification
- textured
- insides
- tone
- tornados
- exxon
- brain
- photographer
- audit
- mainframe
- jet
- upgraded
- baghdad
- scheduled
- receptacles
- continual
- potentially
- prestige
- perceived
- trivial
- broader
- sided
- claims
- adjustment
- tread
- richland
- discouraged
- stepdaughter
- sacrificed
- possession
- castroville
- timer
- shady
- lehrer
- editorial
- embroidery
- envelopes
- continuous
- typing
- claude
- aging
- attending
- trainable
- watered
- composition
- dis
- disabilities
- intentions
- inter
- gay
- facing
- interviewed
- seasonal
- patch
- peculiar
- rec
- brilliant
- invest
- payday
- buddies
- wiped
- indoors
- fiddle
- inspect
- peel
- hors
- impress
- ridden
- objects
- surprisingly
- servicemen
- teeny
- equitable
- tier
- stair
- targets
- knocked
- accuracy
- impressive
- cycles
- writers
- rehabilitated
- fleet
- drops
- quarts
- peeve
- sa
- pregnancy
- meets
- campsite
- specialized
- indicated
- beings
- obnoxious
- stereotype
- communist
- sway
- soviets
- monetarily
- circle
- blah
- carnival
- outs
- indication
- gigantic
- ownership
- feeds
- latch
- pansies
- cau
- screened
- references
- tabs
- steamed
- blueberries
- desserts
- sandwich
- slices
- mba
- describing
- duke
- mechanics
- secorski
- financing
- punishments
- whack
- addiction
- '7'
- specials
- climbing
- shells
- spectrum
- ins
- ants
- painter
- painters
- noises
- rats
- sequel
- rocky
- stallone
- pai
- exterior
- afterward
- greasy
- builders
- intervention
- solving
- appliances
- fu
- hesitant
- incorrectly
- lizards
- bats
- evils
- refugees
- permission
- dive
- instituted
- parked
- landry
- scope
- eagles
- cows
- orders
- tokyo
- subway
- remorse
- heinous
- manufacturer
- occupation
- neal
- brushes
- manhattan
- stud
- leftover
- coll
- rifles
- shelf
- robbed
- temporarily
- inconvenient
- limitations
- spelling
- precise
- commodore
- specifications
- belief
- aggravates
- nev
- bites
- knox
- overheard
- rows
- frederick
- pointed
- stu
- rusty
- reelected
- loses
- pretend
- symptoms
- biography
- destroys
- delicate
- speakers
- happier
- grub
- raiser
- petroleum
- menial
- jeff
- blink
- recommending
- diner
- streep
- copper
- explosives
- disappear
- cosmopolitan
- swimmer
- vogue
- felon
- converting
- bolts
- ross
- ro
- reject
- outfit
- automotive
- mexicans
- envious
- risking
- shifts
- cylinder
- gaining
- tragic
- expressing
- expression
- chilly
- yorker
- dall
- deny
- bonuses
- lucrative
- congressmen
- portray
- needing
- scallops
- susan
- protein
- gained
- baking
- academically
- kenyon
- admissions
- sciences
- provides
- preparation
- logical
- cage
- owed
- devastated
- despite
- pillsbury
- surrounding
- prosecution
- liable
- limitation
- writes
- follows
- nash
- paso
- juice
- reusable
- procedure
- vegetation
- bach
- delivery
- rapes
- thou
- contemporary
- brookhaven
- heater
- curiosity
- fuse
- assembly
- limestone
- danger
- ferry
- ducks
- pilgrimage
- annoyance
- seniority
- ben
- partner
- executed
- healing
- darker
- diff
- routes
- touring
- footage
- abandoned
- retain
- warped
- leslie
- mockingbird
- tricky
- steep
- overwhelming
- killers
- calendar
- faculty
- bingo
- fog
- rationing
- visas
- awareness
- howard
- repairing
- bathrooms
- upside
- symbol
- conception
- veteran
- daylight
- babysitters
- valentine
- ideally
- driveway
- digest
- danielle
- severely
- confident
- idaho
- searched
- appointment
- givers
- pappasito
- dillard
- expertise
- tasty
- publisher
- reruns
- soaps
- repaired
- theatre
- cedar
- mainstream
- refer
- tina
- secure
- rockets
- loo
- contacts
- carpooling
- appalachian
- adventurous
- hostages
- fatal
- patients
- '2'
- sunfish
- donated
- shepherds
- joey
- treats
- researcher
- unnecessary
- stucco
- payroll
- scan
- conductors
- versed
- midway
- beard
- princess
- naked
- custom
- mount
- marshmallows
- mommy
- committee
- allegedly
- tap
- woodstock
- routinely
- rod
- tuesdays
- patterned
- czar
- donald
- booked
- intent
- granddaughter
- chips
- sedan
- discounts
- inn
- dent
- crib
- deliver
- schutzhund
- alsatian
- refused
- nola
- grapes
- marinated
- maxima
- oahu
- conferences
- newly
- kauai
- maui
- hunters
- concentrated
- bakery
- hay
- sleeve
- niro
- builder
- curtain
- spain
- crust
- intriguing
- reimbursed
- licenses
- physics
- reaches
- donahue
- cruises
- nassau
- olives
- lodge
- grandsons
- acoustics
- waves
- uniforms
- fancier
- mesa
- dalmatians
- soapdish
- mushroom
- milwaukee
- violin
- harpsichord
- rumor
- disneyworld
- thinner
- carolyn
- risque
- saxophone
- jodie
- hopkins
- credibility
- barbies
- motel
- wendy
- broncos
- chico
- troop
- warranties
- picky
- aberdeen
- solicitors
- autumn
- nevada
- marlin
- operations
- exhibit
- shuttle
- wycliffe
- sheltie
- particulates
- colombo
- duties
- burner
- hometown
- permits
- contributions
- astronomical
- attire
- blazer
- critics
- omaha
- disturbs
- politeness
- polite
- presumably
- conscience
- canceled
- respects
- norms
- rang
- solicitations
- gossipy
- obtained
- frequency
- turf
- soliciting
- medications
- chow
- smiling
- leash
- acts
- gin
- dispute
- reactions
- intimidated
- alm
- inundated
- switches
- influenced
- rhythm
- sim
- mus
- jimi
- hendrix
- pitiful
- promise
- simon
- qualities
- achieve
- unexpected
- alw
- loaned
- quota
- holler
- leeway
- pains
- wing
- coordinated
- spelled
- skid
- counsel
- violation
- actu
- modeling
- lyrics
- oldies
- phil
- collins
- criticize
- suggestions
- petting
- farms
- exit
- determination
- preservation
- ted
- teddy
- underclass
- considerable
- watcher
- gathering
- sexually
- justified
- territories
- capita
- carefree
- taxing
- weak
- territorial
- resist
- attempts
- craze
- uni
- subscribed
- tractors
- regulated
- cal
- organic
- weaponry
- tanks
- offender
- cured
- slave
- foul
- flipping
- shades
- acclimated
- squares
- tapped
- jerusalem
- fearful
- interrupt
- interrupted
- erase
- monterey
- jose
- ram
- supplement
- standardized
- overtime
- amazes
- circumstance
- summons
- conservation
- indestructible
- littlest
- missionary
- wrapped
- ellen
- toyotas
- preferences
- rag
- straw
- wallpapering
- hoe
- vo
- tubes
- dulles
- incoming
- eldorado
- coun
- tenure
- evaluation
- assigned
- flatter
- chickens
- curry
- overextended
- compl
- housewife
- simmer
- yarn
- demo
- ensemble
- bas
- transmissions
- frivolous
- sessions
- grind
- ranges
- quits
- disconnected
- substances
- etched
- notion
- redeeming
- grabbing
- scrape
- por
- funniest
- rotted
- harvest
- adaptations
- mining
- incaviglia
- excess
- exhibition
- da
- nightmares
- biscuits
- echoes
- actress
- believable
- drafted
- truman
- snider
- extend
- planet
- packing
- dumpsters
- awakenings
- deniro
- actors
- ser
- garp
- attacked
- ralph
- rapid
- agreements
- forests
- polluters
- penalize
- undergrad
- output
- sensational
- failure
- fattening
- catered
- brownies
- crock
- downy
- delta
- cooled
- duplicate
- clearing
- pheasant
- genuinely
- capability
- shield
- agenda
- coup
- briefly
- context
- governors
- irish
- reserved
- collectors
- ole
- antique
- eights
- irate
- noticing
- solo
- shipped
- dramatic
- grateful
- segments
- updates
- trite
- platter
- inc
- incidences
- estimate
- walter
- cronkite
- mold
- efficiency
- spouses
- widely
- redskins
- lynn
- deaths
- observe
- educators
- nother
- visual
- graded
- objectives
- principals
- passes
- poli
- interaction
- prescribed
- breakthrough
- fake
- fears
- web
- housewives
- awake
- reservations
- suggestion
- genre
- innovative
- umbrella
- annoyed
- myth
- proportion
- generational
- exams
- gung
- essential
- pushers
- cathy
- sassafras
- dye
- barn
- outlets
- hollering
- dents
- scratches
- layers
- swiss
- cauliflower
- trays
- pans
- boiling
- vanilla
- custard
- unsweetened
- spoon
- freons
- officials
- disaster
- contributor
- analyzing
- respiratory
- powered
- desired
- trainer
- butt
- psychological
- majors
- staggering
- hamilton
- tracy
- protesting
- prejudices
- dale
- willie
- summoned
- questionnaire
- skipped
- bail
- hebert
- mangione
- breeze
- fairer
- regulations
- seriousness
- darkness
- remem
- judith
- dedicate
- owes
- domino
- insured
- backing
- risks
- devalued
- magnitude
- taped
- breakdown
- beep
- murderers
- murderer
- insanity
- slap
- wrist
- merry
- reinstated
- atrocities
- prayer
- premature
- pushes
- offend
- ridiculously
- bind
- identity
- bombed
- keepers
- deducted
- offset
- owing
- giveaway
- immigrants
- seeking
- insects
- daffodils
- bud
- dandelions
- plagued
- tiller
- trie
- plum
- fescue
- dries
- greenbelt
- cracks
- smokey
- megahertz
- samna
- proficient
- poison
- reused
- mash
- heights
- lone
- vicksburg
- handful
- futuristic
- patrick
- foggiest
- soldier
- buckets
- tot
- immigrate
- render
- fab
- principles
- payoff
- incinerators
- smelled
- ozarks
- disappeared
- tad
- tiers
- glance
- enlightening
- nashville
- fellows
- communicated
- catalog
- insight
- spoke
- flounder
- padre
- aransas
- dingy
- marriages
- becky
- squeezed
- triple
- caribbean
- bees
- lilac
- overhead
- static
- lumber
- juan
- irresponsible
- bold
- carmel
- smarts
- surf
- snappers
- snapper
- described
- aetna
- medi
- irving
- provided
- wells
- romania
- resort
- affords
- printing
- seminar
- thaw
- payoffs
- persuade
- judeo
- litigious
- opponent
- underdog
- equate
- fred
- divided
- separately
- turnover
- descent
- filet
- sole
- jerk
- therapy
- companions
- dresser
- explained
- hush
- agrees
- aff
- drama
- at&t
- modest
- bef
- prep
- vocational
- col
- inevitable
- atomic
- disadvantages
- distracted
- measurement
- arrogant
- clientele
- jelly
- biting
- acceptance
- fir
- overdue
- optima
- suckers
- honored
- chevrolet
- taurus
- recreational
- campers
- shines
- holly
- mattresses
- elastic
- hectic
- volunteered
- heartbreaking
- bargaining
- forgive
- adamant
- moderates
- egypt
- muslims
- palestinians
- poem
- naps
- demonstrations
- restless
- underlying
- dissatisfied
- proposing
- upbringing
- outlook
- quilting
- amish
- acreage
- eyed
- motivates
- vitamin
- drilled
- extensions
- quantities
- carson
- doses
- experimented
- chlorinated
- rode
- nationalities
- exam
- memorize
- readers
- scales
- grain
- matching
- explains
- semigloss
- marks
- experiencing
- upbeat
- connections
- dah
- seated
- alley
- uncertainty
- hoot
- itemize
- processors
- portable
- hewlett
- rival
- rugged
- decks
- printers
- obsolete
- quitting
- approximately
- martin
- achieved
- tact
- disappointment
- trusting
- corrected
- opted
- perjured
- barred
- script
- ironic
- witnessed
- answered
- dependents
- mobility
- preventative
- lung
- carrier
- filed
- pissed
- offensive
- opinionated
- textbooks
- forbid
- advertisement
- cordless
- porcelain
- sandy
- tracks
- amateur
- sings
- contraceptives
- luxuries
- continually
- perennials
- arriving
- bows
- ribbons
- designs
- bunny
- ink
- canvas
- crewel
- decorations
- victorian
- stiffen
- uncommon
- compensate
- typed
- correcting
- frustrations
- acted
- rumors
- lebanese
- newsmen
- chemistry
- tw
- literacy
- jackson
- macho
- hint
- cer
- cutbacks
- slogan
- preserving
- trigger
- greenhouse
- plattsburgh
- digital
- sane
- boost
- vacationing
- stationed
- slope
- attach
- starving
- distant
- mideast
- bureaucratic
- bearing
- nightline
- eng
- centuries
- decking
- crawling
- buds
- vine
- chops
- guest
- sucks
- tails
- '''oeuvres'
- cooks
- elegant
- crumbs
- crunchy
- bouillon
- 20/20
- cord
- irritated
- luggage
- climates
- richer
- civilized
- israeli
- jazzercise
- ego
- exer
- leaned
- firearm
- firearms
- twirling
- edited
- dribble
- accidental
- resale
- trading
- strangely
- cutlass
- semesters
- recipients
- recipient
- pathetic
- import
- partnership
- ambition
- disciplined
- prenatal
- peru
- thir
- filters
- tourists
- canadians
- panamanians
- initiate
- concentrating
- cellular
- awkward
- aw
- sanitation
- kuwaitis
- accomplishment
- defend
- amy
- sunshine
- hurricane
- flood
- muggy
- royals
- pitchers
- nat
- indicator
- lineup
- knives
- publishing
- laptop
- search
- significance
- chains
- jonathan
- petunias
- blooms
- stitches
- fruits
- righ
- opportune
- tang
- inspiring
- incomes
- ferraro
- isaiah
- alma
- mater
- dominant
- greed
- hud
- pit
- bounced
- installation
- stinking
- forgets
- morally
- millionaire
- observer
- restrict
- ancestors
- kitchenette
- neatest
- miniskirts
- grandmothers
- feminine
- marching
- bizarre
- overboard
- gu
- neon
- tints
- condominiums
- walt
- crummy
- flake
- woodwork
- widespread
- worldwide
- bow
- contrast
- vocal
- removing
- passive
- colonies
- bury
- presence
- quietly
- whichever
- vacant
- equity
- litters
- fin
- aquarium
- commands
- anticipate
- resulted
- ranches
- repentance
- mas
- olympic
- wicked
- climbed
- stretched
- explaining
- wayside
- combinations
- carpets
- str
- tickled
- tinted
- carmakers
- sporty
- miata
- authentic
- demands
- parkway
- gabriel
- shannon
- patriot
- mansion
- alan
- blessing
- catnip
- bombay
- himmy
- champion
- gloves
- devon
- curly
- mice
- associations
- haired
- qualifications
- attracted
- irritating
- cops
- irks
- ron
- relation
- germantown
- hondas
- skins
- errands
- pigs
- substituting
- spoil
- butts
- experts
- markets
- hong
- kong
- tens
- conflicts
- bangladesh
- prevention
- barrels
- lily
- humongous
- azaleas
- fielder
- cubs
- pri
- aft
- kinder
- callers
- capone
- arsenio
- flatliners
- scheduling
- threads
- bedspread
- lobby
- mckinney
- spaced
- ethical
- expenditures
- recovery
- sitters
- reader
- authors
- scraping
- backlash
- estes
- sensitive
- taxpayers
- fisherman
- soul
- lures
- hea
- propose
- reinforcement
- exempt
- pendulum
- applies
- flea
- skilled
- petty
- brochures
- bussed
- african
- glen
- godfather
- sooners
- hump
- summit
- strengthen
- meaningful
- steamer
- sprinkle
- skillet
- teflon
- passion
- increasingly
- privileges
- constitutional
- thousandths
- motorcycles
- eighths
- annoys
- horizon
- tooling
- essence
- decimal
- inherited
- fifths
- sweatshirts
- blouse
- programmer
- fashions
- taiwan
- keyboard
- unpopular
- plumber
- sucker
- transporting
- indifferent
- shallow
- undo
- seeming
- kilograms
- dates
- propaganda
- confidently
- badge
- clipper
- steelers
- temperament
- scoring
- warren
- proving
- arthritis
- revenue
- scheme
- os
- wholeheartedly
- unknown
- capacity
- noodles
- instincts
- lecture
- stanford
- unlike
- academics
- cannon
- instinct
- stereotypical
- mac
- firepower
- mug
- antenna
- denton
- psych
- hamsters
- smelling
- expenditure
- dec
- diploma
- radioactive
- packaging
- detect
- stream
- particles
- cattle
- creeks
- alaskan
- roam
- booster
- contagious
- scientist
- wednesdays
- shopper
- species
- tribes
- underpaid
- ambience
- texture
- enthralled
- mel
- presidents
- consultants
- persons
- sweaty
- speaker
- subsidy
- lies
- ano
- offenses
- housekeeper
- hottest
- firewheel
- salisbury
- hams
- locking
- prosecuting
- gettysburg
- arena
- openness
- duplex
- fords
- carburetor
- cap
- notch
- overlap
- dash
- vegetarians
- cleanliness
- vegan
- bodies
- utilize
- coo
- hens
- ballpark
- kicking
- getaway
- des
- vitelle
- a&m
- oriental
- yellowstone
- lion
- rio
- grande
- marble
- jealous
- ruins
- objecting
- fireman
- malicious
- compensation
- executing
- falsely
- statistic
- meanwhile
- storing
- internship
- cooper
- clinic
- cardiovascular
- rotate
- picturesque
- biggie
- killeen
- purebred
- virus
- affection
- caravan
- storage
- libber
- heated
- shrubbery
- supportive
- unacceptable
- appalled
- reimburse
- explorer
- middlekauff
- stiffer
- disneyland
- amusement
- solely
- lafayette
- allies
- liars
- masses
- majored
- discriminated
- valid
- lonely
- smile
- consists
- lisa
- floods
- historian
- societies
- eater
- rewiring
- praised
- openly
- logically
- nest
- pap
- supporter
- runner
- moth
- devastate
- mediocre
- excel
- insist
- halloween
- toning
- dramas
- shakespeare
- multimillionaire
- supervise
- imports
- inferior
- wallet
- dwell
- po
- iguana
- br
- twentieth
- assertive
- chewing
- freelance
- reputable
- avenues
- smoothly
- avenue
- classify
- spices
- tort
- riots
- methods
- textbook
- sprayed
- wiring
- busting
- minimal
- youngster
- manner
- fringe
- beeper
- pill
- spraying
- heavens
- splitting
- maturity
- cues
- nineteenth
- velcro
- cole
- codependency
- losses
- worlds
- representation
- roller
- maternal
- franchise
- bones
- quickie
- resorts
- inept
- tossed
- superior
- enthusiastic
- stripper
- eth
- shotguns
- vital
- mutual
- laura
- lotion
- accumulate
- dime
- unfinished
- toned
- treatments
- rust
- instruction
- productivity
- wherewithal
- indigent
- employ
- medicaid
- desperately
- equipped
- alto
- jerker
- christopher
- reeves
- climb
- mastercards
- beaver
- champions
- pines
- berries
- dutch
- shou
- cathedral
- constructed
- rainfall
- chased
- tossing
- peonies
- hardy
- divorces
- drank
- tan
- sunburn
- interfere
- fo
- custody
- bottoms
- guidance
- flew
- jar
- eisenhower
- bitter
- motivational
- presidency
- leaps
- noriega
- tunnel
- anger
- roger
- mis
- universe
- bargained
- interviewing
- potluck
- trump
- hyacinths
- purply
- mugged
- paroling
- int
- avon
- spectator
- deeply
- amou
- crepe
- pile
- toll
- dependable
- cavalier
- squish
- drinks
- census
- pell
- vienna
- waitresses
- ultra
- regency
- progressing
- retrievers
- prompt
- brisket
- reliability
- graveyard
- submit
- reception
- watercolor
- jan
- shanghai
- effected
- micro
- satisfying
- preston
- broiled
- violated
- appealed
- martha
- melodies
- speaks
- squad
- cutback
- texasville
- breathe
- homemakers
- dreyfuss
- spit
- presumed
- cra
- coordination
- irons
- perry
- stepmother
- ambulance
- deteriorated
- bunk
- flan
- vinegar
- pies
- happiest
- wheeling
- geriatric
- cockapoo
- rabbits
- ignored
- earnings
- pencil
- taller
- glorified
- sch
- eyre
- sung
- madam
- butterfly
- puccini
- canoeing
- receptive
- jackie
- gymnastics
- im
- steadily
- ronald
- brownwood
- temple
- substantial
- les
- broadway
- orthodontic
- verge
- orthopedic
- silverton
- drafter
- drawings
- unbiased
- equals
- secretarial
- overturned
- thelma
- louise
- tacky
- chipped
- sledding
- ambulatory
- reluctantly
- adequately
- cheryl
- hearty
- skim
- thai
- lunches
- molestation
- releasing
- sketch
- subscriptions
- upright
- paddle
- appliance
- tops
- pant
- gail
- centralized
- claus
- earns
- coit
- orchestras
- breasts
- chill
- punk
- '101'
- rebate
- perkins
- fluffy
- parker
- coppell
- bleeding
- pittosporum
- thumper
- carney
- trailers
- eager
- signature
- whoops
- discovery
- macaroni
- golfing
- superbowl
- tease
- includes
- desperate
- entitled
- dill
- suing
- semiautomatic
- cuddle
- legislate
- hubbard
- screams
- competitiveness
- mechanically
- jesuit
- duh
- haiti
- constituents
- ordering
- striped
- bonham
- donna
- du
- nist
- sheet
- sergeant
- rebuilt
- spy
- thorough
- fame
- hydrocarbons
- nitrogen
- ville
- manufacturers
- mats
- algebra
- glossy
- pathology
- towncar
- missions
- mat
- gut
- precaution
- kenosha
- pianos
- commissioners
- exemptions
- daytona
- holder
- gloss
- exploring
- hatchback
- abuses
- royalty
- rehearsals
- meg
- boise
- barbie
- radial
- lathe
- distributor
- parakeets
- chimney
- telecom
- bran
- piedmont
- howse
- duncanville
- admitted
- warriors
- marketplace
- dunn
- bradstreet
- vivaldi
- boutique
- decorative
- volume
- honeywell
- quicken
- strengthened
- quantity
- hinge
- cumbersome
- qua
- transport
- makings
- seal
- entitle
- opacity
- abouts
- forum
- ductwork
- shave
- interchange
- ber
- scruffy
- critic
- trivia
- sharon
- invitation
- astounded
- effectiveness
- insulted
- conspiracy
- paranoia
- surmise
- latches
- invading
- knocking
- ritual
- introducing
- click
- occurrences
- summed
- absenteeism
- errand
- discrimination
- improving
- uncertain
- suspicious
- detectors
- hammer
- royalties
- hideous
- militant
- objections
- absurd
- frampton
- performer
- eclectic
- listener
- ravi
- shankar
- spreadsheet
- dedication
- mardi
- gras
- straps
- convincing
- carl
- casually
- horrifying
- litigation
- retention
- dusty
- regulars
- texteller
- stripe
- tipped
- pastel
- pallet
- patent
- spin
- coul
- southbend
- variable
- intended
- workplace
- inputs
- toured
- reich
- genesis
- bottomed
- shoul
- devoted
- detriment
- manipulating
- softly
- alleged
- accuse
- exploiting
- cuba
- starve
- hun
- ashamed
- connery
- dwarf
- favors
- freer
- imposed
- demanded
- natives
- representative
- undoubtedly
- abou
- melting
- clinging
- quebec
- mountaineering
- implies
- fads
- institutes
- newsletter
- orientation
- meditation
- desks
- laborers
- keyed
- enc
- incorporated
- predominant
- intending
- trafficking
- aghast
- frito
- artistic
- kits
- pinks
- kit
- lilly
- greens
- stocking
- selections
- chapel
- percentile
- stabilized
- illegally
- errors
- nasa
- quaint
- mem
- supplemental
- applaud
- competitors
- generous
- repayment
- celebrated
- negatives
- ind
- privately
- brutal
- hoped
- slim
- administrating
- latter
- nickname
- customs
- defeating
- gadgets
- bluegrass
- pizzas
- anderson
- predominately
- standings
- moore
- pennant
- pirates
- appraised
- overpriced
- longevity
- satisfy
- resell
- editing
- availability
- prohibit
- janitors
- endurance
- mutually
- supervisory
- quotas
- swampers
- laborer
- happ
- mushrooms
- consisted
- terr
- siren
- alarms
- jamaican
- knitted
- granny
- moderate
- carpentry
- candle
- contributors
- ai
- comply
- helicopter
- sting
- nitrous
- chemist
- unseasonable
- ust
- nostalgic
- calligraphy
- tidbits
- mcgyver
- inventing
- baling
- washers
- junkyard
- portraying
- invented
- attempting
- innings
- ke
- weaned
- meows
- docile
- traumatic
- secretive
- daisy
- hype
- mimic
- predicting
- fictional
- swamp
- margin
- teasing
- crosses
- dang
- dumpster
- openings
- recycles
- imaginable
- folded
- straightened
- reminding
- settlement
- beaten
- ramifications
- margaret
- thatcher
- gandhi
- volcanos
- rhode
- residue
- pitted
- comeback
- nader
- volcano
- indicates
- previously
- regulatory
- arrows
- zoom
- calculate
- yugo
- pricing
- dos
- pastor
- sauces
- coleman
- sacramento
- backpacked
- undeveloped
- opposition
- negotiate
- factions
- refreshing
- reveal
- occupy
- responding
- tunes
- jigs
- instrumental
- mickey
- wills
- nickelodeon
- fl
- shenandoah
- flimsy
- programmers
- mentioning
- irritates
- aspen
- contel
- demonstrated
- surrogacy
- crass
- nurturing
- donation
- auction
- shelters
- bedridden
- gals
- '''am'
- factual
- nightly
- chancellor
- gaps
- newscaster
- excerpts
- rises
- choi
- assisted
- deteriorate
- sponsor
- caretakers
- supplemented
- possessions
- signing
- sectioned
- zones
- vikings
- hart
- educator
- beg
- initiative
- administrations
- maj
- sabbatical
- minuscule
- referring
- hourly
- gardened
- remotely
- shack
- broaden
- ivy
- couches
- careless
- anybo
- oreo
- twisted
- actresses
- kenny
- columbus
- disrupted
- mistrial
- chooses
- confession
- placing
- inception
- insure
- burglars
- jacques
- lewis
- chagrin
- ame
- preferably
- loudly
- epileptic
- aftermath
- snob
- broadened
- expectations
- swore
- amphetamines
- endangering
- hassles
- splotches
- scratching
- dread
- hardwood
- toothbrush
- proclaimed
- nicks
- breads
- chunks
- quart
- slender
- blender
- thickens
- thickened
- thicken
- cooling
- leaded
- endorse
- caprice
- converters
- arguable
- lit
- meteorological
- circulation
- lungs
- focal
- volkswagen
- pinned
- fulfilling
- obligations
- belonging
- wealthier
- adulthood
- functioning
- monster
- wandering
- ropes
- appreciation
- confess
- tolerances
- pete
- arnett
- sporadically
- impartial
- diversity
- affiliate
- cutesy
- beeped
- moody
- wonderfully
- vowed
- booklets
- recruit
- courthouse
- strangled
- testify
- neurotic
- crooked
- bracelet
- instructed
- whereabouts
- bracket
- koontz
- bachman
- letterman
- hologram
- pitches
- speculative
- deregulation
- teapot
- vaguely
- hoover
- pennies
- nickels
- investors
- holders
- asphalt
- charts
- kathy
- walkman
- simmons
- rapists
- manson
- repealed
- thousandth
- pac
- kingdoms
- ruler
- scriptural
- elses
- discernment
- walters
- wiley
- communists
- assaulted
- compensated
- medicines
- rude
- returns
- indebted
- deli
- strings
- crabgrass
- slimy
- tempered
- standby
- surgeon
- pruning
- undertaking
- irrigation
- leafy
- remain
- flowering
- chick
- lem
- humus
- barbe
- stoves
- flame
- grease
- tortillas
- turkeys
- smoked
- hickories
- spreadsheets
- specs
- montana
- hazards
- crash
- burlap
- coupon
- subtract
- compost
- branches
- heed
- staunch
- withstand
- buffers
- scuds
- provinces
- merely
- demilitarize
- confusing
- sucked
- incomprehensible
- disarm
- socialism
- boris
- nationality
- nut
- sabine
- consequence
- wade
- camps
- kingsley
- centennial
- canton
- dinky
- proclamation
- mason
- dixon
- seller
- avalon
- chilling
- wits
- characteristics
- tuberculosis
- wafer
- linear
- mismanaged
- outraged
- breyiana
- demos
- boggles
- contaminated
- refineries
- desires
- delaware
- caves
- fading
- anythi
- pantry
- crushers
- hallways
- casualties
- magnified
- tones
- questionable
- andy
- creatures
- extends
- fork
- spills
- degrading
- spark
- probab
- hints
- stereotypes
- romanticize
- thugs
- beaumont
- predictions
- barring
- substantially
- separates
- zealous
- farmhouse
- pumpkins
- planter
- creosote
- landlord
- brushing
- rose
- cantaloupes
- cubic
- wary
- youths
- hostilities
- judging
- burlington
- confronted
- slit
- divisions
- rash
- monterrey
- objective
- hamper
- grouper
- oysters
- tiring
- canals
- grabs
- grabbed
- dogfish
- antibiotics
- commuting
- deprived
- clinics
- infections
- enrolled
- rigid
- fined
- mills
- deceiving
- surroundings
- paths
- motive
- motivations
- upwards
- bundled
- doubling
- financed
- integrity
- benefitted
- perceive
- unfairness
- wiser
- segment
- vengeful
- pitifully
- massively
- respon
- represents
- speeches
- slapped
- inflammatory
- atrocious
- blitz
- zoning
- wholesaler
- turnovers
- argentine
- microwaves
- waxed
- flakes
- purplish
- cubes
- sherry
- argentinean
- sausages
- breaded
- publications
- thesis
- disgruntled
- cries
- replaces
- belongings
- roaches
- overhaul
- uniform
- discretionary
- emotion
- hence
- fines
- documentary
- dealings
- declaring
- dire
- squirrelly
- miscellaneous
- nd
- deposited
- scurried
- skaggs
- endangerment
- assumes
- endanger
- endangered
- accidentally
- suspicion
- continents
- ingrained
- confuse
- trans
- centimeter
- measurements
- peanut
- kindercares
- alphabet
- scold
- inappropriate
- trauma
- weath
- predictable
- inversions
- threesome
- novice
- rut
- yo
- delightful
- ferrari
- resembled
- satellite
- bathed
- jacuzzi
- wings
- fastest
- ant
- kitchens
- dented
- refresher
- kosher
- knishes
- mea
- unstable
- relevant
- americanized
- hugged
- scam
- apologize
- hug
- shiite
- poss
- wheth
- countrymen
- wom
- implementing
- decreasing
- finland
- selfishness
- benefited
- mil
- flunk
- canning
- zinc
- processed
- bogged
- distributed
- moderately
- companion
- organs
- sally
- petite
- isometrics
- ingestation
- plight
- surrounded
- directing
- coed
- subbing
- calculator
- behaved
- versatile
- applicable
- depot
- spackling
- creamy
- similarly
- formative
- contacting
- aptitude
- sounding
- upkeep
- cellar
- rents
- complexes
- nanny
- prefabs
- enou
- scoot
- emulate
- guru
- auditors
- packard
- matrix
- transparencies
- outdated
- advisor
- panhandle
- piling
- shredded
- pessimism
- racism
- destined
- fronts
- hippie
- texaco
- pennzoil
- miscarriage
- rational
- testimony
- testifying
- paralegal
- priors
- aggravate
- enlightened
- niceties
- flop
- horrified
- absence
- taxation
- flabbergasted
- gracious
- flops
- certificate
- explanation
- univer
- dustbuster
- plated
- bowls
- patty
- womb
- soothing
- repetitious
- wilder
- eleventh
- painless
- necessities
- harm
- magnolias
- raking
- underground
- grasses
- blend
- macneil
- jennings
- informative
- bureaus
- comics
- mourning
- lace
- weave
- lacy
- draping
- batting
- anticipating
- splurge
- deci
- typist
- damme
- bland
- widow
- dummies
- caan
- rescuers
- submarine
- studio
- survived
- einstein
- stepson
- literate
- honors
- lifesaver
- framing
- hindsight
- incidents
- outsiders
- jesse
- complains
- threatens
- entrepreneur
- achievement
- clue
- sights
- transplant
- glamorous
- uncontrollable
- constitute
- denial
- champlain
- resume
- technicians
- fad
- timid
- macon
- hous
- espec
- contacted
- liquor
- repairman
- popped
- radishes
- turnips
- loam
- intensive
- attachment
- pickles
- unfairly
- seasonings
- paralyzed
- spinal
- discrete
- seatbelt
- arrow
- reuse
- collects
- dorms
- perimeter
- orthopedist
- freak
- diane
- diver
- limping
- tights
- casts
- nautilus
- cushion
- singled
- tighter
- lonesome
- naw
- everyb
- imitate
- oscars
- booth
- demographic
- judgments
- texins
- crest
- demonstrator
- reps
- partying
- tracking
- perpetuate
- manpower
- coincide
- cl
- soreness
- nighttime
- evacuated
- winnebago
- benefiting
- incidence
- abundance
- creature
- aim
- shah
- felons
- unseasonably
- comparisons
- waning
- surviving
- diplomacy
- eliminating
- processes
- righteous
- filtered
- launch
- unmet
- strife
- ray
- blatant
- fax
- proactive
- buil
- treaty
- bully
- repay
- swallow
- evolve
- tug
- skewed
- intersection
- trampoline
- downs
- cy
- swept
- streak
- averages
- catches
- tigers
- strategy
- bayless
- advised
- brunt
- rooted
- dseg
- documentation
- floppy
- disks
- hus
- touchy
- linda
- rossa
- teen
- boo
- livingston
- seagull
- wro
- midland
- odessa
- practiced
- fur
- contra
- haunt
- resentment
- laughable
- arises
- browns
- topping
- toast
- mustard
- cucumber
- bonanza
- meta
- rearing
- robinson
- cylinders
- akeem
- dominate
- reselling
- jap
- wichita
- galen
- amrein
- snacks
- elephant
- transferring
- fare
- veterinarians
- wonders
- developer
- breathed
- limiting
- cookouts
- individuality
- frills
- fluctuates
- tastefully
- smashed
- organizing
- dare
- reform
- bri
- gate
- felonies
- ima
- racist
- gripe
- gar
- width
- spreader
- lightly
- freshly
- arthur
- waterfront
- movers
- frames
- enamel
- spun
- descendants
- favorable
- intervening
- advancing
- frightened
- revolting
- upsetting
- acquired
- creeps
- kitten
- teacup
- frustrates
- cheaply
- brunch
- crook
- mock
- primaries
- workday
- chows
- guinea
- harming
- bellies
- rubbed
- terrified
- louder
- lid
- collie
- mechanism
- inspected
- cheated
- fingernails
- uninformed
- disinterested
- honduran
- rica
- tourism
- enabled
- policies
- engrossed
- virgo
- elder
- ricans
- rican
- loaner
- revival
- christianity
- revered
- pyramid
- birthdays
- disciplinarian
- nutri
- stairs
- elevator
- powerhouse
- alway
- rehearse
- patriots
- photo
- guards
- congested
- incarcerating
- foreground
- snatched
- astro
- minivan
- subaru
- ticking
- rack
- upgrade
- retail
- campgrounds
- bearable
- dipper
- addict
- sportsmanship
- describes
- strasbourg
- missile
- bounce
- goll
- humiliating
- chauffeur
- valet
- condemning
- airs
- tithe
- blessings
- foley
- croak
- critters
- turkish
- himalayan
- patches
- paws
- lanky
- hillside
- communicating
- swam
- supervision
- stephanie
- keel
- tuba
- nerves
- turntable
- dual
- processor
- edit
- layout
- preventing
- overloaded
- mentions
- sevren
- montgomery
- piddly
- compressor
- prelude
- impractical
- wharf
- colts
- seahawks
- winners
- champs
- expansion
- attendance
- kites
- strangers
- tasting
- arrangement
- rewards
- interfering
- inhumane
- overtaken
- underwater
- intention
- philippines
- tag
- quarterly
- incentives
- justification
- sorting
- insurmountable
- forestry
- trails
- emphasized
- obtain
- cubicles
- advent
- op
- accurately
- orchids
- dodgers
- brat
- petrified
- circular
- terrifies
- niece
- laughs
- exc
- negate
- rejected
- lawlessness
- founded
- crippled
- perpetrators
- breath
- intake
- valleys
- pencils
- abreast
- ethics
- scandalous
- churchill
- dickens
- withstood
- mindless
- pi
- sincerely
- whew
- spreading
- petersburg
- finest
- southwestern
- cincinnati
- roaring
- perpetual
- lhasa
- scuba
- pampered
- dinosaur
- fires
- ventured
- dooming
- plunked
- cooperated
- adjusting
- decades
- valued
- downstream
- lure
- bumble
- wasp
- squirrels
- popularity
- isolation
- disciplining
- spank
- isolate
- handicraft
- dough
- ornaments
- empties
- posted
- ruining
- kurdish
- roseanne
- matthew
- brando
- levinson
- follower
- marino
- keystone
- cunningham
- tactics
- granada
- cuban
- salinas
- terrorist
- buried
- hyundee
- helicopters
- stepper
- pillow
- staring
- aqua
- blisters
- rubber
- trashed
- dwindling
- cooker
- cherry
- blackening
- gumbo
- portuguese
- ribs
- ya
- jumbo
- initiatives
- revolt
- obliged
- argues
- constrained
- fools
- indoctrinated
- millimeters
- fractions
- fittings
- wrench
- header
- screws
- progressively
- pullover
- smokes
- sw
- othe
- designer
- foolish
- puzzled
- warned
- cab
- tractor
- sixes
- diesels
- injector
- asylum
- governmental
- antiwar
- translated
- soapbox
- usable
- antimetric
- sweden
- midnight
- plains
- collapsible
- helper
- motivator
- huff
- phenomena
- temper
- miami
- cyclical
- oilers
- stallworth
- swan
- oppose
- decisive
- wrath
- constituency
- nuggets
- meatless
- ingredients
- hostess
- soybeans
- proteins
- belton
- pennsyl
- lsats
- als
- sev
- abcs
- especiall
- affordable
- carpools
- symbolic
- scenario
- gunfire
- outlaw
- abiding
- restrictive
- concealed
- sp
- deterrence
- weighed
- objection
- misusing
- impose
- crackdown
- dawn
- liners
- gerbils
- mutts
- counted
- eel
- tiniest
- debated
- symptom
- furnish
- nonsense
- handicrafts
- awarding
- topsy
- turvy
- worldly
- sparked
- reg
- flours
- dublin
- bulldozers
- overflow
- posters
- chained
- tabby
- rampant
- girlfriends
- inadequate
- '8088'
- monitors
- respectable
- secondly
- binary
- calibrated
- qualification
- brackets
- rescue
- passport
- mou
- alcoholics
- returning
- laurie
- clout
- grilled
- buffets
- brunches
- woodland
- colo
- prix
- seagal
- starred
- premise
- preoccupation
- belly
- millimeter
- darndest
- assembled
- hauled
- fertilizers
- prohibited
- facets
- denied
- loaf
- dawned
- boulders
- marbles
- duck
- shish
- odor
- boneless
- scrambled
- armenian
- consume
- punishing
- devil
- suffered
- agreeing
- enforcing
- burglaries
- rationalize
- busiest
- airy
- wires
- compartment
- soldered
- restrain
- overeat
- pastas
- minerals
- accepts
- supplements
- toledo
- oriole
- steeper
- moines
- bleachers
- collapsed
- herbs
- sill
- appleseed
- pecans
- wes
- enterprise
- bulletin
- electrician
- terminology
- gaithersburg
- valedictorian
- pushy
- seemingly
- rockies
- carries
- yells
- breezed
- solicit
- coworkers
- alright
- humans
- bust
- holdup
- underst
- convicting
- restoring
- ankles
- landscaped
- sal
- continuance
- pensions
- allergy
- baxter
- ceo
- homa
- rallies
- anaerobic
- improves
- ls
- adverse
- hunk
- pulse
- resting
- mirrored
- fireplace
- tucked
- condos
- abandon
- dennis
- distributing
- refuses
- glove
- pricey
- passenger
- lowered
- questioning
- dummy
- mans
- occupations
- norma
- techniques
- karen
- spotted
- incompetent
- exper
- priest
- kindergartners
- conform
- creativity
- manners
- mannerisms
- establishment
- norfork
- farthest
- charleston
- hairs
- follicles
- rehab
- fro
- weddings
- graduation
- med
- saudis
- thieves
- chaos
- promotion
- unconditional
- offspring
- quotes
- dumps
- bluebonnets
- absorb
- es
- flash
- medina
- salty
- beirut
- penalized
- lining
- faucets
- repainting
- arrange
- tripping
- ingest
- ingesting
- arteries
- reacts
- framers
- framed
- viable
- supports
- viewpoints
- delay
- nevertheless
- allocation
- infrastructure
- expended
- restock
- twen
- spider
- marigolds
- impatiens
- replacement
- teased
- bacillus
- gypsy
- toddlers
- recommendations
- skits
- attachments
- slacked
- contributed
- bombarded
- mrs
- cleaver
- senses
- romantic
- illiterate
- paced
- ridged
- totaled
- hesitate
- technologies
- stacked
- renters
- counties
- citibank
- scams
- swayze
- clyde
- drummer
- scratched
- demographics
- companionship
- dependency
- everyth
- prospective
- pairs
- unsupervised
- morton
- lu
- offended
- drinker
- measures
- lions
- arapaho
- drool
- yuppie
- cheat
- reinforced
- fashion
- defrosting
- pilaf
- mixing
- mushy
- korean
- auxiliary
- curriculums
- kathleen
- accordingly
- residency
- sportswise
- blitzer
- fanny
- treadmills
- cinema
- dripping
- shorted
- enlarge
- valves
- shingle
- fixtures
- detached
- stigma
- pioneers
- households
- beepers
- bulky
- vibrates
- hepatitis
- freed
- expectation
- boyfriends
- homeowners
- existence
- anguish
- charming
- weathered
- leveled
- wallpapered
- conserving
- diagnosed
- inspiration
- alerted
- swimmers
- extracurricular
- loser
- sats
- barber
- verses
- robber
- dachshunds
- spaniels
- anthropology
- presses
- clerical
- forthcoming
- homecoming
- famil
- familiarized
- virgin
- qui
- divine
- skates
- cot
- shove
- nannies
- objectivity
- digressing
- ordinarily
- weirder
- revolved
- hatchery
- intimate
- calendars
- decoration
- passage
- continuity
- percentages
- cavaliers
- ewing
- highlights
- patience
- bethesda
- beijing
- pooling
- restful
- pends
- dells
- starring
- rage
- terminator
- twists
- treble
- mackerel
- pike
- stung
- fleetwood
- displayed
- freaks
- backs
- buicks
- convertible
- vintage
- setter
- feathers
- conducted
- ethically
- patrol
- kidnapped
- pun
- exceedingly
- albany
- syracuse
- rapist
- investigation
- pamper
- waits
- assistantship
- newlyweds
- hopping
- annually
- journals
- figurines
- sanded
- 4h
- refinish
- hormones
- lip
- fender
- sparingly
- lime
- sands
- upscale
- gum
- rips
- shreds
- sponge
- mate
- averaged
- harvard
- successfully
- approaching
- nutrition
- conductor
- cringe
- mcneil
- criticism
- palo
- columns
- candles
- psycho
- deadly
- uneasy
- robocop
- molly
- savage
- resented
- retrospect
- juggling
- density
- crucial
- oft
- lame
- assaulting
- pleading
- psychiatrist
- psychiatrists
- psychotics
- assaults
- sponsors
- rainier
- snowy
- immune
- tawakoni
- cones
- fearless
- enclosed
- roofs
- sizes
- cei
- furnace
- ambitious
- poking
- fountains
- latitude
- underpass
- hiding
- petals
- slows
- oscar
- durant
- alo
- notorious
- settles
- smoker
- sponsored
- educations
- ele
- approached
- proponent
- thus
- endeavor
- wri
- fingerprints
- slipped
- fingerprinted
- astounding
- intervals
- contracted
- dea
- imm
- soaking
- visitors
- rug
- daddies
- conformist
- revolutionary
- kramer
- celebration
- feeder
- nets
- minnow
- burping
- purina
- parade
- compound
- pursuit
- refuted
- refute
- turnouts
- vi
- relates
- regain
- moats
- staubach
- encountered
- unrealistic
- landon
- portrayed
- josey
- clint
- jot
- baptist
- reflection
- damages
- shortage
- clerks
- doubled
- smallest
- pavilion
- fuses
- alter
- sensing
- bandit
- theatres
- ellison
- activist
- photographs
- hyacinth
- hollies
- spike
- perennial
- gomphrena
- repeating
- minimize
- ornamental
- happiness
- acquire
- congratulations
- simpler
- circles
- wham
- forgiving
- detrimental
- immature
- maple
- myrtles
- screwing
- disguise
- formatting
- paragraph
- voyager
- crank
- pepsi
- mcmahon
- racking
- recharged
- seabrook
- nucleus
- billed
- mints
- adaptation
- crown
- lunchtime
- celebrate
- incident
- shreveport
- limbo
- diaper
- chassis
- bent
- soapies
- bichon
- frise
- personable
- rin
- tervurien
- latchkey
- considerations
- sunroom
- rambler
- sandstone
- beltway
- adored
- surrendering
- cooperate
- allah
- sakes
- stirring
- pineapple
- oatmeal
- casseroles
- bronze
- catherine
- nissans
- escort
- trusted
- insurances
- provider
- postal
- recourse
- invades
- complained
- susceptible
- newhart
- comedians
- contrary
- bart
- simpson
- morocco
- continent
- ripping
- photos
- reef
- melbourne
- squirrel
- agents
- hockey
- christi
- diverted
- pea
- fiasco
- liver
- caution
- expediency
- misplaced
- technicalities
- technicality
- ruffle
- conducive
- sandwiches
- vendors
- pins
- ligaments
- beethoven
- mozart
- softer
- banned
- regime
- liberalization
- civics
- dart
- wasteful
- wounded
- mcmurtry
- trashy
- grou
- grouchy
- projectionist
- subtitles
- intuitive
- footnotes
- footnote
- operator
- lands
- appetizers
- premed
- specialize
- matinee
- cocoon
- alien
- maintained
- sharif
- oddly
- exceed
- incapacitated
- images
- dangerfield
- stacking
- leftovers
- catering
- scooped
- amelia
- anyth
- wolfe
- myths
- haggard
- phonetics
- relearning
- wheelers
- transaction
- checkup
- reserves
- cranky
- measuring
- coating
- cognitive
- jour
- austen
- reviewed
- attracts
- grandchild
- congealed
- soprano
- canoed
- cancun
- bummer
- teenaged
- manhood
- ostracized
- liken
- pear
- daytimes
- ransom
- sightseeing
- gubernatorial
- robb
- receipts
- gambling
- sedentary
- tortilla
- picante
- grated
- jell
- timely
- subjected
- athletics
- bathe
- commercially
- accordion
- miserables
- milkman
- travis
- phantom
- lloyd
- listens
- illnesses
- diligent
- invaluable
- scotland
- jaw
- periodically
- durango
- jeep
- destin
- jetty
- draftsman
- roman
- recognizes
- regarded
- mediation
- crises
- bystander
- awe
- prac
- gannan
- valerie
- addicts
- sayings
- possi
- restrooms
- festival
- alpine
- uneven
- sleds
- knob
- mows
- mulched
- presbyterian
- willingly
- littler
- strategies
- rapport
- walnut
- impersonal
- hack
- cheerful
- emily
- dell
- preschools
- pediatrician
- dane
- tangent
- backfire
- ethiopian
- venison
- fries
- waitress
- waiter
- attentive
- adventuresome
- heyday
- bernie
- dra
- assortment
- piled
- veal
- evident
- unleaded
- ambivalent
- clothe
- rehabilitating
- confessed
- amendment
- xeros
- quartet
- technique
- carols
- mechanisms
- decompose
- murray
- sorted
- dimes
- crusher
- renewed
- prostate
- antigen
- fourths
- smells
- spinner
- baits
- fisherwoman
- imitation
- sticker
- sn
- pantsuit
- pantsuits
- enthusiasm
- begging
- fitting
- harold
- taft
- milder
- gimmicks
- hemorrhaging
- mennonite
- sealer
- premier
- landed
- suites
- invalid
- invalids
- labels
- frugal
- substituted
- legacy
- reside
- partial
- yuck
- balloting
- sibling
- colds
- discontinued
- primitive
- tulips
- hazard
- codes
- zenith
- ques
- slides
- purity
- richie
- bushel
- wines
- napa
- ronnie
- whittle
- satire
- monotonous
- menus
- frankenstein
- blazing
- saddles
- grants
- hitler
- paintings
- specimen
- fussing
- presume
- pollu
- decorate
- kindergartner
- arguably
- cradle
- grave
- fluff
- swings
- queens
- beltline
- thrus
- aerosol
- corny
- fridays
- camry
- elway
- moneys
- exponentially
- crawls
- grieve
- greg
- foresee
- uninsured
- noses
- rudman
- accountability
- proportionally
- gruesome
- couscous
- repercussions
- wimpy
- shortened
- befitting
- nece
- asset
- flushed
- dressy
- slack
- sl
- tro
- bidness
- apiece
- smokeys
- sur
- outlawed
- legislating
- creating
- activated
- steinbeck
- grizzly
- encounters
- doubting
- doug
- ranked
- sierras
- rai
- tempe
- yelling
- explored
- bogey
- burgled
- plop
- pee
- ay
- handyman
- tighten
- loopholes
- withhold
- advantageous
- bueno
- librarian
- coma
- seasick
- minnows
- seas
- fore
- calico
- yaupon
- labrador
- wax
- scalp
- salsa
- hidden
- continuously
- hibiscus
- wetter
- mitsubishi
- '90210'
- nicole
- matlock
- charlene
- beverly
- shred
- pierre
- recognizing
- cinematography
- invasions
- premises
- '911'
- sitcoms
- misbehaving
- faces
- censor
- morality
- jumps
- finite
- infinite
- whining
- panels
- resurfaced
- cimarron
- jeopardizing
- retirees
- ladder
- investigative
- catastrophes
- existed
- halogen
- sulfur
- combustion
- hitch
- moynihan
- skillman
- lynch
- chil
- amnesty
- abstinence
- crayon
- detest
- ph
- allante
- peppy
- saddle
- inca
- dub
- regiment
- twisters
- toe
- prone
- adjustable
- conspired
- premiums
- reasonableness
- parkland
- losers
- witt
- greave
- wins
- dilemma
- reallowed
- implement
- unsmashed
- crazies
- fabricating
- sampling
- steele
- youn
- upsets
- magnetic
- resonance
- sober
- molesting
- boar
- constraints
- betcha
- severity
- entitlements
- reductions
- defaults
- blackman
- manned
- dealerships
- purrs
- feeders
- frontier
- jetsons
- nearest
- trough
- sli
- howatch
- birmingham
- disregard
- darned
- greenery
- tahoe
- skidding
- surveyors
- tracer
- '486'
- measles
- crunch
- burger
- cameroon
- scoutmaster
- sitcom
- seato
- colony
- nato
- disbanded
- arrive
- uncooked
- overdone
- yummy
- bendix
- pontiacs
- hattiesburg
- bir
- boa
- constrictor
- parrot
- overspending
- coughing
- julio
- misuse
- sniff
- milan
- anchoring
- tedious
- stragglers
- tobogganing
- baggy
- reduction
- hewett
- scaffolds
- excessive
- rep
- disappoints
- nairobi
- safari
- wesley
- hospice
- theoretically
- mishap
- electoral
- stew
- hardaway
- dioxide
- vapor
- aye
- pickings
- legitimately
- sails
- bisquick
- lopsided
- boarding
- freezers
- genealogy
- stash
- proliferates
- brokers
- patterson
- subsidized
- amway
- nonpolluting
- bicycles
- bullheads
- nikki
- jig
- stroll
- ogden
- puzzles
- combo
- airless
- scroll
- dolphin
- torpedo
- malamute
- trillion
- ludicrous
- payers
- column
- dumbbells
- controllers
- harrisville
- specialties
- virtue
- accrued
- transfusion
- refund
- pup
- patron
- parenthesis
- earmarked
- greatful
- striper
- senegalese
- perks
- parkinson
- industrialized
- truer
- dispose
- mega
- tonnage
- scrubber
- ammonia
- compounds
- acids
- thickness
- pronto
- finalization
- utmost
- cognizitive
- scarves
- uns
- unseasonal
- sleeves
- sweatpants
- corduroy
- compliments
- skorts
- nominated
- dud
- recurring
- fami
- overreact
- terror
- cohill
- cohi
- drivel
- eldon
- housepainter
- extracts
- overtly
- uncontrolled
- pirated
- ominous
- thief
- westerner
- lunatic
- violate
- socia
- jehovah
- mormons
- intrusive
- solicited
- invasive
- soli
- intruded
- defining
- surmised
- incorrect
- unsolicited
- nonsol
- unconscious
- cli
- sequence
- peddling
- harassment
- generated
- lois
- intimidating
- rver
- greeting
- stake
- mitzi
- yip
- ranging
- soaked
- rhyme
- ruckus
- parallels
- cov
- hooker
- absolu
- phenomenon
- brazilian
- listenable
- elec
- acoustic
- interchangeably
- folk
- arranger
- sitar
- muted
- existing
- tally
- slush
- stocks
- expired
- pleasures
- albridge
- slogans
- outlooks
- haggerty
- spookier
- pecially
- airways
- focusing
- taj
- mahals
- prolongs
- whim
- deserved
- prevents
- mopping
- odds
- unair
- facial
- beards
- skids
- repack
- buttoned
- starched
- suspenders
- reorganization
- cruddy
- reall
- notre
- dame
- explosion
- untypically
- accumulation
- flatlands
- zeppelin
- floyd
- brash
- bump
- bohemian
- rhapsody
- pumped
- siskel
- ebert
- thumbs
- travolta
- quee
- tokens
- divi
- showbiz
- admission
- scyene
- inexpensively
- sao
- paulo
- usefulness
- spheres
- spaniards
- rulers
- conquistadors
- socialistic
- horribly
- dishonor
- defenses
- sabotaged
- peasant
- exploitation
- exerts
- export
- broadcasting
- ruddy
- minist
- wr
- ler
- interpretations
- histories
- copes
- indicate
- resident
- fledged
- barefoot
- pejorative
- unrest
- citizenry
- ignorance
- ult
- constitutionally
- creole
- prohibitions
- strengths
- cuisines
- throes
- reassess
- functionally
- fractiousness
- faddish
- wellness
- biweekly
- dispensed
- distinctions
- dev
- fizzled
- acupuncture
- gestalt
- irony
- cert
- vigorous
- carbohydrates
- kinesiology
- calc
- calculated
- calisthenics
- myerson
- frantic
- astonishing
- mortars
- formulated
- sociopathic
- pronounced
- unfit
- mouthed
- transcribing
- customized
- anne
- glenn
- improvise
- concentrates
- password
- verbal
- rowing
- lution
- rower
- transforms
- markov
- naval
- postgraduate
- civilians
- mainline
- respondent
- unders
- allergist
- smorgasbord
- compensatory
- profile
- bonds
- deducting
- disproportionate
- brutally
- commuted
- delays
- electrocution
- determent
- deter
- dubious
- internally
- organiz
- coordinating
- scandals
- kisha
- knight
- pullman
- exacerbate
- clutches
- pads
- benz
- absorbed
- keyboards
- spaghettis
- lasagnas
- hor
- horseback
- dabbled
- banjo
- druther
- stre
- farts
- polly
- followers
- inspir
- booths
- commutiv
- billboards
- bartman
- simpsons
- debbie
- nigh
- appraisers
- onward
- ease
- folds
- performs
- tenured
- microcomputer
- comprehensive
- rigamarole
- teachable
- specially
- spicier
- tofu
- pistachios
- pistachio
- bumped
- curried
- saute
- gigs
- perse
- ow
- conventions
- slippers
- teller
- alterations
- utilitarian
- knickknacks
- sconces
- jalapeno
- almanac
- concluding
- warms
- shutting
- piloting
- spectacle
- lobbyist
- legislators
- individ
- unbelieving
- justifiable
- nucle
- kilowatt
- washes
- stinging
- swelter
- lively
- eureka
- rentals
- inspires
- glider
- welder
- treks
- '747'
- mindlessly
- pacifier
- reme
- destructed
- milton
- berle
- stepchild
- tumultuous
- regions
- siberia
- oppression
- attentions
- hopely
- catchers
- gladly
- unheard
- babe
- ruth
- thru
- lovingest
- cosmo
- pellet
- tod
- lovey
- dovey
- kneading
- trimming
- bonzo
- poindexter
- felix
- tortoise
- possessive
- bedtime
- rendering
- jessica
- tandy
- warmth
- manhunt
- manhunter
- dysfunction
- slay
- toothpicks
- outwardly
- awfulness
- wonderfulness
- lapses
- telecommunications
- profits
- waivers
- earners
- physicals
- subsist
- lodges
- moss
- footing
- alumi
- defrays
- defray
- unfold
- walmart
- discourages
- catatonic
- discovers
- buzzards
- pal
- imagined
- slaughter
- earthquakes
- robby
- graze
- indira
- observed
- attleboro
- freeways
- jets
- swinging
- kerosene
- eah
- boilerhouse
- powerhouses
- belch
- kodak
- smokestack
- phosphorous
- grenades
- photograph
- overstated
- environmentalists
- claiming
- automakers
- soot
- particulate
- meter
- tailpipe
- devise
- mufflers
- resumes
- graph
- erased
- simplified
- anduille
- doughnuts
- cobbler
- fudge
- fiber
- sloughs
- rafting
- potty
- packs
- noth
- outfitter
- headwaters
- damper
- hostage
- rhetoric
- rolm
- engi
- sheer
- estimated
- doctrine
- turks
- cheering
- reconcile
- divisive
- unprecedented
- authorize
- frontal
- sununu
- commend
- scud
- lefty
- frizzell
- galway
- harpist
- bagpipes
- whistle
- violins
- instrumentals
- rooney
- dancer
- entertainer
- eddy
- smiley
- burnette
- raspy
- playboys
- ernest
- tubbs
- rector
- scratchy
- opry
- stadler
- autry
- anymo
- vegetate
- fri
- relly
- complication
- eith
- demolishing
- stereos
- annoy
- troubleshooting
- initials
- conversed
- sexes
- consist
- childbearing
- storly
- var
- biological
- urges
- encumbered
- heirs
- characterized
- acquaintances
- terming
- emerging
- marathon
- idear
- discrepancies
- overview
- encapsulated
- introductory
- glamour
- updated
- airspace
- huntley
- analyst
- paragraphs
- noontime
- dose
- spee
- fastened
- wander
- aides
- debilitated
- arboretum
- maid
- tackles
- spinning
- irvin
- overwork
- reinjuring
- scab
- revamped
- metcalf
- smuggled
- investigated
- rehi
- renamed
- psychologists
- ration
- modalities
- learner
- kinesthetic
- gladewater
- baccalaureate
- unle
- commentator
- golsome
- superintendent
- adminis
- scarce
- overachievers
- overachiever
- beeps
- expre
- phoe
- easiest
- horizons
- hurtling
- brothers'
- clips
- madly
- fetish
- luring
- costuming
- remarked
- thriller
- distinguished
- terrorized
- branching
- vito
- flicks
- bawled
- toughest
- venue
- disrup
- sequestered
- entrapment
- displeasure
- waive
- bungling
- caricature
- bloodless
- comic
- functions
- thrash
- fixes
- climactic
- joseph
- reborn
- targeted
- hypercritical
- fart
- gags
- slapsti
- funniness
- gag
- retreading
- tec
- preemployment
- brazen
- wisened
- ventilated
- motorola
- tack
- orangish
- feat
- brighter
- coloring
- haphazard
- baseboards
- edger
- granary
- stocked
- formulas
- perfectionist
- tasks
- freehand
- gratin
- banana
- dissipate
- thickening
- globs
- rubbery
- blenders
- cools
- favoring
- nestle
- quik
- groedy
- whisk
- beater
- melon
- baler
- cond
- octane
- generating
- volt
- v8s
- repellent
- erupted
- meteorologists
- chernobyl
- tracers
- smoky
- array
- fiero
- undisciplined
- jacuzzis
- abdominals
- thighs
- mattered
- alienated
- suffocating
- choke
- differing
- grads
- quirks
- academies
- cadets
- espouse
- anglo
- saxon
- inveterate
- switcher
- dave
- wylie
- pumping
- weatherman
- hansen
- gordon
- lightfoot
- winston
- headphones
- toweling
- investigator
- tailing
- socialite
- extradited
- levy
- uplifting
- interpreting
- jur
- gui
- overcrowd
- connects
- businessmen
- sente
- penned
- duff
- penal
- beca
- litigating
- respo
- spiritually
- begats
- durn
- kratz
- kranz
- hedges
- nathaniel
- hawthorne
- storybooks
- woe
- glossary
- krantz
- twilight
- bogused
- fuck
- dares
- hangover
- sarcastic
- fishbone
- spirited
- venezuela
- avalanche
- gobs
- inflated
- beneath
- captures
- resulting
- risky
- contain
- vague
- guaranty
- guarantees
- guaranties
- disasters
- vulnerability
- regul
- workup
- incline
- unjust
- revoke
- reverked
- revoked
- vengeance
- sayeth
- mao
- tse
- chung
- temples
- unified
- humbly
- sovereignly
- rebuke
- ager
- preface
- admonition
- agrarian
- commander
- conceal
- napalm
- gro
- clayton
- uproots
- residents
- deba
- servant
- repaid
- granddaddy
- dodger
- militia
- bologna
- alleviating
- afresh
- lifestyles
- cabbages
- broccolis
- insecticides
- dandelion
- roly
- poly
- slug
- dragons
- sockets
- alkaline
- stem
- peaches
- silt
- shrivels
- mes
- cottonwoods
- irr
- smartest
- gardenias
- revitalizing
- mayb
- chopping
- blasted
- hybrid
- editions
- spruce
- dips
- dipping
- arabic
- pita
- eggplant
- marinating
- hickory
- clones
- mach
- databases
- searches
- deleting
- pieced
- bypass
- monochrome
- enthusiasts
- nathan
- swollen
- manuscripts
- composts
- nurserymen
- goop
- doorknob
- compress
- mugs
- expressions
- ungodly
- expansionism
- nationalistic
- succ
- origins
- angolan
- sinai
- warsaw
- militory
- indu
- chan
- clobber
- conquered
- autonomists
- shortages
- bulgaria
- czechoslovakia
- placate
- alienate
- emancipated
- slaves
- emancipate
- supplied
- battleground
- val
- verde
- briefcase
- bookcase
- armageddon
- grove
- imposing
- yoakum
- trilogy
- terrifying
- '''brien'
- crappy
- jakes
- compendium
- lobbying
- emancimation
- afterthought
- luted
- honorary
- isaac
- asimov
- robot
- developmental
- blockbuster
- mist
- dune
- freeman
- debating
- suave
- charac
- egalitarian
- scripture
- disciples
- wafers
- contradict
- buyers
- elma
- sheds
- pasadena
- refinery
- phoenixville
- grumble
- northwestern
- piped
- almetco
- pantr
- deanne
- multipurpose
- vide
- launched
- groupings
- gentlem
- dyke
- griffith
- idn
- brave
- shallows
- gig
- naughty
- murky
- spectrums
- abso
- feldon
- madonna
- lamar
- gators
- sneaky
- buckner
- stadiums
- cornell
- redwings
- peewee
- crude
- tilled
- screeching
- acorn
- scents
- pollinate
- yield
- tiered
- shrub
- locus
- thorns
- pollination
- pollinated
- littleton
- trucked
- shovel
- pressurized
- chainsaw
- dusk
- unfeeling
- spreads
- datsun
- ku
- klux
- klan
- incumbents
- larou
- larouche
- chord
- mayport
- brim
- snagging
- owl
- baiting
- oyster
- cracker
- trophies
- rockport
- netted
- ugliest
- archaic
- dots
- croaking
- croaker
- friendships
- copayment
- seclor
- exemplary
- snatch
- impressions
- inspections
- yellowish
- misty
- emphysema
- isolating
- biker
- vowel
- lint
- phrase
- cub
- smash
- conv
- ding
- dongs
- guathier
- eliminates
- briberies
- sidedness
- lengthy
- judo
- hoc
- deltaing
- disagreement
- wapner
- judean
- vibrant
- undoable
- semitic
- predetermined
- wandered
- defeated
- astaire
- sto
- plank
- poultry
- empenadas
- eu
- scallions
- sesa
- slivers
- overcook
- dashes
- ketchup
- bishu
- meats
- empanadas
- bun
- niokes
- requi
- bah
- humbug
- fives
- phony
- interdisciplinary
- dispelled
- grating
- reputations
- impaired
- institutional
- quiche
- growls
- overrun
- hussy
- settlements
- poll
- tiddlywinks
- volumes
- ignorant
- ironsides
- affixing
- chart
- commingle
- confusion
- issuer
- conven
- shucks
- profitability
- shifted
- itemized
- alpha
- beta
- accusation
- linemen
- rotation
- thereafter
- proves
- encouragement
- chemists
- overinflate
- southward
- nonconventional
- warheads
- parallel
- resolves
- negotiations
- inhabiting
- lith
- neutral
- crazier
- libya
- treaties
- overthrow
- survives
- inhabitants
- dancers
- outweigh
- wayward
- attained
- sharpness
- acuity
- disorient
- decimeter
- superpowers
- toddler
- indoctrinate
- understa
- skipping
- lows
- chillier
- handicappers
- mosey
- twosome
- mellowed
- doubles
- rationalizing
- purged
- goofed
- nastier
- cashed
- burgeoning
- metropolis
- carey
- thes
- intern
- sanger
- harris
- lifelong
- thunderbird
- citation
- mazaratti
- conceive
- degray
- stutters
- antennas
- roadside
- cords
- heaters
- hookups
- sopping
- dialect
- hums
- nuns
- trin
- shun
- hospitalized
- pumps
- stimul
- flipper
- retraining
- stagnant
- sores
- golan
- kishkes
- matzi
- goyim
- pocketful
- heston
- commandments
- grips
- muslim
- religions
- sects
- protestants
- lennon
- zionist
- nosed
- tampa
- scariest
- coincidently
- lox
- generic
- predates
- jihads
- toge
- secretly
- unity
- revert
- baltics
- forcibly
- impossibility
- insightful
- prays
- dissimilar
- forefathers
- esc
- disseminated
- giv
- postpones
- juniors
- disgust
- centeredness
- inability
- multicultural
- multiracial
- psychologist
- refers
- preoccupied
- infor
- cults
- motorbike
- maureen
- solomon
- eastland
- farmed
- millennium
- hopeless
- ideology
- eden
- distributorship
- supplier
- dirkson
- extansion
- dirk
- pearson
- embarked
- isometric
- chlorination
- firsthand
- detectives
- hunky
- dory
- gi
- barbados
- colleagues
- covert
- suburbia
- roasted
- goat
- hating
- stunts
- bending
- alleviates
- indicative
- handcuffed
- elem
- escalated
- bett
- reemphasis
- rote
- spitted
- memorizer
- wiping
- mennonites
- electronically
- determines
- sherwin
- molding
- bled
- spackle
- lighting
- nerdy
- garfunkel
- fascination
- innate
- supp
- manilow
- badness
- behinds
- pajamas
- yardage
- enclose
- fanatically
- subcontract
- ducts
- materialistic
- dwelling
- necess
- branched
- dishwasher
- inventions
- trashing
- diskette
- ordeal
- configured
- prestigious
- innova
- innovation
- audits
- pry
- peripherals
- lance
- restraints
- thermal
- razzle
- dazzle
- flats
- clairon
- rath
- educa
- feast
- waking
- tentatively
- receptacle
- raisers
- distribute
- disposables
- incremental
- fiery
- luther
- galvanized
- bashing
- environmentalist
- respons
- glow
- wartime
- overlook
- affirmative
- junkyards
- testimonies
- defendants
- legalistic
- achieving
- likelihood
- tilted
- sleaze
- protects
- choreographed
- patents
- antic
- repeater
- vendetta
- observing
- proceedings
- weightless
- effortless
- sweatless
- surveys
- adjusters
- expressed
- meningitis
- fetal
- terminated
- termination
- codependents
- goddess
- observations
- firemen
- overtones
- astonished
- phys
- cokes
- sternness
- forbi
- expressways
- patricia
- handlebars
- rewarded
- dubbed
- booger
- diamonds
- numbered
- redeem
- attache
- suitcases
- lamps
- wheelbarrows
- mixer
- toaster
- waffle
- clocks
- candlesticks
- aloud
- fussy
- babbly
- druthers
- rockville
- ballady
- abortions
- pregnancies
- handing
- landscapers
- replant
- alleys
- cultivate
- replenished
- subside
- prune
- hosted
- correspondents
- translating
- masks
- typeface
- piddley
- braunsfel
- unread
- skimming
- imperialism
- reasserting
- hangings
- needlepointed
- outlined
- intricate
- geometric
- upholster
- stiffened
- streamers
- stiffener
- quilted
- stamp
- foresaw
- refrain
- expedite
- franc
- francs
- diem
- consternation
- godfrey
- goodies
- prin
- perforated
- metrics
- typos
- retyping
- retypes
- encyclopedia
- prints
- limi
- clone
- bleep
- lionheart
- singular
- superstar
- norris
- deserts
- bates
- floats
- animation
- retitled
- reshot
- rout
- cosmic
- enlightenment
- dichotomy
- educatable
- prodigies
- precocious
- harks
- schoolwork
- construct
- convey
- verbally
- stressing
- penalizing
- eternity
- bradley
- activists
- demonstrating
- agreeable
- gerrymandered
- lipscomb
- disservice
- pauken
- politicking
- upmanship
- fooled
- nationally
- applicants
- dissolved
- shutdown
- mathematics
- outgo
- kidney
- positives
- spe
- sadder
- anxieties
- detected
- dismissal
- pard
- certainty
- handcraft
- wreaths
- eucalyptus
- dowels
- goofs
- bulch
- straying
- koala
- shapes
- wintered
- transplanting
- leafed
- pasture
- jungles
- rubs
- validity
- disagrees
- guessed
- lux
- accom
- transcontinental
- throats
- coalition
- armaments
- congressional
- fuss
- shiites
- fiddling
- shaped
- topsoil
- herb
- rollback
- spurts
- loppers
- rotor
- dethatch
- heave
- ingredient
- shrip
- fettucini
- straightens
- disconnect
- sucking
- depended
- peeled
- chestnuts
- burgundy
- browned
- bruises
- retires
- swivels
- collisions
- automation
- iaccoca
- airbags
- sc
- spine
- harness
- nifty
- chryslers
- aerodynamic
- conveyor
- magnet
- pennsylvanians
- brownie
- pamphlet
- slicks
- slot
- poundage
- instant
- wisely
- shboom
- befriended
- ironically
- resumed
- gymnasium
- flooring
- chrome
- height
- pounding
- engineered
- curbs
- gravity
- singles
- assorted
- immobilized
- screamed
- climbers
- limp
- matches
- ammn
- amm
- initi
- initiation
- mishandle
- guiding
- deregister
- tumbling
- themself
- banding
- pis
- julie
- tense
- bundles
- childish
- kazoo
- numb
- suffices
- rela
- weakness
- weaknesses
- experi
- temporaries
- retest
- retested
- rx7
- whatso
- seater
- narrowed
- assessment
- thirsty
- stint
- wanderlust
- poker
- admiration
- miners
- roadsides
- harvey
- uneducated
- flaunting
- relinquished
- strikers
- speeded
- aerobically
- calmed
- postnatal
- cise
- birthing
- axle
- windstorm
- overlooking
- embankment
- arkan
- sweeping
- tows
- beavers
- flee
- attitu
- flaunt
- americanism
- slums
- coops
- inoculation
- hungary
- requesting
- rotely
- panamanian
- quieted
- anticommunist
- excesses
- playtex
- flowery
- jaded
- comforts
- thorn
- bureaucratics
- dyed
- pollen
- gah
- blowy
- rebellions
- massacred
- protested
- diminishing
- renegade
- launching
- strifes
- defect
- obtaining
- globally
- demise
- glasnost
- escalate
- reins
- intentioned
- conveniences
- nonfeeling
- uphold
- unpopularity
- geez
- honorable
- massad
- madman
- straddle
- personalties
- rethinking
- gesture
- miscalculated
- liberate
- underestimated
- miscalculation
- huss
- assassinate
- staking
- precedent
- bullies
- powdered
- bombing
- khomeini
- normalized
- sanc
- juggle
- friction
- bookkeeping
- earner
- kite
- idling
- spooky
- lat
- tracing
- hitter
- shorten
- saberhagen
- crain
- craning
- reds
- stri
- fouls
- steinbrenner
- bogus
- workable
- peripheral
- notebook
- modems
- revise
- furnishes
- deadline
- courier
- magee
- peretti
- piercing
- fic
- soun
- illu
- illusions
- quintupled
- flied
- nailed
- gibbons
- exempts
- planters
- shedding
- proj
- beau
- insi
- sunlight
- sulked
- overmilitarization
- disparity
- civilization
- bigge
- trickle
- hemisphere
- kingsport
- masala
- sweeter
- amaretta
- dijon
- basil
- turgeon
- laroute
- gastro
- lamink
- restructured
- hardships
- subcultures
- debates
- patronizing
- demeaning
- midwife
- pater
- paternity
- troit
- misunderstood
- ranks
- aines
- peak
- olajuwon
- dunk
- businessman
- murchison
- bottomless
- leanings
- assholes
- reaganomics
- nonexempt
- visitations
- shuts
- hunts
- wan
- degreed
- jenny
- outdoorsie
- twix
- braniff
- gossip
- hound
- host
- pause
- mic
- '''clo'
- participators
- primal
- kicks
- tabloids
- journalistic
- fondly
- steeped
- repu
- unnecessarily
- glancing
- nod
- tonic
- unhooking
- uncoupling
- rotating
- rotated
- dieting
- ourself
- wrapping
- kip
- centrally
- sickness
- folder
- emphasize
- miniskirt
- evoke
- overdo
- laces
- flounces
- adornment
- unprofessional
- sexist
- tailored
- vulgar
- redford
- lewisburg
- emblems
- grotesque
- imag
- shoo
- padlock
- pawn
- someway
- neatness
- psychiatric
- hinkleys
- accidently
- distinguishable
- barbed
- curi
- prayed
- reestablish
- lengthways
- mounds
- clumps
- southw
- slapping
- formidable
- adcose
- exaggeration
- harmful
- structural
- hankering
- tick
- excalibur
- newmarket
- edmunds
- barnyard
- treacherous
- journey
- climbs
- creation
- touristing
- asbestos
- repaint
- roughed
- energized
- bids
- bleed
- caulk
- masonite
- bid
- varnished
- intervene
- toppling
- descend
- latinos
- mee
- meek
- europeans
- vocalism
- comparably
- bitch
- moan
- compromise
- dependence
- cartels
- mistreating
- slovak
- catacombs
- persecution
- idi
- amin
- oopsy
- pood
- greets
- recouped
- evi
- burial
- countenance
- uncanny
- litterbox
- anointed
- buzzer
- cheerleaders
- courage
- cheerleader
- precincts
- precinct
- harmfulness
- heroin
- forefront
- estimation
- demolish
- cur
- tract
- scaredy
- straits
- quieter
- comfy
- husb
- prance
- paw
- lovable
- lapdogs
- cockatoos
- squawking
- som
- cower
- akita
- aq
- padding
- chewed
- wiper
- blades
- tinkering
- rightly
- punctured
- patched
- restores
- feminist
- amer
- undoing
- stains
- altar
- spooked
- butterflies
- dee
- nicaraguan
- housed
- spiders
- repent
- evangelical
- surpassing
- override
- rejoice
- borrower
- bondage
- squatters
- witchcraft
- mayans
- incas
- worshipped
- pyramids
- sacrifices
- gods
- oppressed
- warehouses
- cumulative
- itemizing
- scrimp
- walkabout
- boonies
- attribute
- eric
- dickerson
- smi
- linebacker
- bickering
- wen
- appropriately
- arcade
- drafts
- archie
- manning
- nobodies
- showi
- furious
- veg
- padded
- opposing
- satin
- bridesmaids
- maids
- accessibility
- harsher
- aerostar
- stealth
- slipping
- celicas
- perfor
- racing
- surreal
- fulfilled
- blair
- reformed
- gambler
- microbiologist
- competitions
- minnea
- dowling
- ren
- entrances
- periphery
- paired
- deacons
- blesses
- fugate
- proverb
- macy
- lowe
- purebreds
- studs
- sweetest
- sweetheart
- breeders
- bree
- inbreeding
- inquisitive
- hindquarters
- predominate
- rex
- rexes
- rodents
- groundhogs
- mesh
- remains
- teetering
- refusal
- presc
- pharmacy
- mens
- absoluteness
- foiled
- mere
- outlawing
- conspicuous
- inconspicuous
- inappropriately
- hunted
- squirted
- novelty
- outdo
- raciness
- calculators
- euphonium
- mellow
- deejays
- grafting
- cough
- graphs
- sponsoring
- enhanced
- bytes
- '128'
- callously
- deterr
- blooded
- midsized
- porting
- attendant
- vessels
- overbuilding
- phe
- phenomenally
- galant
- serviced
- 49ers
- harbor
- niners
- kim
- redskin
- cartoonist
- ellicott
- basicall
- importantly
- devaluated
- goats
- schoolyard
- motherhood
- overcompensate
- destabilize
- vying
- regroup
- standpoints
- easterners
- couched
- proclaim
- weaving
- dike
- plug
- unveiling
- takers
- roomie
- slaughtered
- sudan
- occurrence
- shredding
- bedding
- wrappers
- reviving
- yosemite
- objectors
- assigning
- examined
- idealistic
- pakistan
- algeria
- blinking
- manipulations
- insofar
- clowns
- partition
- dividers
- baloney
- daylilies
- orchid
- closes
- velvety
- multiplied
- weeded
- lilies
- azalea
- glories
- ned
- skeldon
- ojeda
- hubie
- offerman
- prediction
- cecil
- orel
- hershiser
- darrell
- interleague
- introduce
- anoth
- homey
- randi
- dawdle
- steamy
- lawrence
- mae
- rambo
- hogan
- associates
- realist
- garments
- vogues
- knits
- garment
- loopers
- piping
- cording
- twe
- sewn
- exceptional
- bev
- reap
- sow
- establishes
- pardons
- lust
- incest
- swiftly
- integral
- reeks
- expediting
- compunction
- appropr
- sins
- stoning
- clog
- streamlining
- extremism
- bubble
- habitat
- humanity
- inefficient
- preconceived
- notions
- delivering
- spiraling
- conservatism
- hampers
- patchwork
- unflattering
- autobiographies
- randolph
- descriptive
- affluents
- tale
- binge
- bookl
- francis
- momentarily
- connecting
- sigh
- chowperd
- snowbirds
- spawned
- contend
- melts
- kitty
- apso
- panic
- preserve
- campsites
- twang
- pfeiffer
- rim
- glenrose
- latrines
- gemini
- genocide
- hmong
- unsure
- slash
- intercultural
- dissimilated
- conceptualize
- slavery
- linguist
- withholding
- worthless
- cambodians
- graft
- falk
- drugstore
- coils
- mosquito
- crickets
- foamy
- pristine
- froth
- bobber
- reeling
- saturated
- soggy
- damp
- claustrophobia
- terrify
- spanking
- revamping
- lev
- plaques
- stenciling
- cushions
- impeme
- interface
- janitor
- reams
- dalmarva
- deinking
- contaminate
- wastebaskets
- publicly
- yucky
- interven
- occupying
- schwartz
- iranians
- egyptians
- kane
- matinees
- burton
- batman
- glover
- kline
- dennehe
- goldblum
- clease
- arquett
- untouchables
- graffiti
- broderick
- marlon
- parody
- tinman
- humphrey
- bogart
- maltese
- falcon
- quinn
- rainman
- okie
- homeboys
- optimism
- reconstruction
- redefining
- trait
- longhorns
- randal
- streaky
- touted
- sentimental
- instability
- indoctrination
- marines
- ak
- 47s
- cubans
- capturing
- nicaraguans
- crate
- patrice
- lamumba
- teachings
- extremist
- gen
- irregardless
- albania
- revolts
- psychos
- chiefs
- staffs
- uprisings
- squadrons
- afghanistan
- boils
- cen
- berlin
- wat
- steppers
- soles
- reword
- indi
- environmentalism
- ruther
- environmentally
- blasphemy
- acutely
- bureaucracies
- relegated
- heartache
- grudge
- succeeding
- parish
- policed
- comforting
- reminders
- pyrex
- teaspoon
- blackened
- skewers
- basin
- chefs
- clams
- instinctual
- demographically
- democratically
- proposition
- proposals
- revolted
- obligatory
- considers
- australians
- looses
- leas
- denies
- hamilt
- passionate
- democ
- candi
- antigovernment
- misspending
- bastards
- inte
- hundredths
- sixteenths
- mismatch
- clamps
- meters
- drams
- perfume
- machinist
- indic
- indicators
- micrometer
- finders
- nondecimal
- halves
- listing
- beverages
- whiskey
- ploy
- conversant
- milling
- measu
- calipers
- pliers
- milliliter
- drilling
- hundre
- lawy
- strangle
- neiman
- marcus
- outgrowing
- necked
- embellished
- dre
- presentable
- outrageously
- busters
- campinas
- oursel
- asses
- orient
- optimist
- jungle
- resonates
- profound
- bullying
- dreamed
- wildest
- semantics
- transcribes
- onl
- guzzlers
- fours
- threes
- transverse
- mounted
- shoved
- serpentine
- stickers
- reinstalled
- nozzle
- stroking
- groves
- surinam
- natio
- internationally
- amaco
- mobil
- rectified
- inward
- hateful
- kilom
- thumbnail
- kilogram
- britain
- adopting
- precisely
- grams
- sync
- orchestrate
- unfamiliar
- toting
- stroganoff
- allendale
- waldwick
- adirondacks
- pancakes
- outgrew
- beth
- knowl
- roanoke
- randall
- duplicated
- gamble
- ditka
- nate
- newton
- branded
- outlaws
- webster
- cocky
- lambert
- bloopers
- receivers
- tackled
- necks
- fav
- entities
- overburdened
- fairness
- pondsy
- invu
- invulnerable
- belongs
- electing
- politic
- floored
- maryl
- nurture
- credits
- ukrainian
- scallop
- buns
- batter
- bourguignonne
- grudgingly
- pinch
- reversal
- beck
- subsidize
- bennington
- liber
- refinement
- etiquette
- advises
- renaissance
- bowdoin
- bucknell
- lectures
- confirm
- guitarist
- yale
- minoring
- irrevocable
- irrespective
- clinical
- pathologist
- kayla
- bachelors
- profess
- traced
- rung
- maladjusted
- compelling
- distaste
- resp
- beret
- uzis
- disorderly
- unc
- unconcealed
- matched
- vibes
- clearest
- confi
- junkins
- mandated
- prompted
- tobacco
- bandwagon
- cour
- tricked
- syst
- maintenances
- scoop
- fetch
- pooper
- scooper
- colombia
- reek
- kindhearted
- nixed
- asthma
- outgrown
- misclass
- stately
- sunk
- furnished
- swoop
- situational
- punches
- momentum
- lockheed
- arose
- courageous
- accredita
- accreditation
- keying
- adjacent
- refine
- classified
- chemicalwise
- refining
- strean
- stillwater
- stephenville
- toxins
- bacterial
- bleaching
- sinked
- australian
- dominique
- neek
- wimp
- feline
- unconditionally
- feisty
- snuggle
- investigate
- beaner
- wadded
- fixture
- decor
- panty
- garb
- polyesters
- wools
- neatly
- layerings
- eyesore
- mended
- ironed
- compose
- upgrading
- plummeted
- acro
- daltons
- wholly
- understands
- disadvantaged
- winnowed
- structures
- casing
- connectors
- workmanship
- hal
- fluke
- highlands
- patronage
- cranberry
- pou
- lobsters
- billboard
- steams
- culinary
- adventurer
- franchised
- shacks
- shoney
- reliably
- communercation
- compe
- renditions
- organizer
- defeat
- registration
- dragginess
- headache
- draggy
- locker
- sauna
- motiv
- agony
- dictatorship
- uganda
- mils
- distances
- centigrade
- celsius
- metropolitans
- heeley
- wentworth
- differential
- microns
- whatev
- responded
- favorably
- bagged
- ecological
- prod
- additives
- pickups
- hangers
- cupboards
- fountain
- faucet
- exceeding
- decomposed
- shocker
- bizmart
- upseted
- taxwise
- toilets
- smashing
- soaker
- sheltered
- disapp
- rankled
- cheerfully
- outermost
- inland
- curving
- ventura
- buildi
- overflows
- anaheim
- simi
- meanings
- rhymed
- balti
- strayed
- kabob
- breakfasts
- galunkies
- marsh
- pierogies
- grandparent
- newarth
- cholest
- margarine
- margarines
- kebabs
- utensils
- goulashes
- juices
- sealed
- galore
- finer
- drains
- shakers
- journalist
- crux
- remo
- appease
- pob
- patr
- paro
- paroles
- partake
- traumatizing
- viaducts
- ceremonies
- dozens
- pageants
- riveted
- confuses
- thrilling
- producers
- tony
- dorsett
- hershel
- rationalized
- cinemax
- correspondence
- '30'
- cod
- reso
- repossessed
- 635's
- looper
- ramblers
- brook
- dealie
- diversion
- chevys
- nex
- v8
- carburetors
- gingerly
- yanked
- tinkerer
- evaporator
- rubbing
- testers
- diagnostic
- tester
- diagnostics
- carriage
- chilton
- multiplying
- lincolns
- tremend
- leaking
- condenser
- busted
- haas
- ovolacto
- lard
- nutrient
- lactose
- synthesize
- slough
- utilizing
- rids
- utili
- paperback
- novelization
- lucas
- freder
- brink
- feinstein
- fairfax
- deaf
- insulate
- scrubby
- pecan
- paralegals
- clears
- interference
- surplus
- tariffs
- mon
- apprentices
- advisable
- journeyman
- exporting
- imminent
- oodles
- salutatorian
- prided
- welcom
- welcoming
- tol
- resentful
- zales
- spiegel
- hurried
- circulating
- walrus
- porpoises
- mainland
- sanctuary
- whooping
- cranes
- pelicans
- antone
- alamo
- brewery
- caverns
- uncourteous
- actua
- irritant
- hullabaloo
- stockholders
- inebriated
- unsafe
- surgeries
- subsidizing
- quack
- waiveable
- refresh
- somewh
- willy
- horton
- consolation
- microscopic
- kneecap
- curtailed
- forming
- bison
- weakening
- strengthening
- '401'
- continuation
- telephones
- handbook
- badger
- showering
- physiological
- advan
- fledgling
- bikers
- bicyclist
- knocks
- coronary
- artery
- decreases
- embark
- motivating
- disevered
- knobby
- vaulted
- woodhollow
- villa
- secluded
- joking
- sellers
- coworker
- doorstep
- housebroken
- playful
- gastrointestinal
- beagle
- romping
- waters
- retrieve
- paddled
- unrequir
- degenerating
- rosebud
- sociable
- smu
- synopsis
- furrier
- judgement
- distribution
- wrongfully
- penitentiary
- sitt
- caravans
- lending
- simulation
- resemble
- adroit
- oddity
- moonlighting
- strengthwise
- divulging
- tarnished
- faye
- socialist
- undone
- inefficiency
- platform
- lieu
- mamma
- disruptive
- brow
- browbeat
- wist
- mugging
- faceless
- persuadable
- thunderbirds
- topaz
- camaro
- reim
- dominated
- wrenches
- eas
- champ
- premeditate
- premeditatively
- stiffening
- lessening
- retarded
- pleaded
- phrased
- dayers
- correctness
- promoting
- niceness
- vouch
- waterfall
- busch
- blacksburg
- portsmith
- williamsburg
- epcot
- temp
- buccaneers
- assessing
- opp
- benef
- wadley
- milestone
- tainted
- snickered
- examine
- aircraft
- astound
- pusher
- circularly
- chairman
- judy
- perturbed
- promotions
- programmed
- brightens
- hallmark
- servi
- seizures
- brighten
- tonya
- sneaks
- rainstorm
- breezes
- temperate
- promises
- westernize
- intact
- extensly
- vely
- woodward
- projected
- commanders
- colin
- powell
- embargo
- misread
- earliest
- disarray
- hopeful
- prosecute
- stature
- statesman
- foreseeable
- selves
- volatile
- retile
- bathtubs
- scouter
- drippy
- panes
- putty
- gazoo
- pes
- pesticides
- bulging
- chlorinating
- coronarys
- diets
- quadrupled
- ingestion
- clogging
- primates
- regimen
- kenneth
- innovator
- inactivity
- neurosurgeon
- strictest
- idiots
- stan
- destruction
- symbolism
- evokes
- lynched
- modified
- possess
- condone
- adamantly
- symbolizes
- circum
- satisfactory
- budg
- spartan
- frugally
- jordache
- nonessential
- victory
- cliche
- enactment
- adjourned
- mot
- expending
- reasoning
- allege
- myriad
- departure
- restocked
- guided
- unconstitutional
- reforms
- gard
- arranging
- orig
- florist
- slowdown
- runners
- geraniums
- coleus
- vinca
- thuringiansis
- caterpillars
- expands
- unlicensed
- brittle
- excelled
- wei
- denotes
- tension
- bicep
- tricep
- instructing
- grindstone
- hovering
- configuration
- blended
- muscular
- dystrophy
- documentaries
- paroe
- planner
- uruguay
- concepts
- yuppies
- legislated
- dynamics
- auditing
- rev
- revenues
- millspec
- operates
- elevens
- hammers
- federalized
- ci
- emphas
- identi
- americard
- adios
- commu
- demeanor
- announcement
- calcutta
- foreigner
- worldliness
- attributed
- chuckle
- pogo
- mourn
- tolerated
- drumming
- scrunch
- glamor
- sprigs
- ricksun
- tender
- lamp
- ashes
- overcame
- nondescript
- damned
- hierarchy
- restructuring
- feminism
- boomer
- creep
- rapidity
- electroni
- luncheon
- existent
- consulted
- alters
- stamina
- goi
- denying
- revolve
- entrusting
- omniscious
- omniscipotent
- alec
- precedes
- daders
- shrinking
- worthy
- whate
- responses
- spoils
- flashbacks
- flashback
- fidgety
- discriminate
- pertaining
- distraction
- males
- ital
- entree
- sagar
- presby
- kimonos
- grishman
- bavarian
- constricted
- putrid
- folley
- tableclo
- crayons
- disintegration
- flickers
- prevalence
- excusing
- signals
- mechanized
- requiring
- antipasta
- stuffing
- poached
- kernel
- spinach
- wilson
- beeping
- bakes
- frosting
- frostings
- chatting
- mentor
- adversaries
- manuscript
- harried
- interruptions
- feedback
- videotaping
- adopts
- twelfth
- tangible
- overseen
- alternately
- ilk
- phonic
- pistons
- snooty
- telev
- leno
- carvey
- deduce
- cros
- wheeled
- porked
- termites
- chess
- rearrange
- hisself
- bathtub
- prettier
- rewired
- shorting
- surges
- famili
- rearranging
- shuffle
- pane
- breakers
- valve
- drips
- walkway
- splash
- vein
- downfall
- yuppiedom
- restructure
- biologically
- physiologically
- wonderment
- swooshed
- viva
- talents
- mongst
- jealousy
- computerizing
- pecking
- punched
- slightest
- epidemiological
- guesswork
- transmitted
- semen
- illegitimate
- exploded
- stepchildren
- socio
- radios
- faxes
- sensors
- stalk
- jurisdiction
- outnumber
- solicitation
- prostitution
- unlocked
- fallout
- probability
- indentured
- servitude
- vigilantes
- victimless
- ridicul
- auctioning
- bidding
- patios
- insecticide
- diazinon
- carefu
- deb
- wallpa
- stagger
- renovator
- sheeting
- resilient
- stairway
- sworn
- rud
- veto
- bout
- yea
- dams
- droughts
- reservoirs
- poole
- reflected
- counteract
- learners
- genius
- perspiration
- diagnose
- predisposition
- flashing
- drowsy
- facilitators
- manipulated
- burdening
- toot
- weekdays
- racket
- drawer
- dennison
- derby
- siphon
- cu
- uba
- tailgate
- deterrents
- publishers
- poisons
- ergotisms
- fungus
- gender
- confidential
- tide
- vatted
- archeology
- shoelace
- promising
- upcoming
- reprinting
- thurber
- hundredth
- riveting
- viorst
- sci
- revol
- revolves
- shoelaces
- binds
- melody
- workbooks
- workbook
- geometry
- cypress
- greece
- irrelevant
- tortola
- gorda
- infusion
- ethnicity
- familial
- acclimate
- retaining
- latino
- continentals
- roberto
- unprepared
- vociferous
- attain
- imported
- territorialism
- horns
- encompass
- handcrafts
- wreath
- phillips
- ranching
- contemplating
- stabilize
- occupies
- baseline
- flextime
- grading
- scribble
- sensitivities
- akin
- minimized
- prematurely
- dumper
- geria
- empathize
- tandem
- providers
- prohibitive
- fantastically
- moslem
- surro
- surrogate
- regretful
- arou
- swims
- nationals
- quarries
- tumbled
- avail
- denmark
- appliqued
- eraser
- maturing
- rite
- unmarried
- aquariums
- zoos
- paternal
- traditions
- disintegrated
- trinket
- sociologist
- multigeneration
- eightch
- scorer
- rebounders
- assists
- thown
- laker
- marriott
- spittering
- sputtering
- swimsuit
- mavs
- favored
- endorsements
- prospects
- stanley
- underclassmen
- myrna
- curfew
- fiscally
- jockey
- catton
- dives
- cayman
- itinerary
- viet
- doves
- abnormal
- puppet
- heartbeats
- reviewing
- bocket
- hannibal
- lector
- fascin
- luster
- attractiveness
- originality
- pinpoint
- lavon
- upstream
- sever
- benders
- grea
- musky
- perches
- salami
- sonar
- maneuver
- charter
- suntan
- hobbyist
- styled
- convertibles
- sevi
- welded
- welding
- sunroof
- soured
- contention
- jags
- contractors
- bends
- enthused
- enthusi
- ap
- vending
- cartilage
- glanced
- fenced
- econ
- repeatable
- bundy
- exe
- strauss
- punish
- electrocute
- problematic
- candid
- fraud
- intangible
- reinstate
- mario
- cuomo
- legislatures
- molested
- incarcerate
- sylvan
- reenacted
- paltry
- polishing
- lotions
- meniar
- cringes
- thrifty
- flier
- psycholinguistics
- ivory
- godsend
- pathe
- willow
- cana
- bacally
- obese
- reimburses
- collared
- widget
- bramalea
- 401k
- weeny
- nonex
- censored
- bombarding
- dramatize
- statues
- weld
- epoxy
- resin
- shattered
- statue
- cricket
- thatches
- thatched
- vapors
- stained
- lacquered
- tung
- fanatical
- pills
- hem
- sweating
- bulge
- wrinkles
- vices
- sha
- germ
- ecru
- undercoat
- peachy
- steamers
- mottled
- grey
- maroon
- vivid
- turquoise
- coral
- renovating
- hallucinations
- cloths
- slop
- soluble
- tricks
- skimp
- tediously
- rewallpaper
- racks
- metlife
- worki
- workm
- inconsistencies
- amateurs
- footballs
- fencing
- earl
- princeton
- pacers
- subminimum
- administered
- reluctant
- poured
- chiropractor
- cautious
- janitorial
- rafael
- septien
- applicant
- eduardo
- mana
- sai
- mafia
- newcomers
- ellis
- redoing
- comm
- elitist
- concise
- rathers
- yous
- segregate
- wretched
- horrid
- shortchanged
- brokaw
- demi
- ringwald
- sixteenth
- doogie
- howser
- freckly
- ferris
- moustache
- reeve
- dreaming
- ooze
- bride
- pretended
- occupational
- exemption
- judiciously
- incidental
- figuratively
- westport
- bradford
- indirectly
- clair
- dayt
- baldwin
- bebble
- foreclosed
- rider
- homestead
- creeping
- livable
- retrial
- retry
- wond
- seeded
- raping
- choking
- shotcross
- televised
- vendettas
- trialed
- revoted
- annihilated
- enterprises
- misgivings
- quiz
- sprint
- capture
- extending
- endowment
- joes
- alumni
- splits
- governme
- faired
- undertaken
- deficiency
- dilly
- sangre
- cristos
- wichitas
- lakefront
- pinon
- naturalist
- stools
- binding
- component
- carol
- playroom
- realtors
- dominantly
- alleyways
- shifting
- popping
- bangla
- hugo
- bedroo
- barometric
- borger
- funnel
- pillowy
- radar
- veer
- swirl
- junes
- budding
- crimp
- scorch
- distracting
- heats
- therapeutic
- northe
- mayer
- denison
- purify
- purifying
- philodendron
- acc
- divert
- blurred
- fluoro
- fluorocarbons
- provoking
- brandeis
- fift
- readings
- iliad
- mythology
- choo
- scientifically
- grumbled
- unpleasant
- imparting
- cluster
- vicarious
- compromised
- profiles
- telemarketeers
- outcry
- cited
- crashes
- eroded
- erosion
- lockers
- latitudes
- motorists
- liens
- representing
- landlo
- dakotas
- alarmed
- exclusion
- parameters
- interpreted
- adoptive
- carting
- arresting
- interval
- orwell
- tay
- unusually
- leathery
- venture
- wea
- pebbles
- drainage
- deceptive
- fiend
- wrinkled
- oils
- fishermen
- tricycles
- kiddie
- wilds
- calves
- heifer
- jea
- flared
- hep
- themsel
- continuum
- astute
- propagate
- raccoon
- filleted
- livestock
- whiskers
- growling
- widen
- weaker
- ticker
- pentagon
- whomever
- nutrisweet
- bitterness
- ancient
- vets
- complicate
- preregister
- registrations
- eligibility
- preceded
- theodore
- upward
- rascals
- stinks
- precluded
- gullibility
- democracies
- redistricting
- subsidizes
- lineman
- spilled
- camouflage
- booby
- traps
- apocalypse
- influx
- surge
- buckle
- overcome
- castaways
- depicting
- dudley
- bloody
- olden
- realism
- pioneer
- worship
- chri
- videotapes
- shrunk
- eastwood
- showy
- westerns
- cursed
- pointy
- melissa
- gilbert
- idol
- verse
- shep
- immemorial
- misdemeanor
- waving
- prevail
- appoint
- bailiffs
- clerk
- verbalize
- tripled
- cameras
- reporters
- prosecutors
- outweighs
- prosecuted
- sump
- sewage
- towed
- aut
- trad
- marina
- hears
- acclaim
- sequels
- earle
- recluse
- essays
- qu
- conclusions
- photographers
- arro
- gorillas
- sloth
- fascinates
- bottoming
- landers
- tycoon
- bloomed
- fade
- spiky
- bl
- hya
- colossians
- thistles
- landscaper
- junipers
- puny
- foliage
- iris
- fuzzies
- wildflower
- insists
- camcorder
- pastime
- muggings
- grates
- claustrophobic
- tendencies
- deviant
- anguished
- cleaners
- meridian
- inlaws
- sneakers
- jordans
- brains
- caps
- videoed
- repeated
- repetition
- termed
- allowable
- purs
- discretion
- freely
- altering
- preparations
- namely
- minuses
- factored
- competitor
- trevino
- influencing
- wholesome
- exclamations
- sportsman
- phooey
- applicator
- nurseryman
- elm
- circumference
- stubs
- propelled
- pest
- sawed
- rot
- rotter
- autobiography
- liquidating
- emulating
- compu
- ause
- accomplishing
- spacings
- formattings
- insert
- reset
- rewrite
- typesetting
- typeset
- spaces
- compatibles
- adhere
- brochco
- hillstreet
- finale
- nudity
- delight
- shudder
- flabby
- telemarketing
- classification
- lotteries
- kalamazoo
- sinus
- carton
- stakes
- mounts
- hub
- airports
- altitudes
- intermediate
- simp
- fluorides
- guerrilla
- marched
- lied
- expire
- xerox
- modify
- soo
- terminals
- insur
- breakable
- hangouts
- haunts
- southerners
- rudest
- bartenders
- wee
- ferrings
- taiwanese
- jambalaya
- wowed
- univerisity
- arias
- casks
- hospitalization
- hos
- crowns
- fluctuate
- celebr
- inordinate
- axe
- newscast
- js
- recap
- sensationalize
- sensationalized
- asinine
- puzzle
- precede
- preclu
- preclude
- stretches
- wakes
- depreciate
- tru
- unibody
- granddaughters
- gol
- wagging
- trainers
- airheaded
- yappy
- dignified
- culling
- tamper
- innately
- tractable
- selectively
- culled
- belgian
- distinct
- breeds
- kennel
- translates
- shit
- unreliable
- handlers
- indiscriminate
- breeder
- handler
- bab
- doorbell
- stipulation
- laundromat
- grasslands
- surrounds
- betty
- parades
- palestine
- id
- peg
- catalyst
- palestinian
- kindest
- abounding
- kindness
- godly
- compassion
- humanness
- mandarin
- oranges
- grape
- fridge
- gelatin
- carrot
- eggo
- waffles
- adolph
- breakfa
- craftsmanship
- opt
- stanza
- glitters
- oasis
- warp
- clearinghouse
- consolidating
- salespers
- tel
- compan
- announcing
- telepho
- discard
- episodes
- cramp
- vela
- someb
- thirtysomething
- mclaughlin
- yogi
- loner
- comedian
- cantankerous
- echoed
- withdrawal
- grumpy
- stooges
- mouthiest
- kiddos
- mouthy
- touristy
- besieged
- defini
- badgering
- galapagos
- sidney
- adelaide
- chengdu
- quingdao
- retreat
- flights
- rita
- oah
- destitute
- ree
- snorkeling
- prawns
- milli
- arsenal
- traffi
- bennett
- gangsters
- corp
- arr
- pris
- crowding
- statutory
- verbalizing
- stints
- citing
- intensity
- limbaugh
- lamenting
- microwaved
- healthiest
- teases
- accuses
- deprivation
- nourishing
- evaporated
- broil
- marinara
- grapefruit
- starch
- pleasurable
- kalli
- cater
- rodolfo
- royal
- maitre
- pilgrim
- unnatural
- lookout
- arby
- wastes
- reduces
- speedup
- healthily
- sup
- quoting
- disputes
- commas
- reevaluated
- inma
- blinded
- restitution
- willfully
- contradictory
- caveman
- coleslaw
- tablecloths
- bakeries
- regretted
- purch
- pastrami
- '''oeuvre'
- complicat
- sustain
- addressing
- fellowship
- prefers
- troublesome
- camels
- beatle
- orchestration
- okeydoke
- statler
- stated
- debut
- investigating
- bootstraps
- baptisms
- clergy
- imprisoned
- confiscated
- bourgeoisie
- commonality
- recanting
- courtyard
- motions
- commandant
- escaped
- perseverance
- bureauc
- persecuted
- dab
- chorus
- mothering
- rerate
- precluding
- analogy
- spade
- marketeer
- warring
- peacefully
- trampling
- fantas
- crabby
- coated
- willis
- sarandon
- gena
- vatican
- paradeso
- befriends
- friendship
- califor
- drying
- nippy
- mucky
- thunderstormed
- shoveling
- michelle
- lan
- footnoting
- retype
- appetizer
- criterion
- alumnae
- heavyset
- poignant
- subtleties
- gore
- warlock
- omelet
- characterizing
- conceited
- portay
- goer
- prosecu
- cutor
- struggles
- flowing
- ir
- slicing
- locust
- omar
- swallowed
- redwood
- brownstone
- caulking
- myneer
- spacious
- inhaled
- revived
- airway
- revive
- sol
- dignity
- luxurious
- blossoming
- brazos
- sleeps
- purdis
- sandlin
- quake
- mak
- caramelized
- customary
- orchard
- accor
- ply
- crier
- waistline
- jewels
- earhart
- thurow
- perceptive
- pinpointing
- flimflam
- hughes
- assis
- plod
- rereading
- ditched
- findings
- bonfire
- vanities
- temporally
- burdened
- cafeterias
- linen
- napkins
- duplexes
- hodgkin
- undergoing
- interim
- constancy
- sufficiently
- farfetched
- wheeler
- cock
- slowing
- pals
- unjudgmental
- homy
- reprimand
- secrets
- brooksville
- campuses
- eyesight
- enrichment
- schooled
- rejection
- proceed
- herman
- foreigners
- polluter
- rigs
- busses
- incinerate
- pollutant
- untold
- cockroach
- accelerated
- nutrients
- sponges
- tending
- newark
- vividly
- entrance
- biggies
- consumable
- calculation
- physiology
- snowball
- dieters
- robbers
- trendsetters
- correspond
- circulates
- centralize
- descendancy
- closeness
- caliber
- differentiate
- stevens
- shippensburg
- specializes
- novelist
- intricately
- johann
- sebastian
- copyright
- compile
- poems
- baudelaire
- jennie
- abridged
- reunited
- rituals
- equated
- communion
- repetitively
- vernon
- salmonella
- silverware
- caterer
- biographer
- obituaries
- succeeded
- vigor
- bulletins
- chorals
- beginner
- violinist
- percussion
- accompany
- choruses
- audition
- verdi
- hermit
- vacationed
- anonymous
- whirlwinded
- effortlessly
- elicited
- unwound
- guadalupe
- penetrates
- alda
- burt
- reynolds
- vignettes
- dinosaurs
- robots
- satur
- sniping
- howling
- gleason
- snippets
- idle
- workshop
- gra
- dividing
- moses
- hab
- scavenge
- conserve
- indulgent
- exceptions
- contemplate
- permitting
- calming
- aboard
- docks
- cozumel
- ocho
- rios
- jurisdictions
- tapping
- lynda
- slandered
- landslide
- thornburg
- landslided
- characteristically
- savory
- petition
- resisted
- dirtier
- muddier
- sensibilities
- transpired
- nixon
- edible
- accumulating
- elbow
- cho
- grandes
- refried
- katy
- avocados
- avocado
- coolwhip
- horseshoes
- auctions
- sidelines
- loosely
- socioeconomic
- tracked
- pressured
- vandalism
- outward
- custodial
- skyline
- irritable
- unattended
- environments
- dunked
- compaq
- honk
- prodigy
- mush
- shareware
- paradox
- shooter
- crawford
- andrew
- webber
- paranoid
- unlucky
- anonymously
- competency
- wholesale
- lon
- exa
- beginnings
- kuenzer
- rebelled
- debtor
- angela
- eyeglasses
- indiv
- staffing
- examines
- optometrist
- ophthalmologist
- extractions
- publication
- unfeasible
- bettle
- orthodontal
- outsor
- roo
- suite
- scattering
- leniency
- underhanded
- perpetrator
- injustices
- wherein
- dist
- unsavory
- elimi
- rarity
- chairmen
- ministers
- congregations
- catholicism
- forthright
- disorders
- soothe
- exertion
- characteristic
- cram
- guarded
- sacrificing
- mediators
- interpersonal
- mediator
- doable
- devised
- stimulations
- goof
- whipping
- nickie
- snail
- hards
- futuristically
- subjective
- harmony
- impregnated
- challenges
- motherly
- competent
- militaristic
- colonel
- infantry
- embrey
- reynold
- riddle
- aeronautical
- pratt
- whitney
- daphne
- dictated
- qualifying
- rhodes
- scholars
- homogeneous
- realities
- socialization
- insular
- sheriffs
- evict
- continuances
- abundantly
- appealing
- retried
- lowers
- percep
- gypped
- slicker
- bruno
- kirby
- chauvinistic
- punching
- correlations
- opium
- dens
- weakened
- duress
- drunken
- induced
- legalized
- quantify
- deg
- safeguards
- fraction
- oath
- sensings
- sentencings
- pertains
- introduction
- accordance
- clark
- parachute
- presiding
- reorganizing
- sweeper
- univerty
- versity
- lakeway
- expose
- jun
- bethany
- unfocused
- midst
- instigated
- marrie
- remained
- tomorr
- whitmore
- arbor
- slushy
- sled
- icy
- lingering
- exodus
- eternally
- snowfall
- grassy
- sachse
- goddard
- stickler
- mulcher
- seni
- antisocial
- adapting
- deteriorates
- glimpse
- unwilling
- appalachia
- stopgap
- rougher
- strategic
- fails
- worded
- peoria
- dropouts
- insecure
- scaring
- stylish
- interpretive
- fathom
- expanding
- wean
- referrals
- advisory
- myrtle
- barricaded
- blackberry
- defeats
- enchila
- boiled
- toasted
- calorie
- hereditary
- headstart
- preschooler
- tacos
- tamales
- romanian
- backfires
- waiters
- batty
- momo
- colter
- pas
- campari
- adventured
- souper
- prey
- backlogged
- patrolled
- frus
- imme
- dialogue
- aisles
- cornball
- overacted
- applauding
- waterskiing
- ashley
- jamie
- warner
- deanna
- cheeks
- backdraft
- berry
- raspberries
- shaved
- entrees
- accompaniments
- gershwin
- puree
- antipollution
- gases
- accumulates
- groundwater
- fusion
- optimistic
- pessimistic
- reconvicted
- sicko
- merciful
- cannibalism
- hunch
- coordinate
- communicable
- memos
- orchestral
- fiddler
- oboe
- classy
- corresponds
- christening
- elijah
- marches
- poinsettias
- bouncy
- haunting
- conventional
- disposal
- odors
- throwaway
- ditches
- drinkers
- churn
- shipwrecked
- explodes
- maims
- sylvester
- mermaid
- outfitted
- crushing
- hobnail
- phobia
- bifocers
- trifocals
- mccalls
- byte
- afflicted
- exceeded
- antibody
- realm
- telethons
- doling
- receives
- ociety
- aesthetic
- enhancing
- frightens
- dahmer
- burglary
- enquirer
- cranks
- fuzz
- repala
- sil
- shiny
- heartbeat
- spins
- rainbow
- packaged
- trespass
- tidbit
- refrozen
- cheesecakes
- refreeze
- liabilities
- wrecks
- tattoos
- speedboats
- chambers
- afloat
- maneuvers
- stormy
- nibble
- rope
- entice
- sneaking
- paged
- favo
- flyer
- shaky
- iffy
- sentra
- subdued
- urinalysis
- bums
- overdress
- overkill
- businesslike
- nylons
- nutrisystem
- dreaded
- toppers
- ceramics
- seamstress
- cramped
- negligent
- initiates
- squeegees
- newscasters
- postponed
- a1
- alfredo
- clowning
- circuits
- sfuzzi
- copeland
- transported
- thirteenth
- wobbly
- bookends
- jug
- viscosity
- saver
- brushed
- tooken
- turpentine
- towels
- shi
- jul
- shindig
- boulevard
- maizeland
- skier
- minnie
- canaveral
- reschedule
- hilton
- eighteenth
- raton
- '287'
- '70'
- broadmoor
- breckenridge
- trinidad
- '25'
- hexpired
- disheartening
- elders
- albertson
- limbs
- sodas
- arranged
- brookshires
- pickle
- piles
- emporium
- cinch
- consolidate
- alluring
- cupcake
- henpecked
- instilled
- gatherings
- subtracts
- debits
- incidentals
- scotch
- igloos
- strateg
- strategically
- incurred
- cashes
- reunio
- entryway
- roaming
- ris
- risen
- appraisal
- disoriented
- blissful
- unexpectedly
- cockroaches
- complacent
- bitterly
- polling
- campaigning
- napping
- structuring
- digested
- perfumes
- geese
- peaked
- balloon
- canyons
- weatherwise
- sleet
- maps
- sy
- pearls
- loafers
- distinguishes
- '1200'
- whereby
- extract
- generates
- bursts
- navc
- blazey
- obscure
- promotes
- goe
- refrigerate
- tartness
- raspberry
- connoisseur
- tastings
- mesina
- exorbitant
- kaiser
- mccullum
- catastrophic
- implants
- transplants
- howe
- dislikes
- chopin
- expresses
- discussions
- chords
- panicking
- kielbasa
- bak
- ravioli
- reggae
- twangy
- agr
- cackle
- atteck
- scholar
- adolf
- imaginative
- sty
- antiques
- winnie
- pooh
- grimm
- fairy
- tales
- gentlest
- jewel
- restroom
- spitz
- extravagant
- overpass
- littering
- timers
- tans
- mauve
- distantly
- swap
- bichons
- barks
- hind
- origina
- bernards
- lega
- belittling
- liberals
- suppos
- tcat
- examination
- clicker
- screens
- carpooled
- bolivia
- sundresses
- polyester
- overheat
- sweltering
- newborn
- pleats
- absent
- strep
- bookkeeper
- partitions
- duality
- extenuating
- newsworthy
- leafing
- mccall
- subscribing
- gott
- newsy
- putterer
- caladiums
- hardened
- semitropical
- carrollton
- architecture
- hairless
- coon
- manx
- tame
- ships
- folklore
- faint
- chincoteague
- burgers
- teriyaki
- shakes
- grandy
- fend
- snowballed
- inconveniences
- woozy
- sys
- squirt
- flicking
- whales
- showtime
- adder
- dragon
- rosa
- sorrento
- dine
- mah
- jongg
- yearbook
- imprinted
- depreciated
- cribs
- bestes
- giver
- enables
- ly
- confining
- bronco
- moder
- cowb
- cheer
- schnauzers
- dachshund
- starved
- curled
- skittish
- spaying
- belon
- severing
- sr
- suicidal
- craziness
- mistrust
- lacks
- poland
- weeding
- mankind
- uninsurable
- medcenter
- hearings
- overstaffed
- mortgages
- outlaid
- intergovernmental
- plugging
- indepth
- capsize
- sensationalism
- blase
- sel
- sadist
- oleo
- oregano
- ight
- semolina
- absorbs
- vulnerable
- align
- bombings
- aligned
- tensions
- forceful
- cr
- expedited
- deserving
- mandate
- grassroots
- introspective
- schoo
- visitation
- advantaged
- energies
- tiananmen
- custodians
- immigrated
- brightest
- burst
- lanes
- winterized
- yourselfer
- representatives
- homemaking
- accessed
- uzi
- flyswatter
- utilized
- acquiring
- illicit
- gatlinburg
- cosa
- hiked
- ardmore
- cloud
- ledges
- hyatt
- gully
- trench
- tenkiller
- enlisting
- seductive
- pinion
- totality
- revealed
- legislat
- abrupt
- ruder
- arrives
- '1'
- microcomputers
- gateway
- apollo
- faulkner
- emblem
- candice
- bergen
- ghosts
- haunted
- dianetics
- gibberish
- broudigan
- journeys
- mailman
- karl
- malone
- hacking
- fillmont
- generically
- cyclist
- techy
- hackers
- davy
- crockett
- sailor
- sailed
- mck
- equalize
- semiretired
- dementia
- insisted
- rejuvenating
- coldest
- cus
- celltrex
- jeri
- maceo
- rampages
- cocoons
- occa
- uniqueness
- winfrey
- prebuilt
- workbench
- subcontracted
- subbed
- scramble
- championships
- peacefulness
- birdie
- quadruple
- whizzing
- spectators
- scrambles
- kerr
- mcgee
- infrared
- suffice
- notifies
- supplying
- angles
- anticrime
- outings
- sec
- arlene
- lister
- poked
- togethers
- dearly
- swoosh
- skate
- begonias
- destruct
- concessions
- drizzly
- huddled
- cages
- fanatics
- straightforward
- piston
- oiling
- altog
- reelection
- provisional
- locate
- incomewise
- ifs
- ands
- buts
- '4'
- hel
- discontinue
- narrowing
- nitty
- gritty
- faithful
- shoppers
- yourselves
- straighten
- stems
- relating
- supporters
- antisupporters
- contras
- dictator
- fascist
- siesta
- mouths
- reflecting
- dabble
- chalk
- chesapeake
- suspended
- ath
- tutored
- goofing
- piney
- diameter
- calmness
- outwitting
- shiners
- infla
- inflatable
- raft
- cottonmouth
- coves
- walkie
- talkies
- handcrafted
- semifixed
- automated
- crafted
- stateside
- adage
- advising
- embarrassment
- jessie
- helms
- intelligently
- mistreated
- papa
- doc
- tyrant
- puberty
- tibby
- perfumed
- legendary
- brookies
- rainbows
- accommodated
- specialists
- replanted
- rods
- norfolk
- portsmouth
- hikes
- pests
- chaperon
- calloway
- variegated
- beetles
- borderline
- zaps
- ligustrum
- apron
- gourds
- bolton
- symphonies
- caller
- sax
- houseful
- crabs
- sensation
- tingling
- oddball
- waitressing
- crunches
- relevance
- federally
- hogs
- barns
- revealing
- horticultural
- groundskeepers
- dormant
- centipede
- crops
- behold
- cuttings
- mit
- diamante
- boozier
- passengers
- shining
- becca
- nina
- palmer
- remarrying
- griffins
- crackers
- burritos
- debone
- notoriety
- jurisprudence
- thoroughfare
- sleeper
- herd
- cima
- savages
- plywood
- beams
- migrate
- undercover
- barbiturates
- codeine
- drixoral
- unsolved
- mcgillis
- weeknights
- physicist
- facet
- hurst
- greensboro
- celebrities
- repeaters
- zealand
- statistically
- outbound
- astronomy
- gallagher
- pictured
- betters
- hubble
- telescope
- planets
- habitable
- backers
- zippers
- snaps
- dull
- pretechnology
- shelled
- duplicates
- regulat
- regulators
- regulator
- lever
- pulley
- chev
- oi
- resur
- ourse
- hesitating
- russ
- noons
- flaw
- gasket
- fury
- exceptionally
- surfaced
- repeatedly
- escapes
- pragmatic
- consti
- opponents
- laural
- squeaked
- andrews
- clou
- crept
- firewood
- maples
- dogwoods
- lowell
- unu
- periodicals
- historic
- interes
- lawful
- scanners
- attempted
- thoroughness
- mag
- announcers
- tele
- ivan
- rodriguez
- ballplayers
- routing
- enthusiast
- ducted
- gettin
- brussels
- sprouts
- kale
- pony
- grazing
- pears
- extinguishers
- depleter
- extinguisher
- timed
- contaminants
- probe
- ionization
- miller
- temptation
- squareness
- buckles
- fea
- lettering
- vin
- vinyl
- balloons
- recy
- commented
- nudge
- decomposable
- flips
- emptying
- regressive
- defen
- kate
- curves
- raphael
- atchafalaya
- sausa
- alvarez
- applebee
- nonstructured
- torture
- nur
- fai
- glorious
- esoteric
- producer
- hairspray
- batch
- partic
- preteen
- unlikely
- dynamic
- raunchy
- horrifyingly
- poppins
- differed
- eclipses
- belie
- lebaron
- peeling
- gears
- oklahoman
- beatings
- proy
- condoms
- stupidity
- truthful
- faded
- marker
- reflective
- adheres
- sealing
- dings
- variance
- prop
- pressuring
- primed
- bragging
- sickening
- shitty
- drags
- burners
- putts
- teeing
- lodging
- dialers
- provision
- specify
- dialing
- prised
- weir
- overloads
- hoosiers
- crossing
- delancey
- thrillers
- backless
- ani
- nick
- nite
- dragnet
- bald
- marlo
- collier
- brigham
- estonia
- agriculture
- foodwise
- rioting
- secede
- proportionately
- hinders
- tubs
- brougham
- trunks
- shy
- gadgetry
- '6'
- interiors
- veered
- revolving
- reverting
- envy
- exhausts
- hairy
- gettingest
- daught
- bertinelli
- dysfunctional
- childfaring
- miracles
- bette
- midler
- redbook
- previewing
- postage
- unauthorized
- mayors
- discredit
- ps
- productions
- chariots
- gladiator
- fluent
- batches
- subtitle
- subtitled
- gems
- supernatural
- accusing
- migh
- mondays
- thrust
- lifters
- drills
- rocking
- referee
- abrasive
- maintaining
- posed
- refusing
- coins
- conversions
- dormitory
- unused
- ramp
- hydraulic
- disposer
- escapement
- incorporating
- leonard
- nimoy
- trekkie
- luke
- spock
- mccoy
- admiral
- hobbled
- vulcans
- doohan
- scotty
- addams
- averaging
- decrease
- munich
- snows
- chattanooga
- lori
- coldness
- membered
- unemp
- fetus
- complications
- slobs
- equation
- nameless
- malformed
- sincere
- deliberations
- dismissed
- indicted
- revenge
- subsequent
- provoked
- provocation
- qualifies
- mitigating
- contender
- linguini
- hawaiian
- luau
- angie
- shellfish
- clam
- cheeses
- nachos
- resurrection
- lutheran
- scanned
- cooperating
- toss
- inmate
- interpretation
- blanks
- executioner
- bamorghini
- skyhawk
- dominican
- nantes
- castles
- vineyard
- consignment
- goodwill
- crushes
- sewer
- res
- unoccupied
- assassinated
- menace
- perspec
- relativity
- vantage
- weighted
- reflect
- subservient
- integration
- ith
- frien
- drudgery
- montpe
- mont
- monteplier
- montpelier
- everett
- yack
- tromping
- unlimited
- wedge
- fairway
- flus
- startling
- '286'
- turret
- scien
- simulators
- plugged
- upgrades
- custer
- '386'
- trenches
- trencher
- stunt
- cul
- sac
- rearranged
- clancy
- novell
- netware
- ark
- ladonna
- peck
- bourne
- ultimatum
- enveloped
- amsterdam
- holland
- harpsichordist
- forte
- warrington
- cheating
- harry
- heroic
- mayfield
- corrupts
- lig
- hatteras
- imaging
- legalese
- himsnelf
- koop
- scarcity
- highland
- jogs
- gyms
- inequities
- stimulate
- deductor
- bentsen
- drunks
- lafferty
- infringe
- snuffed
- snuff
- compares
- gilmore
- accomplishes
- william
- thrice
- mating
- sows
- suckling
- hernia
- carcass
- cloves
- pineapples
- cranberries
- hominy
- barb
- automatics
- avis
- crashed
- lens
- porsche
- turbo
- carrera
- mys
- mushrooming
- percentagewise
- folderol
- lifeguard
- jarring
- flui
- watchers
- pokes
- blamed
- ceases
- intravenous
- cell
- quests
- subsidies
- slashed
- entitlement
- trades
- beauticians
- unending
- spiral
- consumers
- unf
- ailments
- magerick
- celtic
- transplanted
- rolando
- harper
- plaint
- straighter
- dayer
- plumbed
- bolted
- logan
- accredited
- professorship
- distressing
- fiel
- treasury
- refunds
- halt
- spying
- scaled
- loading
- challenger
- stat
- mirv
- roomy
- cargo
- recommends
- volvos
- wagons
- conscientiously
- emiss
- hypothesize
- muncie
- terre
- haute
- triggering
- verify
- drivable
- emerges
- overgrazed
- reclaimed
- prettiest
- palm
- paintbrush
- septic
- hummingbirds
- hummingbird
- pooped
- annuals
- countrified
- supermarket
- coaster
- afterburners
- gliding
- oomph
- subs
- gambled
- insulating
- spec
- verandas
- genes
- drapes
- guppies
- platies
- fishies
- glacier
- playgrounds
- wilderness
- scaries
- rayburn
- curling
- nominal
- fulfill
- synagogue
- geriatrics
- app
- degenerative
- communiky
- enhance
- assist
- text
- biogra
- daniels
- prince
- phillip
- criticizing
- miniseries
- scarlett
- spectacular
- torrents
- ligh
- horizontally
- arid
- crisp
- sleigh
- brighton
- springtime
- skie
- hammered
- subtly
- brianna
- lib
- submerged
- loosening
- leaks
- tar
- gravel
- plastered
- drywalled
- plastering
- terri
- exasperating
- swelling
- squirming
- swells
- shrinks
- retains
- highlight
- captive
- legos
- technic
- lego
- stare
- engagements
- sousa
- refreshments
- rehearsal
- donations
- municipal
- conduct
- nitny
- altoona
- lockhaven
- nighttimes
- ama
- emerson
- maceboast
- circuitry
- vacationer
- wausau
- unduly
- sunglasses
- grip
- durable
- faulty
- recliner
- pinto
- sequoias
- redwoods
- bryce
- tetons
- sequoia
- driveways
- snowmen
- snowballs
- marketed
- acceleration
- suspension
- lumbar
- sma
- bur
- skyrocketing
- govern
- exclude
- ballgame
- warrant
- rounds
- brats
- eff
- nativity
- facings
- casings
- relieve
- strase
- reliever
- relieving
- sander
- cabinet
- equipments
- dado
- rotary
- sicknesses
- bryan
- mamas
- packards
- solburns
- frown
- niggardly
- chintzy
- megs
- mirroring
- epidemic
- immunizations
- rays
- mumps
- rubella
- inaccuracy
- defined
- issued
- hypocritical
- stings
- laundering
- contr
- governed
- discomfort
- stea
- holster
- spontaneous
- headquarters
- bitterest
- fluctuations
- texts
- doen
- rosie
- '''neil'
- thomases
- trimmer
- clump
- tithing
- homeowner
- computerization
- stale
- subroutine
- libra
- clara
- beastie
- triggered
- pledged
- fren
- ally
- organi
- trombone
- weathers
- facetious
- directors
- spells
- compulsive
- childr
- fluffs
- toppings
- brea
- torque
- underdrive
- sportier
- beetle
- coolers
- bonneville
- secondaries
- quadrajet
- compulsion
- elevation
- variations
- hilltops
- mines
- hamster
- cruelty
- parakeet
- parakreet
- burmese
- deactivated
- infatuated
- jobbies
- visualize
- boggling
- slid
- clamped
- kisses
- everywh
- brag
- gramm
- overturning
- renegotiate
- kickbacks
- valdez
- defi
- batted
- hangs
- threats
- emit
- che
- churning
- remembrance
- networking
- conformance
- wyatt
- extremey
- bennigan
- vincent
- chefalia
- whataburger
- zillion
- mercado
- juarez
- tallest
- ewaldes
- cont
- stoneleigh
- chews
- yapping
- collies
- roughest
- hollered
- battling
- obedience
- squats
- vaca
- pilgrims
- medieval
- relics
- bemerton
- newness
- turin
- muffins
- requests
- helman
- tart
- zing
- cele
- layering
- fluffier
- joins
- jennifer
- unselfish
- tutoring
- affiliated
- aimlessly
- perky
- shins
- hyper
- burdensome
- earphones
- timbuktu
- onna
- lieutenant
- biologist
- sliding
- tremors
- variedly
- bakers
- aprons
- sweatshirt
- wigs
- lamb
- bunnies
- symbols
- milky
- polytechnochloride
- mought
- trashmore
- lifts
- riverview
- tranged
- strongest
- recessionary
- stagnate
- unteachable
- prominent
- chide
- remaining
- backbone
- newborns
- fullest
- firewh
- daffodil
- jung
- aquinas
- libretto
- rossini
- mahler
- dutchen
- trumpets
- elixir
- floated
- swapped
- tyme
- tempco
- trooper
- gisland
- carribean
- unpacking
- lotto
- alcatraz
- hairdresser
- crui
- janice
- furry
- eaves
- rafter
- cactuses
- furrows
- wrung
- plink
- construe
- thinkings
- bue
- buechele
- grieves
- gullible
- manufactures
- borden
- bib
- overalls
- oshman
- evaluated
- unfor
- linguistic
- austria
- niagara
- coasts
- carolinas
- leisurely
- modesto
- cheeseburgers
- incapable
- hygienic
- inoperable
- oxygen
- banish
- relocated
- realtor
- listings
- precautions
- integrate
- cooperatives
- reallocate
- reorganize
- accelerate
- transient
- commish
- tenderhearted
- galaxies
- crud
- mutations
- feazure
- ballooned
- reclamation
- merits
- axiom
- fiends
- sensitivity
- aboveboard
- evaluating
- veggies
- unarmed
- resembling
- tallow
- scalloped
- weighing
- strap
- squeaker
- closing
- mullin
- squeakers
- marquee
- bluish
- hydrogen
- sulfide
- h2s
- ramps
- vaccine
- preventable
- syringes
- needles
- feared
- ruf
- riffraff
- haves
- nots
- earhout
- bulletproof
- vest
- hedge
- tollbooth
- hatcher
- taverns
- sailboats
- ancle
- lounge
- cocktail
- sailer
- cruiser
- hull
- spars
- rigging
- gusts
- wearisome
- flaky
- markups
- arming
- stra
- quail
- swedish
- munch
- intermission
- doughy
- frosts
- iceberg
- schoolteacher
- altrusa
- upholstery
- garl
- jupiter
- musically
- auditions
- repertory
- outlet
- auditory
- lear
- educationally
- verified
- chording
- pianist
- min
- ec
- subbranch
- emigrated
- beware
- entrepreneurial
- ventures
- banked
- stored
- footsteps
- postcards
- notify
- notifying
- steals
- hides
- subsequently
- corrective
- leers
- downright
- outright
- shu
- newest
- apathetic
- absol
- prolong
- roofing
- retool
- zigzag
- kan
- untalented
- washed
- salvageable
- gluing
- feds
- interrupting
- faults
- caucasian
- educ
- thei
- officed
- deputy
- pruned
- gladiolas
- amaryllis
- conf
- plantings
- sprout
- narcissus
- psychic
- rerun
- activate
- rusted
- rusts
- fenders
- repainted
- acco
- dreary
- expen
- salting
- weinstocks
- wad
- hilt
- dolphene
- feelt
- throwed
- wheelchairs
- emjoy
- anheimer
- tela
- kindly
- innovated
- endeavors
- adam
- particulars
- abusive
- evolutionary
- duplication
- imagers
- allocate
- optimally
- squawk
- evolution
- insurers
- entity
- burnable
- ticketed
- charities
- braved
- suede
- cardigan
- appointments
- unlined
- toasty
- lightweight
- fireplaces
- dense
- ethanol
- smokestacks
- mowers
- wedded
- organism
- nutritionally
- bamba
- szechuan
- pancho
- binders
- assignments
- developments
- cashew
- avoiding
- suey
- disburse
- squeeze
- sq
- faculties
- pauper
- brokerage
- anticipation
- cherished
- commodity
- famuel
- slopes
- biness
- furlough
- promoted
- nec
- shasta
- salmon
- sk
- walleye
- fighters
- fillet
- foil
- seekers
- scrutiny
- tarrant
- bobsy
- accu
- smiled
- growled
- mistrials
- railroaded
- convalescent
- unsettling
- senile
- graying
- exercisings
- unaffordable
- restricts
- casse
- gabrielli
- bankrupted
- cello
- viola
- composers
- boutiques
- darling
- chanting
- canseco
- ramming
- vinny
- utility
- outweighing
- sundance
- smithsonian
- crosswords
- planners
- artists
- bazo
- faron
- spiro
- gyro
- dulcimer
- jarreau
- contorted
- bonnie
- rait
- grammy
- unedu
- sprayer
- routers
- cookie
- varnish
- smoother
- hayloft
- franklin
- gradual
- increasement
- torpedoed
- downside
- blythe
- tonkin
- macintoshes
- graphical
- multitasking
- gestures
- vocabulary
- compilers
- consultation
- interactive
- discriminating
- correlate
- funnest
- gentler
- panicked
- sassy
- westmin
- westminster
- infra
- mondale
- situa
- circuses
- disrepair
- dashboard
- ce
- beefing
- patrols
- visibility
- lifted
- cumberland
- cobb
- thefts
- superficial
- cracked
- electrically
- manufactured
- bordering
- elects
- aerodyne
- aerob
- brace
- publicize
- killings
- duri
- commentators
- blurbs
- bog
- dur
- countdown
- newscasts
- unreasonable
- moderator
- unorganized
- moderated
- assumingly
- importers
- dahlmer
- ohi
- nightmarish
- withheld
- sovereign
- martial
- puritanical
- permissible
- acquitting
- acquit
- impaneling
- dismissing
- foreman
- deliberating
- una
- restate
- unannounced
- sweep
- definitive
- bodily
- behaviors
- enters
- privacies
- melanie
- spry
- announcements
- anson
- fayetteville
- waynesboro
- delinquency
- fre
- gainfully
- tremen
- thriving
- towar
- grit
- pail
- latent
- compression
- ovens
- armor
- fierce
- finagle
- nationalizing
- cutoff
- operat
- unionized
- distinction
- institutionally
- expedient
- innovativeness
- expedi
- unequal
- plaintiff
- novices
- bets
- leaky
- luby
- taping
- promo
- blurb
- mutt
- hooper
- veterin
- spay
- neuter
- frie
- shorties
- decreased
- unrestricted
- glut
- magnum
- rushes
- oper
- preset
- styro
- frank
- shocks
- allot
- frowned
- chronicle
- analytical
- abnormality
- overwhelmingly
- academia
- descriptions
- addictive
- reevaluate
- divvy
- allocated
- psy
- psychedelic
- crosby
- stills
- performers
- secular
- druggie
- shipping
- maximize
- actuall
- revelation
- polymers
- roadways
- hoop
- funn
- heavenly
- retailers
- induce
- inducement
- recycler
- saskatoon
- welfor
- employing
- deposits
- arithmetic
- sums
- colleague
- internet
- infusions
- incurring
- surveying
- assesses
- footloose
- smattering
- greetings
- snobby
- paled
- refrained
- acute
- indivigal
- thrives
- categorized
- receptionist
- lar
- curve
- critter
- incumbent
- entrenched
- standardizing
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d2
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: true
```
</details>
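For reference, here is a minimal decoding sketch for a model trained with the config above. This is an outline, not an official recipe: it assumes the `espnet` and `espnet_model_zoo` packages are installed, `<model-id>` is a placeholder for this repository's Hugging Face ID, and the input audio is 16 kHz mono, as required by `fs: 16k` in the config.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Hypothetical usage sketch; substitute the real repository ID for <model-id>.
speech2text = Speech2Text.from_pretrained(
    "<model-id>",
    ctc_weight=0.3,  # mirrors model_conf.ctc_weight in the config above
)

speech, rate = soundfile.read("sample.wav")  # 16 kHz mono recording
text, tokens, token_ids, hypothesis = speech2text(speech)[0]
print(text)
```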
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
esiebomajeremiah/autonlp-email-classification-657119381 | esiebomajeremiah | 2022-03-22T13:57:29Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:esiebomajeremiah/autonlp-data-email-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-22T13:54:29Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- esiebomajeremiah/autonlp-data-email-classification
co2_eq_emissions: 3.516233232503715
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 657119381
- CO2 Emissions (in grams): 3.516233232503715
## Validation Metrics
- Loss: 0.00037395773688331246
- Accuracy: 1.0
- Precision: 1.0
- Recall: 1.0
- AUC: 1.0
- F1: 1.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/esiebomajeremiah/autonlp-email-classification-657119381
```
Or Python API:
```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("esiebomajeremiah/autonlp-email-classification-657119381", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("esiebomajeremiah/autonlp-email-classification-657119381", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()  # index of the highest-scoring label
print(model.config.id2label[predicted_class])
``` |
edwardjross/xlm-roberta-base-finetuned-panx-all | edwardjross | 2022-03-22T13:46:27Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-22T13:33:47Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1812
- F1: 0.8567
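As a quick sanity check, the model can be tried with the 🤗 Transformers NER pipeline. The snippet below is a sketch: the repository ID is assumed from this card's title, and the example sentence is only an illustration.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="edwardjross/xlm-roberta-base-finetuned-panx-all",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```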
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2983 | 1.0 | 1252 | 0.1945 | 0.8033 |
| 0.1603 | 2.0 | 2504 | 0.1889 | 0.8441 |
| 0.1012 | 3.0 | 3756 | 0.1812 | 0.8567 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Ketzu/koelectra-sts-v0.6 | Ketzu | 2022-03-22T13:18:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-20T11:10:57Z | ---
tags:
- generated_from_trainer
metrics:
- spearmanr
model-index:
- name: koelectra-sts-v0.6
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8698381401893762
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-sts-v0.6
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0059
- Pearson: 0.9988
- Spearmanr: 0.8698
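For inference, the snippet below is a hedged sketch: it assumes the repository ID from this card's title and a single-logit regression head, which is typical for STS fine-tunes (the card reports Pearson/Spearman correlations). The Korean sentence pair is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Ketzu/koelectra-sts-v0.6")
model = AutoModelForSequenceClassification.from_pretrained("Ketzu/koelectra-sts-v0.6")

# The model scores the semantic similarity of the two sentences.
inputs = tokenizer("오늘 날씨가 좋다.", "오늘은 날씨가 참 맑다.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```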
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:---------:|
| 0.0036 | 1.0 | 6250 | 0.0082 | 0.9983 | 0.8698 |
| 0.0038 | 2.0 | 12500 | 0.0065 | 0.9986 | 0.8697 |
| 0.0105 | 3.0 | 18750 | 0.0071 | 0.9985 | 0.8698 |
| 0.0008 | 4.0 | 25000 | 0.0059 | 0.9988 | 0.8698 |
| 0.0008 | 5.0 | 31250 | 0.0059 | 0.9988 | 0.8698 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.10.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/laurentozon | huggingtweets | 2022-03-22T12:21:52Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-22T12:21:17Z | ---
language: en
thumbnail: http://www.huggingtweets.com/laurentozon/1647951707700/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1505670688635564034/K4L2yhhB_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Laurent Ozon</div>
<div style="text-align: center; font-size: 14px;">@laurentozon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Laurent Ozon.
| Data | Laurent Ozon |
| --- | --- |
| Tweets downloaded | 3192 |
| Retweets | 753 |
| Short tweets | 382 |
| Tweets kept | 2057 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3uddth9b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @laurentozon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dzqbuuu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dzqbuuu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/laurentozon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
saattrupdan/voxpopuli-wav2vec2-large-cv8-da | saattrupdan | 2022-03-22T09:58:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"da",
"dataset:common_voice_8_0",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-02T23:29:05Z | ---
language:
- da
license: cc-by-nc-4.0
tasks:
- automatic-speech-recognition
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: voxpopuli-wav2vec2-large-cv8-da
results:
- task:
type: automatic-speech-recognition
dataset:
type: mozilla-foundation/common_voice_8_0
args: da
name: Danish Common Voice 8.0
metrics:
- type: wer
value: 40.54
- task:
type: automatic-speech-recognition
dataset:
type: Alvenir/alvenir_asr_da_eval
name: Alvenir ASR test dataset
metrics:
- type: wer
value: 40.66
---
# VoxPopuli-Wav2vec2-large-CV8-da
## Model description
This model is a fine-tuned version of the Swedish acoustic model [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) on the Danish part of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), containing ~6 crowdsourced hours of read-aloud Danish speech.
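## How to use

A minimal transcription sketch with the 🤗 Transformers ASR pipeline (the repository ID is taken from this card, and `sample.wav` stands in for any 16 kHz Danish recording). Note that the with-LM scores below additionally require a 5-gram language model during decoding; the plain pipeline decodes without one.

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="saattrupdan/voxpopuli-wav2vec2-large-cv8-da",
)
print(transcriber("sample.wav")["text"])
```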
## Performance
The model achieves the following WER scores (lower is better):
| **Dataset** | **WER without LM** | **WER with 5-gram LM** |
| :---: | ---: | ---: |
| [Danish part of Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/viewer/da/train) | 48.04 | 40.54 |
| [Alvenir test set](https://huggingface.co/datasets/Alvenir/alvenir_asr_da_eval) | 48.43 | 40.66 | |
edoumazane/distilbert-base-uncased-finetuned-ner | edoumazane | 2022-03-22T09:56:14Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-22T09:27:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9247134038800705
- name: Recall
type: recall
value: 0.9384718648618414
- name: F1
type: f1
value: 0.9315418355449449
- name: Accuracy
type: accuracy
value: 0.9836529143565221
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9247
- Recall: 0.9385
- F1: 0.9315
- Accuracy: 0.9837
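For a quick test, the model can be loaded with the NER pipeline. This is a sketch only; the repository ID is assumed from this card's title.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="edoumazane/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```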
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2421 | 1.0 | 878 | 0.0701 | 0.9083 | 0.9217 | 0.9149 | 0.9801 |
| 0.0555 | 2.0 | 1756 | 0.0599 | 0.9204 | 0.9357 | 0.9280 | 0.9830 |
| 0.0311 | 3.0 | 2634 | 0.0612 | 0.9247 | 0.9385 | 0.9315 | 0.9837 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
merve/anime-faces-generator | merve | 2022-03-22T09:15:31Z | 0 | 2 | keras | [
"keras",
"tf-keras",
"dcgan",
"dataset:merve/anime-faces",
"license:apache-2.0",
"region:us"
] | null | 2022-03-04T16:41:30Z | ---
license: apache-2.0
library_name: keras
tags:
- dcgan
datasets:
- merve/anime-faces
---
## Model description
An anime face generator model built from the [TensorFlow DCGAN example](https://www.tensorflow.org/tutorials/generative/dcgan).
## Training and evaluation data
The model is trained on the [anime faces dataset](https://huggingface.co/datasets/merve/anime-faces). The dataset consists of 21551 anime faces scraped from www.getchu.com, which were then cropped using the anime face detection algorithm [here](https://github.com/nagadomi/lbpcascade_animeface). All images are resized to 64 × 64 for convenience. The model takes a noise vector as input and uses Conv2DTranspose layers for upsampling. If you want to pass the output to another discriminator, note that the generated images are 28 × 28.
## How to use this model
You can use this model to generate new anime faces. If you want to continue training, use it together with the [discriminator](https://huggingface.co/merve/anime-faces-discriminator) inside a `tf.GradientTape()` training loop, as in the DCGAN tutorial (a sketch of one training step is included below).
```
from huggingface_hub import from_pretrained_keras
model = from_pretrained_keras("merve/anime-faces-generator")
```
You can generate examples from random noise.
```
import tensorflow as tf
noise_dim = 100  # assumed latent dimension, as in the DCGAN tutorial
seed = tf.random.normal([16, noise_dim])  # generate 16 faces
predictions = model(seed, training=False)  # inference mode
```
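If you want to continue adversarial training, below is a sketch of one training step. It is only an outline under these assumptions: the companion discriminator loads from [merve/anime-faces-discriminator](https://huggingface.co/merve/anime-faces-discriminator), both networks are trainable Keras models, the latent dimension is 100, and the losses follow the TensorFlow DCGAN tutorial.

```
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

generator = from_pretrained_keras("merve/anime-faces-generator")
discriminator = from_pretrained_keras("merve/anime-faces-discriminator")

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
generator_optimizer = tf.keras.optimizers.Adam(1e-4)
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
noise_dim = 100  # assumed latent size, as in the DCGAN tutorial

@tf.function
def train_step(images):
    noise = tf.random.normal([tf.shape(images)[0], noise_dim])
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)
        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)
        # The generator wants fakes classified as real; the discriminator wants the opposite.
        gen_loss = cross_entropy(tf.ones_like(fake_output), fake_output)
        disc_loss = (cross_entropy(tf.ones_like(real_output), real_output)
                     + cross_entropy(tf.zeros_like(fake_output), fake_output))
    generator_optimizer.apply_gradients(
        zip(gen_tape.gradient(gen_loss, generator.trainable_variables),
            generator.trainable_variables))
    discriminator_optimizer.apply_gradients(
        zip(disc_tape.gradient(disc_loss, discriminator.trainable_variables),
            discriminator.trainable_variables))
```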
## Intended use and biases
This model is not intended for production.
### Generated images
 |
Yaxin/xlm-roberta-base-conll2003-ner | Yaxin | 2022-03-22T08:11:52Z | 81 | 3 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-22T07:36:34Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test-conll2003-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9459188783174762
- name: Recall
type: recall
value: 0.9537192864355436
- name: F1
type: f1
value: 0.94980306712478
- name: Accuracy
type: accuracy
value: 0.9911218410498034
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-conll2003-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0470
- Precision: 0.9459
- Recall: 0.9537
- F1: 0.9498
- Accuracy: 0.9911
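For a quick check without the pipeline abstraction, here is a manual decoding sketch (the repository ID is assumed from this card's title; label names come from the model's own config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Yaxin/xlm-roberta-base-conll2003-ner")
model = AutoModelForTokenClassification.from_pretrained("Yaxin/xlm-roberta-base-conll2003-ner")

inputs = tokenizer("John lives in Berlin.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-token to its highest-scoring tag.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```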
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
lazyturtl/WEC-types | lazyturtl | 2022-03-22T04:54:04Z | 60 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-03-22T04:53:55Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: WEC-types
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7830188870429993
---
# WEC-types
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
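To try the classifier on your own images, a minimal sketch (the repository ID is taken from this card; the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="lazyturtl/WEC-types")
print(classifier("my_wave_energy_converter.jpg"))  # returns the four labels below with scores
```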
## Example Images
#### Attenuators

#### Oscillating water column

#### Overtopping Devices

#### Point Absorber
 |
razent/SciFive-large-Pubmed_PMC-MedNLI | razent | 2022-03-22T04:05:21Z | 675 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"mednli",
"en",
"dataset:pubmed",
"dataset:pmc/open_access",
"arxiv:2106.03598",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-20T17:24:33Z | ---
language:
- en
tags:
- text2text-generation
- mednli
datasets:
- pubmed
- pmc/open_access
widget:
- text: "mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable"
---
# SciFive Pubmed+PMC Large on MedNLI
## Introduction
The fine-tuned SciFive Pubmed+PMC Large model achieves state-of-the-art results on [MedNLI (Medical Natural Language Inference)](https://paperswithcode.com/sota/natural-language-inference-on-mednli).
Paper: [SciFive: a text-to-text transformer model for biomedical literature](https://arxiv.org/abs/2106.03598)
Authors: _Long N. Phan, James T. Anibal, Hieu Tran, Shaurya Chanana, Erol Bahadroglu, Alec Peltekian, Grégoire Altan-Bonnet_
## How to use
For more details, do check out [our Github repo](https://github.com/justinphan3110/SciFive).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")
model = AutoModelForSeq2SeqLM.from_pretrained("razent/SciFive-large-Pubmed_PMC-MedNLI")
model.cuda()
sent_1 = "In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA."
sent_2 = "The patient is hemodynamically stable"
text = f"mednli: sentence1: {sent_1} sentence2: {sent_2}"
encoding = tokenizer.encode_plus(text, padding='max_length', max_length=256, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=8,
early_stopping=True
)
for output in outputs:
line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(line)
``` |
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES | StivenLancheros | 2022-03-21T22:36:06Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-21T22:05:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES
This model is a fine-tuned version of [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2043
- Precision: 0.8666
- Recall: 0.8614
- F1: 0.8639
- Accuracy: 0.9734
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in Spanish (MT translated) and English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT-translated CRAFT) were concatenated. To improve the F1 score, transfer learning was completed in two steps.
Using [StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES](https://huggingface.co/StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES) as a base model, I fine-tuned once more on the original CRAFT dataset in English.
Biobert --> Augmented CRAFT --> CRAFT ES (MT translated)
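A minimal inference sketch using the standard token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_AugmentedTransfer_ES",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

print(ner("El gen BRCA1 se expresa en el tejido mamario humano."))
```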
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0088 | 1.0 | 1360 | 0.1793 | 0.8616 | 0.8487 | 0.8551 | 0.9721 |
| 0.0046 | 2.0 | 2720 | 0.1925 | 0.8618 | 0.8426 | 0.8521 | 0.9713 |
| 0.0032 | 3.0 | 4080 | 0.1926 | 0.8558 | 0.8630 | 0.8594 | 0.9725 |
| 0.0011 | 4.0 | 5440 | 0.2043 | 0.8666 | 0.8614 | 0.8639 | 0.9734 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES | StivenLancheros | 2022-03-21T22:25:59Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-21T20:16:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Precision: 0.8298
- Recall: 0.8306
- F1: 0.8302
- Accuracy: 0.9659
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT-translated CRAFT) were concatenated.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0624 | 1.0 | 4078 | 0.1844 | 0.8002 | 0.7923 | 0.7963 | 0.9607 |
| 0.0284 | 2.0 | 8156 | 0.1937 | 0.8394 | 0.7988 | 0.8186 | 0.9637 |
| 0.0118 | 3.0 | 12234 | 0.2007 | 0.8285 | 0.8232 | 0.8258 | 0.9649 |
| 0.0043 | 4.0 | 16312 | 0.2224 | 0.8298 | 0.8306 | 0.8302 | 0.9659 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
elena-soare/bat-pre-trained | elena-soare | 2022-03-21T22:23:37Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-21T21:28:30Z | # Text2SQL Task T5-Base + E-commerce pre-training
This is our T5 model, pre-trained on 18k e-commerce pages from popular blogs and fine-tuned on Spider using a schema serialization. It is inspired by the work done in [Picard](https://github.com/ElementAI/picard/), with an added pre-training step for better performance on e-commerce data.
## Running the model
Inputs must be serialized in the following question/schema format:
```
[question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ...
```
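A hedged end-to-end sketch (the question, schema, and generation settings below are illustrative; inputs must follow the serialization above):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("elena-soare/bat-pre-trained")
model = AutoModelForSeq2SeqLM.from_pretrained("elena-soare/bat-pre-trained")

# Serialize the question and database schema (illustrative values)
text = "How many orders were placed? | store | orders : id ( 1 , 2 ) , total ( 9.99 , 20.00 )"
input_ids = tokenizer(text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```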
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN | StivenLancheros | 2022-03-21T22:07:55Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-21T20:11:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2276
- Precision: 0.8078
- Recall: 0.8258
- F1: 0.8167
- Accuracy: 0.9629
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) corpus in English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical. This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag obtained from the official ontologies for each entity class. Both datasets (original, augmented) were concatenated.
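A minimal inference sketch with the model classes loaded explicitly (the example sentence is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_EN"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("BRCA1 is expressed in human breast tissue."))
```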
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0842 | 1.0 | 2719 | 0.1765 | 0.7606 | 0.7785 | 0.7695 | 0.9542 |
| 0.0392 | 2.0 | 5438 | 0.1971 | 0.7990 | 0.7958 | 0.7974 | 0.9596 |
| 0.0138 | 3.0 | 8157 | 0.2094 | 0.8013 | 0.8196 | 0.8103 | 0.9620 |
| 0.0082 | 4.0 | 10876 | 0.2276 | 0.8078 | 0.8258 | 0.8167 | 0.9629 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Ebtihal/AraBertMo_base_V8 | Ebtihal | 2022-03-21T22:03:44Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:04Z | Arabic Model AraBertMo_base_V8
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاتة"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert). AraBERTMo_base uses the same BERT-Base config. AraBERTMo_base now comes in 10 new variants. All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name. Checkpoints are available in PyTorch formats.
## Pretraining Corpus
The `AraBertMo_base_V8` model was pre-trained on ~3 million words: [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 40032| 8 | 64 | 5008 | 10h 5m 57s | 7.2164 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V8")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V8")
```
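You can also query the model through the fill-mask pipeline; the prompt below reuses one of the widget examples above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V8")
print(fill_mask("السلام عليكم ورحمة[MASK] وبركاتة"))
```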
## This model was built for master's degree research at:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
huggingtweets/elonmusk-garyvee | huggingtweets | 2022-03-21T19:57:10Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-21T19:55:22Z | ---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-garyvee/1647892564866/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1503591435324563456/foUrqiEw_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493524673962852353/qRxbC9Xq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Gary Vaynerchuk</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-garyvee</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Gary Vaynerchuk.
| Data | Elon Musk | Gary Vaynerchuk |
| --- | --- | --- |
| Tweets downloaded | 2200 | 3247 |
| Retweets | 102 | 712 |
| Short tweets | 671 | 842 |
| Tweets kept | 1427 | 1693 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/abt9l46e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-garyvee's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/4wls4y5v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-garyvee')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ameer05/distilbart-cnn-12-6-finetuned-resume-summarizer | Ameer05 | 2022-03-21T19:35:06Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-21T19:18:43Z | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-resume-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-resume-summarizer
This model is a fine-tuned version of [Ameer05/model-tokenizer-repo](https://huggingface.co/Ameer05/model-tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1123
- Rouge1: 52.5826
- Rouge2: 34.3861
- Rougel: 41.8525
- Rougelsum: 51.0015
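For quick experimentation, the model can be loaded through the standard summarization pipeline (a minimal sketch; the input text and generation lengths below are illustrative):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Ameer05/distilbart-cnn-12-6-finetuned-resume-summarizer",
)

resume_text = "Senior software engineer with ten years of experience leading backend teams ..."  # illustrative input
print(summarizer(resume_text, max_length=142, min_length=56)[0]["summary_text"])
```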
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 3.2243 | 42.8593 | 24.8652 | 34.1789 | 41.406 |
| No log | 1.91 | 10 | 2.6948 | 48.8571 | 28.6711 | 39.2648 | 46.188 |
| No log | 2.91 | 15 | 2.4665 | 50.6085 | 30.4034 | 39.7406 | 48.5449 |
| No log | 3.91 | 20 | 2.3329 | 52.2357 | 32.3398 | 41.574 | 49.4316 |
| 3.6611 | 4.91 | 25 | 2.2362 | 52.0134 | 33.1612 | 41.3103 | 50.255 |
| 3.6611 | 5.91 | 30 | 2.1833 | 51.5434 | 32.7045 | 40.5683 | 49.4238 |
| 3.6611 | 6.91 | 35 | 2.1462 | 53.5144 | 35.4518 | 42.8615 | 51.4053 |
| 3.6611 | 7.91 | 40 | 2.1518 | 52.0985 | 33.6754 | 41.5936 | 50.5159 |
| 2.0326 | 8.91 | 45 | 2.1075 | 53.1401 | 34.9721 | 42.2973 | 51.8454 |
| 2.0326 | 9.91 | 50 | 2.1123 | 52.5826 | 34.3861 | 41.8525 | 51.0015 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
cb2-kai/finetuning-sentiment-model-3000-samples | cb2-kai | 2022-03-21T18:34:27Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-21T14:19:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8679245283018867
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3568
- Accuracy: 0.86
- F1: 0.8679
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/rebeudeter | huggingtweets | 2022-03-21T17:55:17Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-21T17:55:08Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1421289007753859077/3X1VHMRx_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Billy ☄️🧡</div>
<div style="text-align: center; font-size: 14px;">@rebeudeter</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Billy ☄️🧡.
| Data | Billy ☄️🧡 |
| --- | --- |
| Tweets downloaded | 3220 |
| Retweets | 363 |
| Short tweets | 205 |
| Tweets kept | 2652 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3mz5i9lj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rebeudeter's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qau529e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qau529e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rebeudeter')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ianMconversica/autonlp-test-654919306 | ianMconversica | 2022-03-21T17:29:34Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autonlp",
"unk",
"dataset:McIan91/autonlp-data-test",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-21T17:28:50Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- McIan91/autonlp-data-test
co2_eq_emissions: 0.7013851565380207
---
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 654919306
- CO2 Emissions (in grams): 0.7013851565380207
## Validation Metrics
- Loss: 2.5570242404937744
- Rouge1: 72.7273
- Rouge2: 44.4444
- RougeL: 72.7273
- RougeLsum: 72.7273
- Gen Len: 17.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/McIan91/autonlp-test-654919306
``` |
espnet/marathi_openslr64 | espnet | 2022-03-21T16:23:56Z | 1 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:mr_openslr64",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-21T16:17:30Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- mr_openslr64
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/marathi_openslr64`
This model was trained by Sujay Suresh Kumar using the mr_openslr64 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 91325a1e58ca0b13494b94bf79b186b095fe0b58
pip install -e .
cd egs2/mr_openslr64/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/marathi_openslr64
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 21 16:06:03 UTC 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.11.0+cu102`
- Git hash: `91325a1e58ca0b13494b94bf79b186b095fe0b58`
- Commit date: `Mon Mar 21 00:40:52 2022 +0000`
## asr_train_asr_conformer_xlsr_raw_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_asr_model_valid.acc.ave/marathi_test|299|3625|72.9|22.5|4.7|1.7|28.9|88.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_asr_model_valid.acc.ave/marathi_test|299|20557|91.4|3.1|5.5|1.9|10.5|88.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_asr_model_valid.acc.ave/marathi_test|299|13562|86.5|6.3|7.1|1.4|14.9|88.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_xlsr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_xlsr_raw_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 60
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 3
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_bpe150_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/marathi_train_sp/wav.scp
- speech
- sound
- - dump/raw/marathi_train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/marathi_dev/wav.scp
- speech
- sound
- - dump/raw/marathi_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- ▁
- ा
- ी
- े
- त
- र
- ं
- न
- क
- ्
- व
- ि
- ल
- ▁म
- स
- ो
- श
- द
- च
- म
- ▁अ
- ▁आ
- ण
- ु
- ला
- ह
- ▁आहे
- य
- ▁स
- ग
- ▁ह
- ्या
- चा
- ▁प
- ड
- ▁क
- प
- ट
- ▁ब
- ज
- र्
- ्र
- ▁?
- ▁ज
- ब
- ून
- वा
- ▁एक
- ▁या
- ळ
- ात
- ख
- ध
- ▁ति
- ठ
- ल्या
- ले
- ू
- ▁तुम्हाला
- ां
- ार
- घ
- ची
- ▁अस
- थ
- ▁का
- ने
- णि
- ॅ
- ▁त
- ▁परवा
- ▁ते
- ली
- ▁गेल
- ळा
- ष
- ▁कर
- .
- च्या
- ▁न
- वर
- ▁त्या
- ▁प्र
- ▁करू
- ▁ग
- ्ट
- ई
- झ
- ▁फ
- ाय
- क्ष
- ▁काय
- पूर
- ▁होती
- मध
- ▁तिथ
- ▁काही
- ए
- ▁वि
- ▁दोन
- ▁महिन्या
- व्हा
- तील
- जार
- ▁नाही
- ँ
- ▁पुत
- ॉ
- ▁झाला
- ▁दिसल
- ▁साल
- ▁रस्त्यावर
- स्त
- जवळ
- न्म
- मध्य
- ऊ
- ▁इथे
- ▁तुमच
- ▁शकते
- मान
- ▁उद्
- फ
- ै
- ढ
- ','
- इ
- ौ
-
- ृ
- ओ
- ः
- ॲ
- आ
- '-'
- ञ
- औ
- '!'
- ऑ
- ऱ
- ऐ
- छ
- उ
- '?'
- भ
- अ
- ऋ
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_xlsr
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 4
linear_units: 1024
num_blocks: 3
dropout_rate: 0.3
positional_dropout_rate: 0.3
attention_dropout_rate: 0.3
input_layer: conv2d
normalize_before: true
macaron_style: false
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 17
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 1024
num_blocks: 3
dropout_rate: 0.3
positional_dropout_rate: 0.3
self_attention_dropout_rate: 0.3
src_attention_dropout_rate: 0.3
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
espnet/aaf_openslr57 | espnet | 2022-03-21T14:36:37Z | 1 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"fr",
"dataset:openslr",
"arxiv:1804.00015",
"region:us"
] | automatic-speech-recognition | 2022-03-21T04:58:18Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: fr
datasets:
- openslr
---
### Demo: How to use in ESPnet2
```python
# coming soon
```
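Until the official snippet lands, ESPnet2 ASR models on the Hub can typically be run through `espnet2.bin.asr_inference.Speech2Text`; the sketch below assumes this repo ships a standard ESPnet2 config and that the audio is a 16 kHz mono recording:
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Downloads the config and weights from the Hub, then builds the decoder
speech2text = Speech2Text.from_pretrained("espnet/aaf_openslr57")

speech, rate = sf.read("sample.wav")  # hypothetical 16 kHz recording
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis
print(text)
```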
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Newt007/bin_cls_att.h5 | Newt007 | 2022-03-21T14:18:09Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-21T14:11:06Z | A binary classification model for distinguishing malicious from benign requests.
```
from keras import models
model = models.load_model('bin_cls_att.h5')  # the checkpoint shipped in this repo is named bin_cls_att.h5
```
Requirements:
- Python 3.7
- keras==2.4.3
- tensorflow==2.3.1
|
huggingtweets/rupertboneham-rupertskids-survivorcbs | huggingtweets | 2022-03-21T13:31:40Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-21T13:26:08Z | ---
language: en
thumbnail: http://www.huggingtweets.com/rupertboneham-rupertskids-survivorcbs/1647869465531/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2879716355/bd3a0d75f2ec004c61cf470e66895eda_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/984777181963448321/GZEqLnVr_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1488244197467381765/3F2BzfCJ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Rupert Boneham & Rupert Boneham & SURVIVOR</div>
<div style="text-align: center; font-size: 14px;">@rupertboneham-rupertskids-survivorcbs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Rupert Boneham & Rupert Boneham & SURVIVOR.
| Data | Rupert Boneham | Rupert Boneham | SURVIVOR |
| --- | --- | --- | --- |
| Tweets downloaded | 3139 | 352 | 3222 |
| Retweets | 710 | 151 | 551 |
| Short tweets | 142 | 17 | 540 |
| Tweets kept | 2287 | 184 | 2131 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2m3rl64a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rupertboneham-rupertskids-survivorcbs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1o5vktei) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1o5vktei/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rupertboneham-rupertskids-survivorcbs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
beston91/gpt2-xl_ft_logits_5k_2 | beston91 | 2022-03-21T10:16:30Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-20T23:02:24Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_logits_5k_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_logits_5k_2
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.2407
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.1106 |
| No log | 1.99 | 54 | 6.1400 |
| No log | 2.99 | 81 | 6.1875 |
| No log | 3.99 | 108 | 6.2407 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 17.59415626525879 |
imkaushalpatel/YOLOv5 | imkaushalpatel | 2022-03-21T09:50:21Z | 0 | 0 | null | [
"region:us"
] | null | 2022-03-21T09:49:14Z | YOLOv5 🚀 is a family of compound-scaled object detection models trained on the COCO dataset, and includes simple functionality for Test Time Augmentation (TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML and TFLite.
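This card ships no usage instructions; a minimal sketch using the upstream Ultralytics `torch.hub` entrypoint (which pulls the official weights, not necessarily this repo's files) is:
```python
import torch

# Load a pretrained YOLOv5s model from the official Ultralytics hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

# Run inference on an image (URL or local path)
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()   # summary of detections
results.xyxy[0]   # tensor of [xmin, ymin, xmax, ymax, confidence, class]
```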
|
Ameer05/test | Ameer05 | 2022-03-21T09:35:03Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-21T08:16:45Z | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [Ameer05/tokenizer-repo](https://huggingface.co/Ameer05/tokenizer-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6109
- Rouge1: 54.9442
- Rouge2: 45.3299
- Rougel: 50.5219
- Rougelsum: 53.6475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.91 | 5 | 2.3705 | 53.62 | 44.3835 | 49.6135 | 52.693 |
| No log | 1.91 | 10 | 1.9035 | 47.478 | 37.0934 | 39.7935 | 45.1881 |
| No log | 2.91 | 15 | 1.7990 | 54.2488 | 45.0782 | 49.8421 | 52.7564 |
| No log | 3.91 | 20 | 1.7125 | 55.7903 | 46.7554 | 52.2733 | 54.9389 |
| 2.4456 | 4.91 | 25 | 1.6421 | 52.2279 | 43.4391 | 49.6955 | 51.2915 |
| 2.4456 | 5.91 | 30 | 1.6102 | 55.8598 | 47.3293 | 53.1337 | 54.8596 |
| 2.4456 | 6.91 | 35 | 1.6164 | 53.7902 | 44.6622 | 49.5045 | 52.2304 |
| 2.4456 | 7.91 | 40 | 1.6015 | 51.5597 | 42.0333 | 47.9639 | 50.1154 |
| 1.239 | 8.91 | 45 | 1.6067 | 53.0301 | 43.7214 | 49.0227 | 51.8109 |
| 1.239 | 9.91 | 50 | 1.6109 | 54.9442 | 45.3299 | 50.5219 | 53.6475 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
Yaxin/electra-small-discriminator-yelp-mlm | Yaxin | 2022-03-21T09:21:02Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-21T08:41:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: test-electra-small-yelp
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: yelp_review_full yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.5677007577622891
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-electra-small-yelp
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the yelp_review_full yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2601
- Accuracy: 0.5677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
doctorlan/autonlp-ctrip-653519223 | doctorlan | 2022-03-21T09:01:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:doctorlan/autonlp-data-ctrip",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-21T08:38:42Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-ctrip
co2_eq_emissions: 24.879856894708393
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653519223
- CO2 Emissions (in grams): 24.879856894708393
## Validation Metrics
- Loss: 0.14671853184700012
- Accuracy: 0.9676666666666667
- Precision: 0.9794159885112494
- Recall: 0.9742857142857143
- AUC: 0.9901396825396825
- F1: 0.9768441155407017
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-ctrip-653519223
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-ctrip-653519223", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
doctorlan/autonlp-JD-bert-653619233 | doctorlan | 2022-03-21T08:54:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:doctorlan/autonlp-data-JD-bert",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-21T08:48:42Z | ---
tags: autonlp
language: unk
widget:
- text: "I love AutoNLP 🤗"
datasets:
- doctorlan/autonlp-data-JD-bert
co2_eq_emissions: 5.919372931976555
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 653619233
- CO2 Emissions (in grams): 5.919372931976555
## Validation Metrics
- Loss: 0.15083155035972595
- Accuracy: 0.952650883627876
- Precision: 0.9631399317406143
- Recall: 0.9412941961307538
- AUC: 0.9828776962419389
- F1: 0.9520917678812415
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/doctorlan/autonlp-JD-bert-653619233
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("doctorlan/autonlp-JD-bert-653619233", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
mrp/SimCSE-model-WangchanBERTa-V2 | mrp | 2022-03-21T08:34:51Z | 7 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"camembert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-21T08:33:54Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# mrp/SimCSE-model-WangchanBERTa-V2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mrp/SimCSE-model-WangchanBERTa-V2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mrp/SimCSE-model-WangchanBERTa-V2')
model = AutoModel.from_pretrained('mrp/SimCSE-model-WangchanBERTa-V2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mrp/SimCSE-model-WangchanBERTa-V2)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 221 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: CamembertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
IsaacSST/gpt2-xl-ft-d4-0.3 | IsaacSST | 2022-03-21T04:24:22Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-21T01:38:11Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d4-0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d4-0.3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3401
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.2334 |
| No log | 2.0 | 312 | 1.2392 |
| No log | 3.0 | 468 | 1.2944 |
| 1.1868 | 4.0 | 624 | 1.3401 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln28 | BigSalmon | 2022-03-21T03:14:50Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-21T03:03:13Z | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln28")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln28")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
dodobird/distilbert-base-uncased-finetuned-emotion | dodobird | 2022-03-21T03:04:10Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-21T00:37:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9248889383977278
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2154
- Accuracy: 0.9245
- F1: 0.9249
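Given the `text-classification` pipeline tag, a minimal usage sketch (the input sentence is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; labels follow the emotion dataset's classes.
classifier = pipeline("text-classification",
                      model="dodobird/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled with how this turned out!"))
```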
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8175 | 1.0 | 250 | 0.3139 | 0.9025 | 0.8986 |
| 0.2485 | 2.0 | 500 | 0.2154 | 0.9245 | 0.9249 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
youzanai/bert-shipping-address-chinese | youzanai | 2022-03-21T02:43:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | ---
license: apache-2.0
---
A BERT model trained on a corpus of Youzan customer shipping addresses.
For example code, see https://github.com/youzanai/trexpark |
youzanai/bert-customer-message-chinese | youzanai | 2022-03-21T02:43:18Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-08T01:52:53Z | A BERT model trained on a corpus of customer questions from Youzan merchant customer service.
For example code, see https://github.com/youzanai/trexpark |
espnet/MInDS-14_es-ES | espnet | 2022-03-21T02:31:06Z | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2022-03-21T01:15:24Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: es-ES
license: mit
---
# RESULTS
## Environments
- date: `Mon Mar 14 22:28:37 UTC 2022`
- python version: `3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `d5322b2dc4844dce1d14268b6848607e2a3dee21`
- Commit date: `Mon Mar 14 20:21:16 2022 +0000`
## asr_train_asr_raw_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|49|4134|64.6|23.5|11.8|16.4|51.8|98.0|
|inference_asr_model_valid.acc.ave_5best/valid|47|4178|66.8|20.2|13.0|19.2|52.5|100.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|49|8690|73.2|18.0|8.8|12.9|39.7|98.0|
|inference_asr_model_valid.acc.ave_5best/valid|47|8751|74.3|15.7|10.0|15.6|41.3|100.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|
beston91/gpt2-xl_ft_mult_10k | beston91 | 2022-03-20T22:27:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-18T15:46:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_10k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 54 | 1.3358 |
| No log | 1.99 | 108 | 0.7486 |
| No log | 2.99 | 162 | 0.6997 |
| No log | 3.99 | 216 | 0.6916 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
### Perplexity
Score: 25.89222526550293
### Dataset Size
Size: 5000 |
wasilkas/wav2vec2-base-timit-demo-colab | wasilkas | 2022-03-20T20:04:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-20T18:08:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the TIMIT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4491
- Wer: 0.3382
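A minimal transcription sketch, assuming 16 kHz mono audio as expected by wav2vec2-base ("sample.wav" is a placeholder path):
```python
from transformers import pipeline

# The pipeline handles feature extraction and CTC decoding end to end.
asr = pipeline("automatic-speech-recognition",
               model="wasilkas/wav2vec2-base-timit-demo-colab")
print(asr("sample.wav"))
```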
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4787 | 4.0 | 500 | 1.4190 | 0.9939 |
| 0.5835 | 8.0 | 1000 | 0.4711 | 0.4370 |
| 0.219 | 12.0 | 1500 | 0.4555 | 0.3994 |
| 0.1251 | 16.0 | 2000 | 0.4515 | 0.3654 |
| 0.0834 | 20.0 | 2500 | 0.4923 | 0.3564 |
| 0.0632 | 24.0 | 3000 | 0.4410 | 0.3399 |
| 0.0491 | 28.0 | 3500 | 0.4491 | 0.3382 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
KoboldAI/GPT-Neo-2.7B-Shinen | KoboldAI | 2022-03-20T18:49:18Z | 669 | 22 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: en
license: mit
---
# GPT-Neo 2.7B - Shinen
## Model Description
GPT-Neo 2.7B-Shinen is a finetune created using EleutherAI's GPT-Neo 2.7B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content.
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training data
The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way:
```
[Theme: <theme1>, <theme2> ,<theme3>]
<Story goes here>
```
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Shinen')
>>> generator("She was staring at me", do_sample=True, min_length=50)
[{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}]
```
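Because the training data is theme-tagged, reproducing the tag line at the top of a prompt may steer generation toward those themes; a minimal sketch (the theme values below are illustrative, not taken from the dataset):
```py
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Shinen')
# Hypothetical themed prompt mirroring the training-data format above.
prompt = "[Theme: romance, drama]\nShe was staring at me"
print(generator(prompt, do_sample=True, min_length=50))
```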
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo-Shinen was trained on a dataset known to contain profanity, lewd, and otherwise abrasive language. GPT-Neo-Shinen *WILL* produce socially unacceptable text without warning.
GPT-Neo-Shinen will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
``` |
cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga | cammy | 2022-03-20T14:36:20Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-20T13:26:27Z | ---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-cnn_dailymail-1000-lit-evalMA-ga
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-cnn_dailymail-1000-lit-evalMA-ga
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6852
- Rouge1: 25.789
- Rouge2: 11.0694
- Rougel: 20.7716
- Rougelsum: 22.4851
- Gen Len: 46.32
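A minimal summarization sketch (the article string is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="cammy/pegasus-cnn_dailymail-1000-lit-evalMA-ga")
article = "Replace this placeholder with any news-style passage to summarize."
print(summarizer(article, max_length=64, min_length=16))
```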
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 250 | 1.7061 | 25.8286 | 10.8156 | 20.9502 | 22.6588 | 44.36 |
| 1.4533 | 2.0 | 500 | 1.6876 | 26.0862 | 11.5197 | 21.1282 | 23.0963 | 45.65 |
| 1.4533 | 3.0 | 750 | 1.6852 | 25.789 | 11.0694 | 20.7716 | 22.4851 | 46.32 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vinaykudari/gpt2-acled-t2s | vinaykudari | 2022-03-20T14:26:41Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-20T03:17:45Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-acled-t2s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-acled-t2s
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2978 | 1.0 | 6621 | 1.2262 |
| 1.0378 | 2.0 | 13242 | 1.0048 |
| 0.9537 | 3.0 | 19863 | 0.9414 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
KoboldAI/GPT-J-6B-Janeway | KoboldAI | 2022-03-20T12:59:44Z | 4,477 | 13 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"en",
"arxiv:2101.00027",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: en
license: mit
---
# GPT-J 6B - Janeway
## Model Description
GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model.
## Training data
The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.
Some parts of the dataset have been prepended with the following text: `[Genre: <genre1>,<genre2>]`
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\n"It\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
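Since parts of the training data are genre-tagged, prepending a tag line in the same format may nudge generation toward those genres; a minimal sketch (the genre values are illustrative):
```py
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Janeway')
# Hypothetical genre-tagged prompt mirroring the training-data format.
prompt = "[Genre: science fiction, adventure]\nWelcome Captain Janeway,"
print(generator(prompt, do_sample=True, min_length=50))
```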
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model uses the following model as base:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
|
KoboldAI/GPT-Neo-2.7B-Janeway | KoboldAI | 2022-03-20T12:57:50Z | 124 | 6 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:04Z | ---
language: en
license: mit
---
# GPT-Neo 2.7B - Janeway
## Model Description
GPT-Neo 2.7B-Janeway is a finetune created using EleutherAI's GPT-Neo 2.7B model.
## Training data
The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.
Some parts of the dataset have been prepended with the following text: `[Genre: <genre1>,<genre2>]`
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='KoboldAI/GPT-Neo-2.7B-Janeway')
>>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\n"It\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your use case, GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
### BibTeX entry and citation info
The model is made using the following software:
```bibtex
@software{gpt-neo,
author = {Black, Sid and
Leo, Gao and
Wang, Phil and
Leahy, Connor and
Biderman, Stella},
title = {{GPT-Neo: Large Scale Autoregressive Language
Modeling with Mesh-Tensorflow}},
month = mar,
year = 2021,
note = {{If you use this software, please cite it using
these metadata.}},
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.5297715},
url = {https://doi.org/10.5281/zenodo.5297715}
}
``` |
mitiku/AmharicWICPostag10Tags | mitiku | 2022-03-20T10:11:33Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-06T20:46:20Z | ---
tags:
- generated_from_trainer
model-index:
- name: AmharicWICPostag10Tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicWICPostag10Tags
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
mitiku/AmharicCacoPostag | mitiku | 2022-03-20T10:11:18Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-06T20:34:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: AmharicCacoPostag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicCacoPostag
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
mitiku/AmharicWICPostag | mitiku | 2022-03-20T10:10:58Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-06T20:42:53Z | ---
tags:
- generated_from_trainer
model-index:
- name: AmharicWICPostag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AmharicWICPostag
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
mrp/simcse-model-wangchanberta | mrp | 2022-03-20T09:00:47Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"feature-extraction",
"arxiv:2104.08821",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-20T08:34:14Z | # {mrp/simcse-model-wangchanberta}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE ([paper](https://arxiv.org/pdf/2104.08821.pdf)) with mBERT as the baseline model, training on Thai Wikipedia ([data](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620))
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["ฉันนะคือคนรักชาติยังไงละ!", "พวกสามกีบล้มเจ้า!"]
model = SentenceTransformer('mrp/simcse-model-wangchanberta')
embeddings = model.encode(sentences)
print(embeddings)
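
# Optional follow-up (a sketch, assuming sentence-transformers >= 1.1):
# score the pair with cosine similarity, as for semantic search or clustering.
from sentence_transformers import util
print(util.cos_sim(embeddings[0], embeddings[1]))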
``` |
willcai/wav2vec2_common_voice_accents_5 | willcai | 2022-03-20T07:07:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-19T22:07:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0027
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4163 | 1.34 | 400 | 0.5520 |
| 0.3305 | 2.68 | 800 | 0.1698 |
| 0.2138 | 4.03 | 1200 | 0.1104 |
| 0.1714 | 5.37 | 1600 | 0.0944 |
| 0.1546 | 6.71 | 2000 | 0.0700 |
| 0.1434 | 8.05 | 2400 | 0.0610 |
| 0.1272 | 9.4 | 2800 | 0.0493 |
| 0.1183 | 10.74 | 3200 | 0.0371 |
| 0.1113 | 12.08 | 3600 | 0.0468 |
| 0.1013 | 13.42 | 4000 | 0.0336 |
| 0.0923 | 14.77 | 4400 | 0.0282 |
| 0.0854 | 16.11 | 4800 | 0.0410 |
| 0.0791 | 17.45 | 5200 | 0.0252 |
| 0.0713 | 18.79 | 5600 | 0.0128 |
| 0.0662 | 20.13 | 6000 | 0.0252 |
| 0.0635 | 21.48 | 6400 | 0.0080 |
| 0.0607 | 22.82 | 6800 | 0.0098 |
| 0.0557 | 24.16 | 7200 | 0.0069 |
| 0.0511 | 25.5 | 7600 | 0.0057 |
| 0.0474 | 26.85 | 8000 | 0.0046 |
| 0.045 | 28.19 | 8400 | 0.0037 |
| 0.0426 | 29.53 | 8800 | 0.0027 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
espnet/ftshijt_espnet2_asr_dsing_hubert_conformer | espnet | 2022-03-20T04:46:53Z | 1 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:dsing",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | 2022-03-20T04:45:28Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- dsing
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/ftshijt_espnet2_asr_dsing_hubert_conformer`
This model was trained by jiatong using the dsing recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/dsing/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_dsing_hubert_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Mar 19 23:02:37 EDT 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `c1ed71c6899e54c0b3dad82687886b1183cd0885`
- Commit date: `Wed Mar 16 23:34:49 2022 -0400`
## asr_train_asr_conformer7_hubert_ll60k_large_raw_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|4018|83.6|9.4|7.0|6.4|22.8|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|4632|81.4|12.3|6.3|4.5|23.1|52.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|18692|88.5|3.1|8.4|5.9|17.4|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|21787|87.9|4.3|7.8|4.5|16.6|52.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/dev|482|6097|82.2|7.1|10.7|5.7|23.5|58.3|
|decode_asr_lm_lm_train_lm_bpe500_valid.loss.ave_asr_model_latest/test|480|7736|81.7|9.2|9.1|4.0|22.3|52.1|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer7_hubert_ll60k_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_hubert_ll60k_large_raw_bpe500_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train30_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train30_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0025
scheduler: warmuplr
scheduler_conf:
warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁I
- ''''
- ▁YOU
- S
- T
- ▁THE
- M
- ▁ME
- ▁A
- ▁AND
- ▁TO
- E
- A
- ING
- D
- ▁MY
- ▁
- O
- ▁IT
- I
- N
- RE
- Y
- ▁BE
- ▁IN
- ▁ON
- ▁LOVE
- U
- ▁WE
- LL
- H
- ▁YOUR
- ▁S
- IN
- ▁OF
- ▁DO
- ▁THAT
- ▁ALL
- L
- ▁DON
- ▁OH
- ▁LIKE
- ▁KNOW
- ▁FOR
- ▁CAN
- ▁JUST
- P
- ▁BUT
- ED
- K
- ▁WHEN
- ▁SO
- R
- ▁GO
- ▁WHAT
- ▁C
- ▁WITH
- W
- ▁F
- C
- ▁NO
- ER
- ▁ONE
- ▁LET
- VE
- ES
- ▁NOW
- ▁BABY
- G
- ▁GOT
- ▁COME
- CAUSE
- LE
- B
- ▁B
- AR
- ▁UP
- ▁'
- ▁W
- ▁SEE
- ▁TIME
- ▁ARE
- ▁G
- ▁LOOK
- ▁THIS
- F
- ▁IS
- ▁NEVER
- ▁M
- ▁P
- AN
- ▁WAS
- ▁WAY
- ▁IF
- OR
- ▁SAY
- V
- ▁R
- ▁T
- ▁DOWN
- RA
- ▁THERE
- ▁HEART
- ▁NOT
- RO
- ▁WILL
- ▁OUT
- CE
- ▁WANT
- ▁YEAH
- ▁HAVE
- ▁GIVE
- ▁TOO
- ▁GONNA
- ▁HOW
- ▁NEED
- ▁GET
- ▁TAKE
- ▁EVERY
- ▁FEEL
- ▁HE
- EN
- ▁FROM
- ▁HA
- ▁K
- ▁SHE
- 'ON'
- ▁DI
- RI
- ▁ONLY
- NE
- ▁WHO
- ▁AWAY
- ▁E
- ▁D
- ▁LIFE
- ▁MAKE
- IC
- ▁BACK
- ▁WHERE
- ▁MADE
- ▁DAY
- ▁HERE
- ▁LO
- ▁HER
- ▁AS
- ▁GOOD
- ▁WANNA
- ▁OOH
- ▁TELL
- LY
- TH
- ▁WON
- ▁LIGHT
- ▁KEEP
- ▁MA
- ▁LA
- ▁SH
- ▁WORLD
- ▁MORE
- ▁LI
- AL
- ▁COULD
- ▁GIRL
- ▁NOTHING
- ▁EVER
- ▁THINK
- IE
- ▁BY
- ▁AT
- ▁TONIGHT
- ▁THEY
- ▁CALL
- ▁HO
- ▁WOULD
- IL
- ▁OUR
- ▁FALL
- ▁NIGHT
- ▁THAN
- ▁DE
- ▁SOME
- ▁WAIT
- ▁RIGHT
- ▁RE
- ▁HALLELUJAH
- ▁TH
- NG
- ▁CO
- ▁WERE
- ▁TALK
- ET
- ▁BO
- ▁HOLD
- UR
- ▁BEEN
- ▁US
- ▁PA
- VER
- ▁EYES
- ▁DREAM
- ▁SONG
- ▁SHOULD
- ▁STILL
- ▁OVER
- TA
- ▁ANYMORE
- IGHT
- ▁STAY
- ▁BETTER
- LESS
- ▁THROUGH
- ▁LITTLE
- X
- ▁GONE
- ▁AIN
- ▁DA
- ▁HOLDING
- ▁HURT
- ▁TRY
- ▁FIND
- Z
- DE
- ▁LAST
- ▁SAID
- ▁ALWAYS
- ▁BODY
- ▁MIND
- ▁CRY
- ▁EVEN
- ▁RUN
- ▁HOPE
- ▁WITHOUT
- ▁MISS
- ▁ABOUT
- ▁HAND
- ▁J
- ▁AGAIN
- ▁THOUGH
- ▁NAH
- ▁LIVE
- ▁BA
- ▁OLD
- ▁HEAD
- ▁FIRE
- ▁MAN
- ▁SOMETHING
- ▁WHY
- THER
- ▁HOME
- ▁OR
- ▁INSIDE
- ▁NEW
- ▁HEY
- TION
- ▁EVERYTHING
- ▁HAD
- ▁SOMETIMES
- ▁HARD
- ▁TOUCH
- ▁HEAR
- ▁AM
- ▁MUCH
- ▁LONG
- ▁STAR
- GETTING
- ▁WALK
- ▁PEOPLE
- ▁BEFORE
- ▁CLOSE
- ▁TWO
- ▁FAR
- ▁SHOW
- ▁STAND
- ▁LOSE
- ▁HELP
- ▁NAME
- ▁BOY
- ▁TRUE
- ▁PLAY
- ▁DARK
- ▁THINGS
- ▁NA
- ▁TEAR
- ▁END
- ▁NOBODY
- ▁SEA
- ▁ROCKABYE
- ▁BELIEVE
- ▁BROKE
- ▁AROUND
- ▁START
- ▁KISS
- ▁FEELING
- ▁BREAK
- ▁SOMEONE
- ▁FRIEND
- ▁ALONE
- ▁BEAUTIFUL
- ▁CRAZY
- ▁OWN
- OSE
- ▁STOP
- ▁LOST
- ▁HIM
- ▁BAD
- ▁CHANCE
- ▁REALLY
- ▁WISH
- ▁MOVE
- ▁SKY
- ▁PLACE
- AKE
- ▁LEAVE
- ▁YA
- ▁STRONG
- ▁PUT
- ▁OPEN
- ▁WRONG
- ▁COLD
- OCK
- ▁USED
- ▁FOUND
- ▁LONELY
- ▁DANCE
- EACH
- ▁ANOTHER
- ▁SIDE
- ▁UNDER
- ▁MATTER
- ▁THESE
- ▁CARE
- ▁MINE
- ▁SHINE
- ▁AFRAID
- ▁TURN
- ▁PLEASE
- ▁SUN
- ▁DIAMOND
- ▁UNTIL
- ▁FACE
- ▁LEARN
- ▁TRUST
- ▁WONDER
- ▁BREATH
- ATE
- ▁SORRY
- ▁HU
- ▁WATCH
- ▁LATE
- ROUND
- ▁ARMS
- ▁PERFECT
- ▁MAYBE
- ▁PULL
- ▁REMEMBER
- ▁FIGHT
- ▁MYSELF
- ▁INTO
- ▁DARLING
- ▁THUNDER
- ▁FOLLOW
- ▁REASON
- ▁BURN
- ▁HIS
- ▁MUST
- ▁FREE
- ▁FLASHLIGHT
- ▁1
- ▁ENOUGH
- ▁DRINK
- ▁WORDS
- ▁HIDE
- ▁UN
- ▁FORGET
- ▁SURE
- ▁CHANGE
- ▁SMILE
- ▁PROMISE
- ▁FOREVER
- '2'
- ▁SWEET
- ▁SAME
- ▁OOOH
- ▁PART
- ▁SOMEBODY
- NESS
- ▁BRIGHT
- ▁HEAVEN
- ▁DEEP
- ▁HIGH
- ▁INSTEAD
- ▁MOMENT
- ▁ALONG
- ▁ALRIGHT
- ▁SLOW
- ▁TOMORROW
- ▁SOUL
- ▁QU
- ▁PUSH
- ▁CHANDELIER
- ▁LEFT
- SIDE
- ▁TOLD
- ▁KNEW
- READY
- ▁LOVING
- ▁SAW
- '3'
- ▁WORK
- ▁DANCING
- ▁THREE
- ▁SAVE
- ▁SHOOT
- ▁LEAD
- ▁SKI
- ▁WILD
- ▁WIND
- ▁WHILE
- ▁EDGE
- ▁HAPPY
- ▁FEAR
- STUCK
- ▁MOST
- ▁LISTEN
- ▁WOAH
- ▁FIRST
- ▁JOLENE
- ▁VOICE
- ▁COMP
- ▁MILLION
- FUL
- ▁OOOOOH
- ▁CAME
- ▁RISE
- ▁NEXT
- ▁COUNT
- ▁MOUNTAIN
- ▁ROOM
- ▁BLUE
- ▁HIT
- ▁RAISE
- J
- ▁THOUSAND
- ▁SHAP
- ▁TREAT
- ▁DRY
- ▁FINALLY
- ▁TITANIUM
- ▁CARRY
- ▁TRUTH
- ▁WATER
- ▁MORNING
- TIME
- ▁BELONG
- ▁UMA
- ▁ALIVE
- ▁ELSE
- ▁ANGEL
- ▁BRAND
- ▁APART
- ▁EVERYBODY
- ▁SOUND
- ▁GUESS
- ▁PRAY
- ▁FAITH
- ▁AFTER
- ▁THROW
- ▁TRIED
- ▁SLEEP
- ▁FOOL
- ▁DISCOVERING
- ▁FUCK
- ▁TASTE
- ▁UNDERSTAND
- ▁SHAME
- ▁POWER
- ▁WELCOME
- ▁FELT
- ▁SAFE
- ▁DESERVE
- ▁GAME
- ▁SUPERMA
- ▁SWEAR
- ▁BETWEEN
- ▁GLASS
- ▁CATCH
- ▁TOGETHER
- '0'
- '4'
- '6'
- '5'
- '1'
- '8'
- '7'
- '9'
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d2
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Wikidepia/gpt2-spam | Wikidepia | 2022-03-20T01:10:59Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-20T01:08:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt2-spam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-spam
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl_ft_mult_1k | beston91 | 2022-03-19T23:56:20Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-18T23:49:34Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl_ft_mult_1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl_ft_mult_1k
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.91 | 5 | 6.7968 |
| No log | 1.91 | 10 | 6.6621 |
| No log | 2.91 | 15 | 6.4335 |
| No log | 3.91 | 20 | 6.1137 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
espnet/ml_openslr63 | espnet | 2022-03-19T23:33:01Z | 1 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"ml",
"dataset:openslr",
"arxiv:1804.00015",
"region:us"
] | automatic-speech-recognition | 2022-03-19T22:54:54Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: ml
datasets:
- openslr
---
## ESPnet2 ASR pretrained model
### `espnet/ml_openslr63`
This model was trained by Preksha Patel, Ruben Mampilli, and Bharani Ujjaini Kempaiah using the egs2/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
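Pending the official snippet, usage will presumably mirror other ESPnet2 uploads (an assumption; the packaging of this particular checkpoint is not confirmed by the authors):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Standard ESPnet2 inference API; "sample.wav" is a placeholder path.
speech2text = Speech2Text.from_pretrained("espnet/ml_openslr63")
speech, rate = soundfile.read("sample.wav")
text, *_ = speech2text(speech)[0]
print(text)
```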
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
msamogh/autonlp-cai-out-of-scope-649919112 | msamogh | 2022-03-19T21:40:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-19T21:40:14Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 0.49924480682533606
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919112
- CO2 Emissions (in grams): 0.49924480682533606
## Validation Metrics
- Loss: 0.49354293942451477
- Accuracy: 0.8064516129032258
- Precision: 0.8181818181818182
- Recall: 0.9
- AUC: 0.8689393939393939
- F1: 0.8571428571428572
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919112
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919112", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919112", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
msamogh/autonlp-cai-out-of-scope-649919118 | msamogh | 2022-03-19T21:40:40Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:msamogh/autonlp-data-cai-out-of-scope",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-19T21:40:15Z | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP 🤗"
datasets:
- msamogh/autonlp-data-cai-out-of-scope
co2_eq_emissions: 0.3996916853309825
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 649919118
- CO2 Emissions (in grams): 0.3996916853309825
## Validation Metrics
- Loss: 0.48289698362350464
- Accuracy: 0.8064516129032258
- Precision: 0.828125
- Recall: 0.8833333333333333
- AUC: 0.8353535353535354
- F1: 0.8548387096774193
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/msamogh/autonlp-cai-out-of-scope-649919118
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919118", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("msamogh/autonlp-cai-out-of-scope-649919118", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
sanchit-gandhi/wav2vec2-2-gpt2-no-adapter-regularisation | sanchit-gandhi | 2022-03-19T17:43:39Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-17T16:34:45Z | ---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7494
- Wer: 1.0532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4828 | 2.8 | 2500 | 4.0554 | 1.7873 |
| 0.8683 | 5.61 | 5000 | 2.5401 | 1.3156 |
| 0.4394 | 8.41 | 7500 | 1.7519 | 1.1129 |
| 0.0497 | 11.21 | 10000 | 1.7102 | 1.0738 |
| 0.031 | 14.01 | 12500 | 1.7395 | 1.0512 |
| 0.0508 | 16.82 | 15000 | 1.7254 | 1.0463 |
| 0.0462 | 19.62 | 17500 | 1.7494 | 1.0532 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
vinaykudari/distilGPT-ft-eli5 | vinaykudari | 2022-03-19T17:24:50Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-19T16:05:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilGPT-ft-eli5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilGPT-ft-eli5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 281 | 5.8277 |
| 5.7427 | 2.0 | 562 | 5.7525 |
| 5.7427 | 3.0 | 843 | 5.7016 |
| 5.5614 | 4.0 | 1124 | 5.6593 |
| 5.5614 | 5.0 | 1405 | 5.6273 |
| 5.4408 | 6.0 | 1686 | 5.6029 |
| 5.4408 | 7.0 | 1967 | 5.5855 |
| 5.3522 | 8.0 | 2248 | 5.5739 |
| 5.2948 | 9.0 | 2529 | 5.5670 |
| 5.2948 | 10.0 | 2810 | 5.5643 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
ShahafAricha/nqg-gpt2 | ShahafAricha | 2022-03-19T17:20:23Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-18T21:51:57Z | ---
license: other
datasets:
- squad
tags:
- question-generation
widget:
- text: "The Technikum was conceived in the early 1900s by the German-Jewish fund Ezrah as a school of [HL]engineering and sciences[HL].[SEP]"
---
# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan (2019), A Recurrent BERT-based Model for Question Generation](https://www.aclweb.org/anthology/D19-5821/).
**This is a Reproduce Version from distilled squad dataset**
More detail: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)
## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
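```

A minimal generation sketch, assuming this checkpoint works with the standard `text-generation` pipeline and emits the question after the `[SEP]` token (both are assumptions, not stated on this card):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ShahafAricha/nqg-gpt2")
# Highlight the answer span with [HL] ... [HL] and close with [SEP],
# matching the input format above.
context = ("The Technikum was conceived in the early 1900s by the German-Jewish "
           "fund Ezrah as a school of [HL]engineering and sciences[HL].[SEP]")
print(generator(context, max_length=100))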
``` |
huggingtweets/abombayboy | huggingtweets | 2022-03-19T16:13:12Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-19T15:53:28Z | ---
language: en
thumbnail: http://www.huggingtweets.com/abombayboy/1647706387106/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1465673407178043396/aYbTBRbu_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bombay Boy</div>
<div style="text-align: center; font-size: 14px;">@abombayboy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bombay Boy.
| Data | Bombay Boy |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 927 |
| Short tweets | 181 |
| Tweets kept | 2130 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3paz3q98/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @abombayboy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/331ordwj) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/331ordwj/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/abombayboy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
richardc7/electricidad-small-finetuned-amazon-review-classification | richardc7 | 2022-03-19T15:29:47Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-17T12:37:33Z | ---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: electricidad-small-finetuned-amazon-review-classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.581
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electricidad-small-finetuned-amazon-review-classification
This model is a fine-tuned version of [mrm8488/electricidad-small-discriminator](https://huggingface.co/mrm8488/electricidad-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9601
- Accuracy: 0.581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0136 | 1.0 | 25000 | 1.0153 | 0.5414 |
| 0.9416 | 2.0 | 50000 | 0.9942 | 0.5576 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
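## How to use
The card above omits a usage example, so here is a minimal inference sketch using the `transformers` pipeline. The label names are an assumption: the card does not document how `LABEL_0`–`LABEL_4` map to star ratings.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="richardc7/electricidad-small-finetuned-amazon-review-classification",
)
# Returns e.g. [{'label': 'LABEL_3', 'score': ...}]; the label-to-star mapping is undocumented
print(classifier("El producto llegó roto y el vendedor no respondió."))
```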
|
IsaacSST/gpt2-xl-ft-d3 | IsaacSST | 2022-03-19T15:18:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-19T12:41:36Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-ft-d3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-ft-d3
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3252
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 2022
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.2135 |
| No log | 2.0 | 312 | 1.2181 |
| No log | 3.0 | 468 | 1.2754 |
| 1.1743 | 4.0 | 624 | 1.3252 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
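## How to use
The fine-tuning dataset is unknown, so this is only a minimal text-generation sketch with an illustrative prompt:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="IsaacSST/gpt2-xl-ft-d3")
print(generator("The experiment showed that", max_length=50, num_return_sequences=1))
```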
|
mansidw/fake-tipping-6000-samples | mansidw | 2022-03-19T09:46:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-19T09:23:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: fake-tipping-6000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake-tipping-6000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Pavithra/code-parrot | Pavithra | 2022-03-19T04:04:29Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-19T03:52:29Z | # CodeParrot 🦜 (small)
CodeParrot 🦜 is a GPT-2 model (110M parameters) trained to generate Python code.
## Usage
You can load the CodeParrot model and tokenizer directly in `transformers`:
```Python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lvwerra/codeparrot-small")
model = AutoModelForCausalLM.from_pretrained("lvwerra/codeparrot-small")
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model(**inputs)
```
or with a `pipeline`:
```Python
from transformers import pipeline
pipe = pipeline("text-generation", model="lvwerra/codeparrot-small")
outputs = pipe("def hello_world():")
```
## Training
The model was trained on the cleaned [CodeParrot 🦜 dataset](https://huggingface.co/datasets/lvwerra/codeparrot-clean) with the following settings:
|Config|Value|
|-------|-----|
|Batch size| 192 |
|Context size| 1024 |
|Training steps| 150,000|
|Gradient accumulation| 1|
|Gradient checkpointing| False|
|Learning rate| 5e-4 |
|Weight decay | 0.1 |
|Warmup steps| 2000 |
|Schedule| Cosine |
The training was executed on 16 x A100 (40GB) GPUs. This setting amounts to roughly 29 billion tokens.
## Performance
We evaluated the model on OpenAI's [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark which consists of programming challenges:
| Metric | Value |
|-------|-----|
|pass@1 | 3.80% |
|pass@10 | 6.57% |
|pass@100 | 12.78% |
The [pass@k metric](https://huggingface.co/metrics/code_eval) gives the probability that at least one out of k generations passes the tests.
## Resources
- Dataset: [full](https://huggingface.co/datasets/lvwerra/codeparrot-clean), [train](https://huggingface.co/datasets/lvwerra/codeparrot-clean-train), [valid](https://huggingface.co/datasets/lvwerra/codeparrot-clean-valid)
- Code: [repository](https://github.com/huggingface/transformers/tree/master/examples/research_projects/codeparrot)
- Spaces: [generation](), [highlighting]() |
vinaykudari/t5-ft-billsum | vinaykudari | 2022-03-18T23:11:57Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-18T22:02:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
model-index:
- name: t5-ft-billsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-ft-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2752
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 99 | 2.6250 |
| No log | 2.0 | 198 | 2.4587 |
| No log | 3.0 | 297 | 2.3865 |
| No log | 4.0 | 396 | 2.3431 |
| No log | 5.0 | 495 | 2.3226 |
| 2.7775 | 6.0 | 594 | 2.3019 |
| 2.7775 | 7.0 | 693 | 2.2882 |
| 2.7775 | 8.0 | 792 | 2.2802 |
| 2.7775 | 9.0 | 891 | 2.2764 |
| 2.7775 | 10.0 | 990 | 2.2752 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
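## How to use
A minimal summarization sketch; the `transformers` summarization pipeline should apply T5's `summarize:` task prefix inherited from the t5-small config, and the input text is illustrative:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="vinaykudari/t5-ft-billsum")
bill = "This Act may be cited as the Example Act. The bill amends existing law to ..."  # illustrative
print(summarizer(bill, max_length=80, min_length=20))
```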
|
eliasws/openApiT5-distilled-description-v1 | eliasws | 2022-03-18T18:47:47Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-18T18:44:45Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# openApiT5-distilled-description-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('eliasws/openApiT5-distilled-description-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('eliasws/openApiT5-distilled-description-v1')
model = AutoModel.from_pretrained('eliasws/openApiT5-distilled-description-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eliasws/openApiT5-distilled-description-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3681 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3681,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
saattrupdan/xlmr-base-texas-squad-fr | saattrupdan | 2022-03-18T16:56:07Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-03-02T23:29:05Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlmr-base-texas-squad-fr
results: []
widget:
- text: "Comment obtenir la coagulation?"
context: "La coagulation peut être obtenue soit par action d'une enzyme, la présure, soit par fermentation provoquée par des bactéries lactiques (le lactose est alors transformé en acide lactique), soit très fréquemment par combinaison des deux méthodes précédentes, soit par chauffage associé à une acidification directe (vinaigre…). On procède ensuite à l'égouttage. On obtient alors le caillé et le lactosérum. Le lactosérum peut aussi être utilisé directement : fromage de lactosérum comme le sérac, ou par réincorporation de ses composants."
---
# TExAS-SQuAD-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-fr dataset.
It achieves the following results on the evaluation set:
- Exact match: xx.xx%
- F1-score: xx.xx%
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1478 | 0.23 | 1000 | 1.8543 |
| 1.9827 | 0.46 | 2000 | 1.7643 |
| 1.8427 | 0.69 | 3000 | 1.6789 |
| 1.8372 | 0.92 | 4000 | 1.6137 |
| 1.7318 | 1.15 | 5000 | 1.6093 |
| 1.6603 | 1.38 | 6000 | 1.7157 |
| 1.6334 | 1.61 | 7000 | 1.6302 |
| 1.6716 | 1.84 | 8000 | 1.5845 |
| 1.5192 | 2.06 | 9000 | 1.6690 |
| 1.5174 | 2.29 | 10000 | 1.6669 |
| 1.4611 | 2.52 | 11000 | 1.6301 |
| 1.4648 | 2.75 | 12000 | 1.6009 |
| 1.5052 | 2.98 | 13000 | 1.6133 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
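## How to use
A minimal question-answering sketch, reusing the widget example from this card:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="saattrupdan/xlmr-base-texas-squad-fr")
result = qa(
    question="Comment obtenir la coagulation?",
    context=(
        "La coagulation peut être obtenue soit par action d'une enzyme, la présure, "
        "soit par fermentation provoquée par des bactéries lactiques."
    ),
)
print(result["answer"])
```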
|
mfleck/wav2vec2-large-xls-r-300m-german-with-lm | mfleck | 2022-03-18T16:48:09Z | 5 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-10T16:46:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-german-with-lm
results: []
---
# wav2vec2-large-xls-r-300m-german-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the German set of the Common Voice dataset.
It achieves a word error rate (WER) of 8.8% on the evaluation set.
## Model description
German wav2vec2-xls-r-300m model trained on the full training set of the Common Voice dataset, combined with an n-gram language model.
Full code available in [my Github repository](https://github.com/MichaelFleck92/asr-wav2vec)
## Citation
Feel free to cite this work by
```
@misc{mfleck/wav2vec2-large-xls-r-300m-german-with-lm,
title={XLS-R-300 Wav2Vec2 German with language model},
author={Fleck, Michael},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mfleck/wav2vec2-large-xls-r-300m-german-with-lm}},
year={2022}
}
```
## Intended uses & limitations
Inference Usage
```python
from transformers import pipeline
pipe = pipeline(model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
output = pipe("/path/to/file.wav",chunk_length_s=5, stride_length_s=1)
print(output["text"])
```
## Training and evaluation data
Script used for training (takes about 80 hours on a single A100 40GB)
```python
import random
import re
import json
from typing import Any, Dict, List, Optional, Union
import pandas as pd
import numpy as np
import torch
# import soundfile
from datasets import load_dataset, load_metric, Audio
from dataclasses import dataclass, field
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor, TrainingArguments, Trainer, Wav2Vec2ForCTC
'''
Most parts of this script follow the tutorial: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
'''
common_voice_train = load_dataset("common_voice", "de", split="train+validation")
# Use train dataset with less training data
#common_voice_train = load_dataset("common_voice", "de", split="train[:3%]")
common_voice_test = load_dataset("common_voice", "de", split="test")
# Remove unused columns
common_voice_train = common_voice_train.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
common_voice_test = common_voice_test.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "segment", "up_votes"])
# Remove samples containing characters that do not exist in German
print(len(common_voice_train))
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
common_voice_train = common_voice_train.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
common_voice_test = common_voice_test.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
print(len(common_voice_train))
# Remove special chars from transcripts
chars_to_remove_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
def remove_special_characters(batch):
batch["sentence"] = re.sub(chars_to_remove_regex, '', batch["sentence"]).lower()
return batch
common_voice_train = common_voice_train.map(remove_special_characters, num_proc=10)
common_voice_test = common_voice_test.map(remove_special_characters, num_proc=10)
# Show some random transcripts to verify that preprocessing worked as expected
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
print(str(dataset[picks]))
show_random_elements(common_voice_train.remove_columns(["path","audio"]))
# Extract all chars which exist in the datasets and add wav2vec2 tokens
def extract_all_chars(batch):
all_text = " ".join(batch["sentence"])
vocab = list(set(all_text))
return {"vocab": [vocab], "all_text": [all_text]}
vocab_train = common_voice_train.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_train.column_names)
vocab_test = common_voice_test.map(extract_all_chars, batched=True, batch_size=-1, keep_in_memory=True, remove_columns=common_voice_test.column_names)
vocab_list = list(set(vocab_train["vocab"][0]) | set(vocab_test["vocab"][0]))
vocab_dict = {v: k for k, v in enumerate(sorted(vocab_list))}
vocab_dict
vocab_dict["|"] = vocab_dict[" "]
del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
len(vocab_dict)
with open('vocab.json', 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
# Create tokenizer and repo at Huggingface
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
repo_name = "wav2vec2-large-xls-r-300m-german-with-lm"
tokenizer.push_to_hub(repo_name)
print("pushed to hub")
# Create feature extractor and processor
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
# Cast audio column
common_voice_train = common_voice_train.cast_column("audio", Audio(sampling_rate=16_000))
common_voice_test = common_voice_test.cast_column("audio", Audio(sampling_rate=16_000))
# Convert audio signal to array and 16khz sampling rate
def prepare_dataset(batch):
audio = batch["audio"]
# batched output is "un-batched"
batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
# Save an audio file to check if it gets loaded correctly
# soundfile.write("/home/debian/trainnew/test.wav",batch["input_values"],audio["sampling_rate"])
batch["input_length"] = len(batch["input_values"])
with processor.as_target_processor():
batch["labels"] = processor(batch["sentence"]).input_ids
return batch
common_voice_train = common_voice_train.map(prepare_dataset, remove_columns=common_voice_train.column_names)
common_voice_test = common_voice_test.map(prepare_dataset, remove_columns=common_voice_test.column_names)
print("dataset prepared")
@dataclass
class DataCollatorCTCWithPadding:
"""
Data collator that will dynamically pad the inputs received.
Args:
processor (:class:`~transformers.Wav2Vec2Processor`)
The processor used for processing the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence is provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
"""
processor: Wav2Vec2Processor
padding: Union[bool, str] = True
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need
# different padding methods
input_features = [{"input_values": feature["input_values"]} for feature in features]
label_features = [{"input_ids": feature["labels"]} for feature in features]
batch = self.processor.pad(
input_features,
padding=self.padding,
return_tensors="pt",
)
with self.processor.as_target_processor():
labels_batch = self.processor.pad(
label_features,
padding=self.padding,
return_tensors="pt",
)
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True)
# Use word error rate as metric
wer_metric = load_metric("wer")
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_str = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
wer = wer_metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
# Model and training parameters
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-xls-r-300m",
attention_dropout=0.094,
hidden_dropout=0.01,
feat_proj_dropout=0.04,
mask_time_prob=0.08,
layerdrop=0.04,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir=repo_name,
group_by_length=True,
per_device_train_batch_size=32,
gradient_accumulation_steps=2,
evaluation_strategy="steps",
num_train_epochs=20,
gradient_checkpointing=True,
fp16=True,
save_steps=5000,
eval_steps=5000,
logging_steps=100,
learning_rate=1e-4,
warmup_steps=500,
save_total_limit=3,
push_to_hub=True,
)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=common_voice_train,
eval_dataset=common_voice_test,
tokenizer=processor.feature_extractor,
)
# Start fine tuning
trainer.train()
# When done push final model to Huggingface hub
trainer.push_to_hub()
```
The model achieves a word error rate of 8.8%, measured with the following evaluation script:
```python
import argparse
import re
from typing import Dict
import torch
from datasets import Audio, Dataset, load_dataset, load_metric
from transformers import AutoFeatureExtractor, pipeline
# load dataset
dataset = load_dataset("common_voice", "de", split="test")
# use only 1% of data
#dataset = load_dataset("common_voice", "de", split="test[:1%]")
# load processor
feature_extractor = AutoFeatureExtractor.from_pretrained("mfleck/wav2vec2-large-xls-r-300m-german-with-lm")
sampling_rate = feature_extractor.sampling_rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate))
# load eval pipeline
# device=0 means GPU, use device=-1 for CPU
asr = pipeline("automatic-speech-recognition", model="mfleck/wav2vec2-large-xls-r-300m-german-with-lm", device=0)
# Remove samples containing characters that do not exist in German
regex = "[^A-Za-zäöüÄÖÜß,?.! ]+"
dataset = dataset.filter(lambda example: bool(re.search(regex, example['sentence']))==False)
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\']'
# map function to decode audio
def map_to_pred(batch):
prediction = asr(batch["audio"]["array"], chunk_length_s=5, stride_length_s=1)
# Print automatic generated transcript
#print(str(prediction))
batch["prediction"] = prediction["text"]
text = batch["sentence"]
batch["target"] = re.sub(chars_to_ignore_regex, "", text.lower()) + " "
return batch
# run inference on all examples
result = dataset.map(map_to_pred, remove_columns=dataset.column_names)
# load metric
wer = load_metric("wer")
cer = load_metric("cer")
# compute metrics
wer_result = wer.compute(references=result["target"], predictions=result["prediction"])
cer_result = cer.compute(references=result["target"], predictions=result["prediction"])
# print results
result_str = f"WER: {wer_result}\n" f"CER: {cer_result}"
print(result_str)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1396 | 1.42 | 5000 | 0.1449 | 0.1479 |
| 0.1169 | 2.83 | 10000 | 0.1285 | 0.1286 |
| 0.0938 | 4.25 | 15000 | 0.1277 | 0.1230 |
| 0.0924 | 5.67 | 20000 | 0.1305 | 0.1191 |
| 0.0765 | 7.09 | 25000 | 0.1256 | 0.1158 |
| 0.0749 | 8.5 | 30000 | 0.1186 | 0.1092 |
| 0.066 | 9.92 | 35000 | 0.1173 | 0.1068 |
| 0.0581 | 11.34 | 40000 | 0.1225 | 0.1030 |
| 0.0582 | 12.75 | 45000 | 0.1153 | 0.0999 |
| 0.0507 | 14.17 | 50000 | 0.1182 | 0.0971 |
| 0.0491 | 15.59 | 55000 | 0.1136 | 0.0939 |
| 0.045 | 17.01 | 60000 | 0.1140 | 0.0914 |
| 0.0395 | 18.42 | 65000 | 0.1160 | 0.0902 |
| 0.037 | 19.84 | 70000 | 0.1148 | 0.0882 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ScandinavianMrT/distilbert-IMDB-NEG | ScandinavianMrT | 2022-03-18T16:43:11Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-18T16:15:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-IMDB-NEG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-IMDB-NEG
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1871
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1865 | 1.0 | 2000 | 0.1871 | 0.9346 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
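## How to use
A minimal inference sketch without the pipeline API; the review text is illustrative, and the card does not document the index-to-label mapping:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ScandinavianMrT/distilbert-IMDB-NEG")
model = AutoModelForSequenceClassification.from_pretrained("ScandinavianMrT/distilbert-IMDB-NEG")

inputs = tokenizer("This movie was a complete waste of time.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; the index-to-label mapping is undocumented
```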
|
facebook/wav2vec2-large-xlsr-53 | facebook | 2022-03-18T16:11:44Z | 557,799 | 121 | transformers | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"pretraining",
"speech",
"multilingual",
"dataset:common_voice",
"arxiv:2006.13979",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z | ---
language: multilingual
datasets:
- common_voice
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-XLSR-53
[Facebook's XLSR-Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information.
[Paper](https://arxiv.org/abs/2006.13979)
Authors: Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli
**Abstract**
This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations and jointly learns a quantization of the latents shared across languages. The resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong individual models. Analysis shows that the latent discrete speech representations are shared across languages with increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing XLSR-53, a large model pretrained in 53 languages.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
See [this notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb) for more information on how to fine-tune the model.
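As a quick sanity check, the base encoder can also be loaded directly to extract hidden states from 16kHz audio. The sketch below uses a random waveform as a stand-in for real speech and assumes the checkpoint ships a feature-extractor config; for ASR you would fine-tune a `Wav2Vec2ForCTC` head as in the notebook above.
```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")

waveform = torch.randn(16000).numpy()  # stand-in for one second of 16kHz audio
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```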

|
omerm/test_model | omerm | 2022-03-18T15:15:04Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-03-15T15:35:51Z | ---
license: apache-2.0
---
|
TestSB3/ppo-CartPole-v1 | TestSB3 | 2022-03-18T13:41:36Z | 0 | 0 | null | [
"gym",
"reinforcement-learning",
"region:us"
] | reinforcement-learning | 2022-03-18T10:20:52Z | ---
tags:
- gym
- reinforcement-learning
---
# TestSB3/ppo-CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1 using the [rl-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo) library.
## Usage (with RL-baselines3-zoo)
Just clone the [rl-baselines3-zoo](https://github.com/DLR-RM/rl-baselines3-zoo) library.
Then run:
```bash
python enjoy.py --algo ppo --env CartPole-v1
```
## Evaluation Results
Mean Reward: 500.0 +/- 0.0 (300 test episodes)
## Citing the Project
To cite this repository in publications:
```
@misc{rl-zoo3,
author = {Raffin, Antonin},
title = {RL Baselines3 Zoo},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/DLR-RM/rl-baselines3-zoo}},
}
``` |
willcai/wav2vec2_common_voice_accents_4 | willcai | 2022-03-18T11:11:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-18T01:46:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_4
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.615 | 1.28 | 400 | 0.8202 |
| 0.3778 | 2.56 | 800 | 0.1587 |
| 0.2229 | 3.85 | 1200 | 0.1027 |
| 0.1799 | 5.13 | 1600 | 0.0879 |
| 0.1617 | 6.41 | 2000 | 0.0772 |
| 0.1474 | 7.69 | 2400 | 0.0625 |
| 0.134 | 8.97 | 2800 | 0.0498 |
| 0.1213 | 10.26 | 3200 | 0.0429 |
| 0.1186 | 11.54 | 3600 | 0.0434 |
| 0.1118 | 12.82 | 4000 | 0.0312 |
| 0.1026 | 14.1 | 4400 | 0.0365 |
| 0.0951 | 15.38 | 4800 | 0.0321 |
| 0.0902 | 16.67 | 5200 | 0.0262 |
| 0.0843 | 17.95 | 5600 | 0.0208 |
| 0.0744 | 19.23 | 6000 | 0.0140 |
| 0.0718 | 20.51 | 6400 | 0.0204 |
| 0.0694 | 21.79 | 6800 | 0.0133 |
| 0.0636 | 23.08 | 7200 | 0.0104 |
| 0.0609 | 24.36 | 7600 | 0.0084 |
| 0.0559 | 25.64 | 8000 | 0.0050 |
| 0.0527 | 26.92 | 8400 | 0.0089 |
| 0.0495 | 28.21 | 8800 | 0.0058 |
| 0.0471 | 29.49 | 9200 | 0.0047 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
reichenbach/switch-transformer-classification | reichenbach | 2022-03-18T10:53:01Z | 2 | 2 | tf-keras | [
"tf-keras",
"generic",
"switch-transformers",
"mixture-of-experts",
"arxiv:2101.03961",
"region:us"
] | null | 2022-03-06T09:20:19Z | ---
tags:
- generic
- switch-transformers
- mixture-of-experts
---
## TensorFlow Keras Implementation of Switch Transformers for Text Classification
This repo contains the model from [Switch Transformers for Text Classification](https://keras.io/examples/nlp/text_classification_with_switch_transformer/).
Credits: [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/) - Original Author
HF Contribution: [Rishav Chandra Varma](https://huggingface.co/reichenbach)
## Background Information
### Introduction
In this example, we demonstrate an implementation of the [Switch Transformer](https://arxiv.org/abs/2101.03961) model for text classification. For this example, we use the IMDB dataset available in the Keras datasets module.
### What is special about the Switch Transformer?
The Switch Transformer replaces the feed forward network (FFN) layer in the standard Transformer with a Mixture of Experts (MoE) routing layer, where each expert operates independently on the tokens in the sequence. This allows increasing the model size without increasing the computation needed to process each example.
Note that, for training the Switch Transformer efficiently, data and model parallelism need to be applied, so that expert modules can run simultaneously, each on its own accelerator. While the implementation described in the paper uses the [TensorFlow Mesh](https://github.com/tensorflow/mesh) framework for distributed training, this example presents a simple, non-distributed implementation of the Switch Transformer model for demonstration purposes.
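For reference, a Keras model pushed with the Hub's Keras integration can typically be reloaded as sketched below; this assumes the checkpoint was saved with the standard `push_to_hub_keras` workflow.
```python
from huggingface_hub import from_pretrained_keras

# Reload the trained classifier from the Hub
model = from_pretrained_keras("reichenbach/switch-transformer-classification")
model.summary()
```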
|
sven-nm/roberta_classics_ner | sven-nm | 2022-03-18T10:14:20Z | 22 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"classics",
"citation mining",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
language:
- en
tags:
- classics
- citation mining
widget:
- text: "Homer's Iliad opens with an invocation to the muse (1. 1)."
---
### Model and entities
`roberta_classics_ner` is a domain-specific RoBERTa-based model for named entity recognition in Classical Studies. It recognises bibliographical entities, such as:
| id | label | description | Example |
| --- | ------------- | ------------------------------------------- | --------------------- |
| 0 | 'O' | Out of entity | |
| 1 | 'B-AAUTHOR' | Ancient authors | *Herodotus* |
| 2 | 'I-AAUTHOR' | | |
| 3 | 'B-AWORK' | The title of an ancient work | *Symposium*, *Aeneid* |
| 4 | 'I-AWORK' | | |
| 5 | 'B-REFAUWORK' | A structured reference to an ancient work | *Homer, Il.* |
| 6 | 'I-REFAUWORK' | | |
| 7 | 'B-REFSCOPE' | The scope of a reference | *II.1.993a30–b11* |
| 8 | 'I-REFSCOPE' | | |
| 9 | 'B-FRAGREF' | A reference to fragmentary texts or scholia | *Frag. 19. West* |
| 10 | 'I-FRAGREF' | | |
### Example
```
B-AAUTHOR B-AWORK B-REFSCOPE
Homer 's Iliad opens with an invocation to the muse ( 1. 1).
```
### Dataset
`roberta_classics_ner` was fine-tuned and evaluated on `EpiBau`, a dataset which has not been released publicly yet. It is composed of four volumes of [Structures of Epic Poetry](https://www.epische-bauformen.uni-rostock.de/), a compendium on the narrative patterns and structural elements in ancient epic.
Entity counts of the `Epibau` dataset are the following:
| | train-set | dev-set | test-set |
| -------------- | --------- | ------- | -------- |
| word count | 712462 | 125729 | 122324 |
| AAUTHOR | 4436 | 1368 | 1511 |
| AWORK | 3145 | 780 | 670 |
| REFAUWORK | 5102 | 988 | 1209 |
| REFSCOPE | 14768 | 3193 | 2847 |
| FRAGREF | 266 | 29 | 33 |
| total entities | 13822 | 1415 | 2419 |
### Results
The model was developed in the context of experiments reported [here](http://infoscience.epfl.ch/record/291236?&ln=en). Trained and tested on `EpiBau` with an 85-15 split, the model yields an overall F1 score of **.82** (micro-average). Detailed scores are displayed below. Evaluation was performed with the [CLEF-HIPE-scorer](https://github.com/impresso/CLEF-HIPE-2020-scorer) in strict mode.
| metric | AAUTHOR | AWORK | REFSCOPE | REFAUWORK |
| --------- | ------- | ----- | -------- | --------- |
| F1 | .819 | .796 | .863 | .756 |
| Precision | .842 | .818 | .860 | .755 |
| Recall | .797 | .766 | .756 | .866 |
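A minimal inference sketch with the token-classification pipeline, reusing the widget example from this card (the aggregation strategy is a suggestion, not part of the original setup):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sven-nm/roberta_classics_ner",
    aggregation_strategy="simple",
)
print(ner("Homer's Iliad opens with an invocation to the muse (1. 1)."))
```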
Questions, remarks, help or contributions? Get in touch [here](https://github.com/AjaxMultiCommentary), we'll be happy to chat!
|
cammy/bart-large-cnn-100-MDS-own | cammy | 2022-03-18T09:32:08Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-18T09:31:07Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-100-MDS-own
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-MDS-own
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5357
- Rouge1: 22.4039
- Rouge2: 4.681
- Rougel: 13.1526
- Rougelsum: 15.7986
- Gen Len: 70.3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.3375 | 25.7428 | 6.754 | 16.4131 | 19.6269 | 81.9 |
| No log | 2.0 | 50 | 3.5357 | 22.4039 | 4.681 | 13.1526 | 15.7986 | 70.3 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
aaraki/bert-base-uncased-finetuned-swag | aaraki | 2022-03-18T08:16:58Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2022-03-18T06:29:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5155
- Accuracy: 0.8002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6904 | 1.0 | 4597 | 0.5155 | 0.8002 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
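## How to use
The card omits a usage example; below is a minimal multiple-choice sketch in which the prompt and the candidate endings are invented for illustration:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("aaraki/bert-base-uncased-finetuned-swag")
model = AutoModelForMultipleChoice.from_pretrained("aaraki/bert-base-uncased-finetuned-swag")

prompt = "A woman is outside with a bucket and a dog."
endings = ["The dog runs away.", "The woman washes the dog."]

# Pair the prompt with every candidate ending, then add the num_choices dimension
inputs = tokenizer([prompt] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in inputs.items()}

with torch.no_grad():
    logits = model(**inputs).logits
print(endings[logits.argmax(dim=-1).item()])
```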
|
youzanai/bert-product-title-chinese | youzanai | 2022-03-18T06:19:06Z | 6 | 3 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | 基于有赞商品标题语料训练的bert模型。
For example model code, see https://github.com/youzanai/trexpark |
brad1141/Longformer-finetuned-norm | brad1141 | 2022-03-18T05:42:11Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"longformer",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-18T02:29:24Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Longformer-finetuned-norm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Longformer-finetuned-norm
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8127
- Precision: 0.8429
- Recall: 0.8701
- F1: 0.8562
- Accuracy: 0.8221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8008 | 1.0 | 1012 | 0.5839 | 0.8266 | 0.8637 | 0.8447 | 0.8084 |
| 0.5168 | 2.0 | 2024 | 0.5927 | 0.7940 | 0.9102 | 0.8481 | 0.8117 |
| 0.3936 | 3.0 | 3036 | 0.5651 | 0.8476 | 0.8501 | 0.8488 | 0.8143 |
| 0.2939 | 4.0 | 4048 | 0.6411 | 0.8494 | 0.8578 | 0.8536 | 0.8204 |
| 0.2165 | 5.0 | 5060 | 0.6833 | 0.8409 | 0.8822 | 0.8611 | 0.8270 |
| 0.1561 | 6.0 | 6072 | 0.7643 | 0.8404 | 0.8810 | 0.8602 | 0.8259 |
| 0.1164 | 7.0 | 7084 | 0.8127 | 0.8429 | 0.8701 | 0.8562 | 0.8221 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
beston91/gpt2-xl-ft-logits-5k | beston91 | 2022-03-18T02:54:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-17T23:54:46Z | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-vanilla-debiased-5000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-vanilla-debiased-5000
This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0371
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 27 | 6.1985 |
| No log | 1.99 | 54 | 6.4583 |
| No log | 2.99 | 81 | 6.7709 |
| No log | 3.99 | 108 | 7.0371 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
BigSalmon/InformalToFormalLincoln26 | BigSalmon | 2022-03-18T02:37:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-08T22:59:35Z | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln26")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln26")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
Text: failing to draw in the masses, the NBA has fallen into disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap solutions could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (California High-Speed Rail): built with an eye on the future, california's high-speed rail service resolves to change the face of travel.
Essay Intro (YIMBY's Need To Win): home to the most expensive housing market in the united states, san francisco is the city in which the yimby and anti-yimby hordes wage an eternal battle.
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- declining viewership facing the nba.
- does not have to be this way.
- in fact, many solutions exist.
- the four point line would surely draw in eyes.
text: failing to draw in the masses, the nba has ( fallen into / succumb to / bowed to ) disrepair. such does not have to be the case, however. in fact, a myriad of simple, relatively cheap ( solutions / interventions / enhancements ) could revive the league. the addition of the much-hyped four-point line would surely juice viewership.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
``` |
saghar/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103 | saghar | 2022-03-18T02:24:28Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"dataset:wikitext",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-17T21:19:53Z | ---
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-distilled-from-RoBERTa-Large-finetuned-wikitext103
This model is a fine-tuned version of [nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large](https://huggingface.co/nreimers/MiniLMv2-L6-H384-distilled-from-RoBERTa-Large) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.9694 | 1.0 | 3125 | 5.1757 |
| 5.2228 | 2.0 | 6250 | 4.8847 |
| 5.0653 | 3.0 | 9375 | 4.8236 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.3
|
anton-l/xtreme_s_xlsr_300m_minds14_old_splits | anton-l | 2022-03-17T22:23:22Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"automatic-speech-recognition",
"google/xtreme_s",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-03-14T18:02:05Z | ---
license: apache-2.0
tags:
- automatic-speech-recognition
- google/xtreme_s
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_minds14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2890
- F1: 0.9474
- Accuracy: 0.9470
## Model description
More information needed
## Intended uses & limitations
More information needed
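A minimal inference sketch (not from the original card; it assumes the `audio-classification` head listed in the tags and a 16 kHz mono recording at the hypothetical path `input.wav`):
```python
from transformers import pipeline

# MINDS14 is a spoken intent-classification task, so the checkpoint
# can be served through the audio-classification pipeline.
classifier = pipeline(
    "audio-classification",
    model="anton-l/xtreme_s_xlsr_300m_minds14_old_splits",
)

# Returns the top intent labels with confidence scores.
print(classifier("input.wav"))
```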
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 50.0
- mixed_precision_training: Native AMP
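Expressed as `transformers.TrainingArguments`, the list above corresponds roughly to the sketch below (an illustration, not the authors' actual launch script; the per-device batch sizes follow from dividing the totals across the 2 GPUs):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; model and dataset wiring omitted.
args = TrainingArguments(
    output_dir="xtreme_s_xlsr_minds14",
    learning_rate=3e-4,
    per_device_train_batch_size=32,  # 2 GPUs -> total train batch size 64
    per_device_eval_batch_size=8,    # 2 GPUs -> total eval batch size 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1500,
    num_train_epochs=50.0,
    fp16=True,  # "Native AMP" mixed precision
)
```
The Adam betas and epsilon in the list match the `TrainingArguments` defaults, so they need no explicit arguments.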
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 2.551 | 2.7 | 200 | 2.5855 | 0.0407 | 0.1201 |
| 1.6934 | 5.41 | 400 | 1.5072 | 0.5862 | 0.6085 |
| 0.5914 | 8.11 | 600 | 0.7274 | 0.8270 | 0.8232 |
| 0.3896 | 10.81 | 800 | 0.4402 | 0.8905 | 0.8890 |
| 0.5052 | 13.51 | 1000 | 0.4483 | 0.8837 | 0.8829 |
| 0.4806 | 16.22 | 1200 | 0.4981 | 0.8784 | 0.8787 |
| 0.2103 | 18.92 | 1400 | 0.4957 | 0.8810 | 0.8817 |
| 0.4198 | 21.62 | 1600 | 0.5161 | 0.8927 | 0.8921 |
| 0.11 | 24.32 | 1800 | 0.4456 | 0.8923 | 0.8902 |
| 0.1233 | 27.03 | 2000 | 0.3858 | 0.9016 | 0.9012 |
| 0.1827 | 29.73 | 2200 | 0.3765 | 0.9162 | 0.9159 |
| 0.1235 | 32.43 | 2400 | 0.3716 | 0.9134 | 0.9128 |
| 0.1873 | 35.14 | 2600 | 0.3080 | 0.9314 | 0.9311 |
| 0.017 | 37.84 | 2800 | 0.2629 | 0.9415 | 0.9409 |
| 0.0436 | 40.54 | 3000 | 0.3159 | 0.9397 | 0.9390 |
| 0.0455 | 43.24 | 3200 | 0.2963 | 0.9393 | 0.9390 |
| 0.046 | 45.95 | 3400 | 0.2914 | 0.9457 | 0.9451 |
| 0.0042 | 48.65 | 3600 | 0.2890 | 0.9474 | 0.9470 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
huggingtweets/charlieykim | huggingtweets | 2022-03-17T20:43:26Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/780199274046976001/ewIzqDV5_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Charlie Kim</div>
<div style="text-align: center; font-size: 14px;">@charlieykim</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Charlie Kim.
| Data | Charlie Kim |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 234 |
| Short tweets | 29 |
| Tweets kept | 2985 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ql0zb69/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @charlieykim's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/arss897f) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/arss897f/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/charlieykim')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cammy/led-large-16384-arxiv-100-MDS | cammy | 2022-03-17T19:09:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-03-17T18:49:41Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-large-16384-arxiv-100-MDS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-large-16384-arxiv-100-MDS
This model is a fine-tuned version of [allenai/led-large-16384-arxiv](https://huggingface.co/allenai/led-large-16384-arxiv) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3897
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 512.0
## Model description
More information needed
## Intended uses & limitations
More information needed
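A minimal usage sketch (not part of the original card; it assumes the standard `summarization` pipeline and a hypothetical `long_document` string holding the concatenated source documents):
```python
from transformers import pipeline

# LED accepts long inputs (up to 16384 tokens), which suits multi-document summarization.
summarizer = pipeline(
    "summarization",
    model="cammy/led-large-16384-arxiv-100-MDS",
)

long_document = "..."  # placeholder: concatenated documents to summarize
print(summarizer(long_document, max_length=256, min_length=64))
```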
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 25 | 3.1144 | 13.2756 | 2.6204 | 9.2686 | 10.2289 | 184.0 |
| No log | 2.0 | 50 | 3.3897 | 0.0 | 0.0 | 0.0 | 0.0 | 512.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|