modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
Wheatley961/Raw_3_no_1_Test_3_new.model | Wheatley961 | 2022-11-21T16:23:53Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-21T16:23:27Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
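Because the model returns dense vectors, the embeddings can be compared directly for semantic search or clustering. A minimal sketch, assuming a reasonably recent sentence-transformers release that provides `util.cos_sim`; the extra corpus sentence is made up for illustration:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # placeholder name, as elsewhere in this card

# Encode a query and a small corpus, then score them by cosine similarity
query_embedding = model.encode("This is an example sentence", convert_to_tensor=True)
corpus_embeddings = model.encode(
    ["Each sentence is converted", "A completely unrelated sentence"],
    convert_to_tensor=True,
)
print(util.cos_sim(query_embedding, corpus_embeddings))  # tensor of shape (1, 2)
```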
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 24 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 6.474612215184842e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 24,
"warmup_steps": 3,
"weight_decay": 0.01
}
```
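For reference, a rough sketch of how these parameters map back onto `SentenceTransformer.fit()`. The training pairs below are placeholders, since the actual data behind this card is not published:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('{MODEL_NAME}')  # placeholder name, as elsewhere in this card

# Hypothetical (text pair, similarity score) examples standing in for the real training data
train_examples = [InputExample(texts=["sentence a", "sentence b"], label=0.8)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=2,
    scheduler="WarmupLinear",
    warmup_steps=3,
    optimizer_params={"lr": 6.474612215184842e-05},
    weight_decay=0.01,
)
```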
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
alexziweiwang/base-on-torgo0003 | alexziweiwang | 2022-11-21T16:07:25Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-21T11:45:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: base-on-torgo0003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-on-torgo0003
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6579
- Wer: 0.7547
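The card does not include usage code; a minimal inference sketch, assuming a standard CTC-style wav2vec2 checkpoint with a bundled processor (the audio path is a placeholder and should point to 16 kHz mono audio):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="alexziweiwang/base-on-torgo0003")
# "sample.wav" is a placeholder path to a local audio file
print(asr("sample.wav")["text"])
```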
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
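Expressed as `TrainingArguments` for the HF `Trainer`, these settings would look roughly as follows (a sketch only; the original training script is not part of this card, and `output_dir` is an assumption):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="base-on-torgo0003",   # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
)
```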
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 28.1611 | 0.46 | 500 | 3.4550 | 1.0163 |
| 3.2238 | 0.92 | 1000 | 2.8781 | 1.0411 |
| 2.8617 | 1.39 | 1500 | 2.9896 | 1.0028 |
| 2.5841 | 1.85 | 2000 | 2.3744 | 1.2896 |
| 2.2029 | 2.31 | 2500 | 1.8598 | 1.2722 |
| 1.9976 | 2.77 | 3000 | 1.6505 | 1.2513 |
| 1.7817 | 3.23 | 3500 | 1.5291 | 1.2294 |
| 1.6484 | 3.69 | 4000 | 1.4635 | 1.2106 |
| 1.56 | 4.16 | 4500 | 1.4251 | 1.1989 |
| 1.417 | 4.62 | 5000 | 1.4040 | 1.1904 |
| 1.2884 | 5.08 | 5500 | 1.2734 | 1.1568 |
| 1.2788 | 5.54 | 6000 | 1.2242 | 1.1384 |
| 1.2159 | 6.0 | 6500 | 1.0561 | 1.1349 |
| 1.1125 | 6.46 | 7000 | 1.1001 | 1.1175 |
| 1.1495 | 6.93 | 7500 | 1.0409 | 1.1112 |
| 1.0222 | 7.39 | 8000 | 1.0525 | 1.0952 |
| 1.0104 | 7.85 | 8500 | 1.0184 | 1.1048 |
| 0.9956 | 8.31 | 9000 | 1.0438 | 1.1196 |
| 0.8747 | 8.77 | 9500 | 1.0736 | 1.1005 |
| 0.8437 | 9.23 | 10000 | 1.0041 | 1.0768 |
| 0.861 | 9.7 | 10500 | 0.9407 | 1.0496 |
| 0.8238 | 10.16 | 11000 | 0.9237 | 1.0697 |
| 0.7806 | 10.62 | 11500 | 0.8706 | 1.0343 |
| 0.7475 | 11.08 | 12000 | 0.9576 | 1.0407 |
| 0.6963 | 11.54 | 12500 | 0.9195 | 1.0159 |
| 0.7624 | 12.0 | 13000 | 0.8102 | 1.0060 |
| 0.6311 | 12.47 | 13500 | 0.8208 | 0.9897 |
| 0.6649 | 12.93 | 14000 | 0.7699 | 0.9968 |
| 0.6025 | 13.39 | 14500 | 0.7834 | 0.9547 |
| 0.5691 | 13.85 | 15000 | 0.7414 | 0.9632 |
| 0.532 | 14.31 | 15500 | 0.7056 | 0.9473 |
| 0.5572 | 14.77 | 16000 | 0.8136 | 0.9929 |
| 0.5455 | 15.24 | 16500 | 0.7355 | 0.9264 |
| 0.5369 | 15.7 | 17000 | 0.7531 | 0.9352 |
| 0.4771 | 16.16 | 17500 | 0.7527 | 0.9228 |
| 0.4778 | 16.62 | 18000 | 0.7312 | 0.9218 |
| 0.4384 | 17.08 | 18500 | 0.6774 | 0.8913 |
| 0.4619 | 17.54 | 19000 | 0.6888 | 0.8896 |
| 0.4341 | 18.01 | 19500 | 0.7068 | 0.9030 |
| 0.4164 | 18.47 | 20000 | 0.6484 | 0.8754 |
| 0.3883 | 18.93 | 20500 | 0.6388 | 0.8676 |
| 0.4135 | 19.39 | 21000 | 0.6732 | 0.8683 |
| 0.4121 | 19.85 | 21500 | 0.6354 | 0.8591 |
| 0.3694 | 20.31 | 22000 | 0.6751 | 0.8581 |
| 0.367 | 20.78 | 22500 | 0.6487 | 0.8411 |
| 0.3798 | 21.24 | 23000 | 0.5955 | 0.8312 |
| 0.3249 | 21.7 | 23500 | 0.6209 | 0.8230 |
| 0.3182 | 22.16 | 24000 | 0.7341 | 0.8212 |
| 0.3196 | 22.62 | 24500 | 0.6533 | 0.8106 |
| 0.297 | 23.08 | 25000 | 0.7163 | 0.8177 |
| 0.3021 | 23.55 | 25500 | 0.7209 | 0.8149 |
| 0.3248 | 24.01 | 26000 | 0.6268 | 0.8018 |
| 0.3013 | 24.47 | 26500 | 0.7014 | 0.7915 |
| 0.2986 | 24.93 | 27000 | 0.7306 | 0.8028 |
| 0.2913 | 25.39 | 27500 | 0.6866 | 0.7912 |
| 0.2706 | 25.85 | 28000 | 0.6860 | 0.7851 |
| 0.2572 | 26.32 | 28500 | 0.6478 | 0.7752 |
| 0.2794 | 26.78 | 29000 | 0.6308 | 0.7703 |
| 0.2796 | 27.24 | 29500 | 0.6302 | 0.7653 |
| 0.2604 | 27.7 | 30000 | 0.6638 | 0.7621 |
| 0.2367 | 28.16 | 30500 | 0.6492 | 0.7593 |
| 0.2383 | 28.62 | 31000 | 0.6560 | 0.7614 |
| 0.2495 | 29.09 | 31500 | 0.6577 | 0.7593 |
| 0.2513 | 29.55 | 32000 | 0.6579 | 0.7547 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
christofid/dapscibert | christofid | 2022-11-21T16:05:11Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-21T16:02:30Z | ---
license: mit
---
### dapSciBERT
DapSciBERT is a BERT-like model trained with the domain-adaptive pretraining method ([Gururangan et al.](https://aclanthology.org/2020.acl-main.740/)) for the patent domain. allenai/scibert_scivocab_uncased is used as the base model for training. The training dataset consists of a corpus of 10,000,000 patent abstracts filed between 1998 and 2020 with the US and European patent offices as well as the World Intellectual Property Organization.
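The card does not include usage code; a minimal fill-mask sketch (the example sentence is made up):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="christofid/dapscibert")
# [MASK] is the mask token of the underlying BERT tokenizer
print(fill_mask("The invention relates to a [MASK] for measuring temperature."))
```
|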
GDJ1978/psychedelicdoodles | GDJ1978 | 2022-11-21T14:15:39Z | 0 | 0 | null | [
"region:us"
] | null | 2022-11-21T13:56:15Z | Just an experimental embedding of a doodle that was put through img2img.
The prompt trigger is "psychedelic", or "psy".
Trained for 1000 steps on 3 images with a 0.05 training rate.
|
Harrier/a2c-AntBulletEnv-v0 | Harrier | 2022-11-21T14:05:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-21T14:03:57Z | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1596.00 +/- 357.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
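Until the TODO above is filled in, a minimal loading sketch with `huggingface_sb3` (the checkpoint filename is an assumption following the usual `<algo>-<env>.zip` convention, not something confirmed by this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="Harrier/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```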
|
OrelStealth/This-is-The-Police | OrelStealth | 2022-11-21T13:59:03Z | 0 | 0 | null | [
"region:us"
] | null | 2022-11-21T13:30:57Z | # Prompt
Add 'ttp style' to your prompt to activate the "This is The Police" style
# Samples
<img src="https://i.imgur.com/CfszL4y.png"/>
<img src="https://i.imgur.com/KaSPTtQ.png"/>
<img src="https://i.imgur.com/nO19Oog.png"/>
<img src="https://i.imgur.com/IRlwhEC.png"/>
<img src="https://i.imgur.com/vmfh4AR.png"/>
<img src="https://i.imgur.com/kETfqDj.png"/>
This is a very early version; the model does not understand some things yet. |
taozexi/distilgpt2-finetuned-wikitext2 | taozexi | 2022-11-21T13:33:11Z | 59 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-21T12:33:21Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: taozexi/distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# taozexi/distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8582
- Validation Loss: 3.6762
- Epoch: 0
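The card does not include usage code; a minimal generation sketch using the TF weights (the prompt is illustrative):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("taozexi/distilgpt2-finetuned-wikitext2")
model = TFAutoModelForCausalLM.from_pretrained("taozexi/distilgpt2-finetuned-wikitext2")

inputs = tokenizer("The history of the valley", return_tensors="tf")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```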
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.8582 | 3.6762 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Batool/en_pipeline | Batool | 2022-11-21T13:04:54Z | 1 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | 2022-09-25T12:53:09Z | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 1.0
- name: NER Recall
type: recall
value: 1.0
- name: NER F Score
type: f_score
value: 1.0
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `CAUSE`, `HIGH_BILL`, `INSTALL_METER`, `ISSUE`, `METER_CHECK`, `NEW_SERVICE`, `SITE_CHECK` |
</details>
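Usage sketch, assuming the pipeline has been installed as a Python package from this repo (e.g. via the wheel published by `spacy-huggingface-hub`); the example sentence is made up:
```python
import spacy

nlp = spacy.load("en_pipeline")  # requires the en_pipeline package to be installed
doc = nlp("The customer reported a high bill and asked for a meter check.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```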
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 100.00 |
| `ENTS_P` | 100.00 |
| `ENTS_R` | 100.00 |
| `TRANSFORMER_LOSS` | 0.02 |
| `NER_LOSS` | 0.01 | |
fanzru/t5-small-finetuned-xsum-introduction | fanzru | 2022-11-21T12:45:51Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-21T11:56:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum-introduction
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.1828
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-introduction
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4784
- Rouge1: 28.1828
- Rouge2: 7.6948
- Rougel: 22.1413
- Rougelsum: 22.1467
- Gen Len: 18.8272
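The card does not include usage code; a minimal summarization sketch (the input text is made up):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="fanzru/t5-small-finetuned-xsum-introduction")
article = (
    "The local council has approved plans for a new cycle path along the river, "
    "with construction expected to begin next spring and finish within a year."
)
print(summarizer(article, max_length=30, min_length=5, do_sample=False))
```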
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7155 | 1.0 | 12753 | 2.4784 | 28.1828 | 7.6948 | 22.1413 | 22.1467 | 18.8272 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.11.0a0+b6df043
- Datasets 2.6.1
- Tokenizers 0.10.3
|
sunidhishetty/distilbert-base-uncased-finetuned-emotion | sunidhishetty | 2022-11-21T12:43:37Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-21T11:45:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.923935334776563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2162
- Accuracy: 0.924
- F1: 0.9239
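The card does not include usage code; a minimal classification sketch (the input text is made up):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sunidhishetty/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))
```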
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8203 | 1.0 | 250 | 0.3095 | 0.905 | 0.9019 |
| 0.2468 | 2.0 | 500 | 0.2162 | 0.924 | 0.9239 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Wheatley961/Raw_2_no_1_Test_3_new.model | Wheatley961 | 2022-11-21T11:48:25Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-21T11:47:59Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1000 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 6.468596158458052e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1000,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
alexziweiwang/exp12-reducedTorgoOnly-predComparison | alexziweiwang | 2022-11-21T11:33:23Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-21T09:29:41Z | ---
tags:
- generated_from_trainer
model-index:
- name: exp12-reducedTorgoOnly-predComparison
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exp12-reducedTorgoOnly-predComparison
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2016
- Wer: 1.0412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 35.9644 | 0.92 | 500 | 3.6081 | 1.0084 |
| 3.2094 | 1.83 | 1000 | 2.7519 | 1.0 |
| 2.8848 | 2.75 | 1500 | 2.7494 | 1.0014 |
| 2.7505 | 3.66 | 2000 | 2.5622 | 1.2840 |
| 2.6354 | 4.58 | 2500 | 2.3878 | 1.2819 |
| 2.3473 | 5.49 | 3000 | 2.0214 | 1.2666 |
| 2.0339 | 6.41 | 3500 | 1.8040 | 1.2394 |
| 1.7779 | 7.33 | 4000 | 1.5898 | 1.2289 |
| 1.5254 | 8.24 | 4500 | 1.7275 | 1.2080 |
| 1.4553 | 9.16 | 5000 | 1.3815 | 1.1786 |
| 1.3222 | 10.07 | 5500 | 1.3647 | 1.1835 |
| 1.1964 | 10.99 | 6000 | 1.2442 | 1.1528 |
| 1.1169 | 11.9 | 6500 | 1.5896 | 1.2059 |
| 1.0342 | 12.82 | 7000 | 1.3880 | 1.1766 |
| 0.989 | 13.74 | 7500 | 1.2111 | 1.1396 |
| 0.9109 | 14.65 | 8000 | 1.3362 | 1.1137 |
| 0.8875 | 15.57 | 8500 | 1.2594 | 1.1326 |
| 0.8053 | 16.48 | 9000 | 1.1858 | 1.1242 |
| 0.7566 | 17.4 | 9500 | 1.1987 | 1.1117 |
| 0.7284 | 18.32 | 10000 | 1.2963 | 1.0998 |
| 0.7345 | 19.23 | 10500 | 1.1835 | 1.0865 |
| 0.6424 | 20.15 | 11000 | 1.1564 | 1.0907 |
| 0.6323 | 21.06 | 11500 | 1.2123 | 1.0851 |
| 0.5871 | 21.98 | 12000 | 1.2736 | 1.0691 |
| 0.5788 | 22.89 | 12500 | 1.2094 | 1.0768 |
| 0.5368 | 23.81 | 13000 | 1.1626 | 1.0398 |
| 0.5357 | 24.73 | 13500 | 1.1960 | 1.0607 |
| 0.5407 | 25.64 | 14000 | 1.1724 | 1.0586 |
| 0.491 | 26.56 | 14500 | 1.1877 | 1.0426 |
| 0.4866 | 27.47 | 15000 | 1.2227 | 1.0593 |
| 0.5011 | 28.39 | 15500 | 1.2033 | 1.0440 |
| 0.4634 | 29.3 | 16000 | 1.2016 | 1.0412 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
omkarp/colab | omkarp | 2022-11-21T11:17:47Z | 187 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-18T09:51:34Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: colab
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# colab
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
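A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="omkarp/colab")
# "document.jpg" is a placeholder path to a local image
print(classifier("document.jpg"))
```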
## Example Images
#### bank cheque

#### driving liences

#### other source
 |
VietAI/envit5-translation | VietAI | 2022-11-21T09:59:08Z | 5,312 | 33 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"translation",
"vi",
"en",
"dataset:cc100",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | 2022-10-06T14:53:36Z | ---
language:
- vi
- en
datasets:
- cc100
tags:
- translation
widget:
- text: "vi: VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam."
license: openrail
---
# EnViT5 Translation
[](https://paperswithcode.com/sota/machine-translation-on-iwslt2015-english-1?p=mtet-multi-domain-translation-for-english)
[](https://paperswithcode.com/sota/on-phomt?p=mtet-multi-domain-translation-for-english-and)
State-of-the-art English-Vietnamese and Vietnamese-English Translation models trained on [MTet](https://research.vietai.org/mtet/), [PhoMT](https://github.com/VinAIResearch/PhoMT).
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_name = "VietAI/envit5-translation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
inputs = [
"vi: VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam.",
"vi: Theo báo cáo mới nhất của Linkedin về danh sách việc làm triển vọng với mức lương hấp dẫn năm 2020, các chức danh công việc liên quan đến AI như Chuyên gia AI (Artificial Intelligence Specialist), Kỹ sư ML (Machine Learning Engineer) đều xếp thứ hạng cao.",
"en: Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.",
"en: We're on a journey to advance and democratize artificial intelligence through open source and open science."
]
outputs = model.generate(tokenizer(inputs, return_tensors="pt", padding=True).input_ids.to(model.device), max_length=512)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['en: VietAI is a non-profit organization with the mission of nurturing artificial intelligence talents and building an international - class community of artificial intelligence experts in Vietnam.',
# 'en: According to the latest LinkedIn report on the 2020 list of attractive and promising jobs, AI - related job titles such as AI Specialist, ML Engineer and ML Engineer all rank high.',
# 'vi: Nhóm chúng tôi khao khát tạo ra những khám phá có ảnh hưởng đến mọi người, và cốt lõi trong cách tiếp cận của chúng tôi là chia sẻ nghiên cứu và công cụ để thúc đẩy sự tiến bộ trong lĩnh vực này.',
# 'vi: Chúng ta đang trên hành trình tiến bộ và dân chủ hoá trí tuệ nhân tạo thông qua mã nguồn mở và khoa học mở.']
```
## Results

## Citation
```
@misc{https://doi.org/10.48550/arxiv.2210.05610,
doi = {10.48550/ARXIV.2210.05610},
author = {Ngo, Chinh and Trinh, Trieu H. and Phan, Long and Tran, Hieu and Dang, Tai and Nguyen, Hieu and Nguyen, Minh and Luong, Minh-Thang},
title = {MTet: Multi-domain Translation for English and Vietnamese},
publisher = {arXiv},
year = {2022},
}
``` |
Livingwithmachines/erwt-year-masked-75 | Livingwithmachines | 2022-11-21T09:21:17Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"newspapers",
"library",
"historic",
"glam",
"mdma",
"en",
"arxiv:2211.10086",
"arxiv:1910.14659",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-16T14:55:33Z | ---
language: en
tags:
- newspapers
- library
- historic
- glam
- mdma
license: mit
metrics:
- pseudo-perplexity
widget:
- text: "1820 [DATE] We received a letter from [MASK] Majesty."
- text: "1850 [DATE] We received a letter from [MASK] Majesty."
- text: "[MASK] [DATE] The Franco-Prussian war is a matter of great concern."
- text: "[MASK] [DATE] The Schleswig war is a matter of great concern."
---
**MODEL CARD UNDER CONSTRUCTION, ETA END OF NOVEMBER**
<img src="https://upload.wikimedia.org/wikipedia/commons/5/5b/NCI_peas_in_pod.jpg" alt="erwt" width="200" >
# ERWT-year-masked-75
🌺ERWT\* a language model that (🤭 maybe 🤫) knows more about history than you...🌺
ERWT is a fine-tuned [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model trained on historical newspapers from the [Heritage Made Digital collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training).
We trained a model based on a combination of text and **temporal metadata** (i.e. year information).
ERWT performs [**time-sensitive masked language modelling**](#historical-language-change-herhis-majesty-%F0%9F%91%91) or [**date prediction**](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB).
This model is served by [Kaspar von Beelen](https://huggingface.co/Kaspar) and [Daniel van Strien](https://huggingface.co/davanstrien), *"Improving AI, one pea at a time"*.
If these models happen to be useful, please cite our working paper.
```
@misc{https://doi.org/10.48550/arxiv.2211.10086,
doi = {10.48550/ARXIV.2211.10086},
url = {https://arxiv.org/abs/2211.10086},
author = {Beelen, Kaspar and van Strien, Daniel},
keywords = {Computation and Language (cs.CL), Digital Libraries (cs.DL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Metadata Might Make Language Models Better},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}}
```
\*ERWT is Dutch for PEA.
# Overview
- [Introduction: Repent Now 😇](#introductory-note-repent-now-%F0%9F%98%87)
- [Background: MDMA to the rescue 🙂](#background-mdma-to-the-rescue-%F0%9F%99%82)
- [Intended Use: LMs as History Machines 🚂](#intended-use-lms-as-history-machines)
- [Historical Language Change: Her/His Majesty? 👑](#historical-language-change-herhis-majesty-%F0%9F%91%91)
- [Date Prediction: Pub Quiz with LMs 🍻](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB)
- [Limitations: Not all is well 😮](#limitations-not-all-is-well-%F0%9F%98%AE)
- [Training Data](#training-data)
- [Training Routine](#training-routine)
- [Data Description](#data-description)
- [Evaluation: 🤓 In case you care to count 🤓](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)
## Introductory Note: Repent Now. 😇
The ERWT models are trained for **experimental purposes**.
Please consult the [**limitations**](#limitations-not-all-is-well-%F0%9F%98%AE) section before using the models. (Seriously, read this section, **we don't repent in public just for fun**.)
If you can't get enough of these neural peas and crave some more, you can consult our working paper ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) for more background information and nerdy evaluation stuff (work in progress, handle with care and kindness).
## Background: MDMA to the rescue. 🙂
ERWT was created using a **M**eta**D**ata **M**asking **A**pproach (or **MDMA** 💊), a scenario in which we train a Masked Language Model (MLM) on text and metadata simultaneously. Our intuition was that incorporating metadata (information that *describes* a text but is not part of the content) may make language models "better", or at least make them more **sensitive** to historical, political and geographical aspects of language use. We mainly use temporal, political and geographical metadata.
ERWT is a [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model, fine-tuned on a random subsample taken from the [Heritage Made Digital newspaper collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training). The training data comprises around half a billion words.
To unleash the power of MDMA, we adapted the training routine mainly by fidgeting with the input data.
When preprocessing the text, we prepended each segment of a hundred words with a time stamp (year of publication) and a special `[DATE]` token.
The snippet below, taken from the [Londonderry Sentinel](https://www.britishnewspaperarchive.co.uk/viewer/bl/0001480/18700722/014/0002)...
```
Every scrap of intelligence relative to the war between France and Prussia is now read with interest.
```
... would be formatted as:
```python
"1870 [DATE] Every scrap of intelligence relative to the war between France and Prussia is now read with interest."
```
These text chunks are then forwarded to the data collator, where we mask the year token 75% of the time (hence the '-masked-75' suffix).
Exposed to the tokens and (temporal) metadata, the model learns a relation between text and time. When a text token is hidden, the prepended `year` field influences the prediction of the masked words. Vice versa, when the prepended metadata is hidden, the model predicts the year of publication based on the content.
## Intended Use: LMs as History Machines.
Exposing the model to temporal metadata allows us to investigate **historical language change** and perform **date prediction**.
### Historical Language Change: Her/His Majesty? 👑
Let's show how ERWT works with a very concrete example.
The ERWT models are trained on a handful of British newspapers published between 1800 and 1870. It can be used to monitor historical change in this specific context.
Imagine you are confronted with the following snippet: "We received a letter from [MASK] Majesty" and want to predict the correct pronoun for the masked token (again assuming a British context).
👩🏫 **History Intermezzo** Please remember, for most of the nineteenth century, Queen Victoria ruled Britain, from 1837 to 1901 to be precise. Her nineteenth-century predecessors (George III, George IV and William IV) were all male.
While a standard language model will provide you with a general prediction (based on what it has observed during training), ERWT allows you to manipulate the prediction by anchoring the text in a specific year.
Doing this requires just a few lines of code:
```python
from transformers import pipeline
mask_filler = pipeline("fill-mask",
model='Livingwithmachines/erwt-year-masked-75')
mask_filler(f"1820 [DATE] We received a letter from [MASK] Majesty.")
```
This returns "his" as the most likely filler:
```python
{'score': 0.6003531813621521,
'token': 2010,
'token_str': 'his',
'sequence': '1820 we received a letter from his majesty.'}
```
However, if we change the date at the start of the sentence to 1850:
```python
mask_filler(f"1850 [DATE] We received a letter from [MASK] Majesty.")
```
ERWT puts most of the probability mass on the token "her" and only a little bit on "his".
```python
{'score': 0.5739046931266785,
'token': 2014,
'token_str': 'her',
'sequence': '1850 we received a letter from her majesty.'}
```
You can repeat this experiment for yourself using the example sentences in the **Hosted inference API** at the top right.
Okay, but why is this **interesting**?
Firstly, eyeballing some toy examples (but also using more rigorous metrics such as [perplexity](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)) shows that MLMs yield more accurate predictions when they have access to temporal metadata.
In other words, **ERWT models are better at capturing historical context.**
Secondly, MDMA may **reduce biases** that arise from imbalanced training data (or at least give us more of a handle on this problem). Admittedly, we have to prove this more formally, but some experiments at least hint in this direction.
### Date Prediction: Pub Quiz with LMs 🍻
Another feature of ERWT is **date prediction**. Remember that during training the temporal metadata token is regularly masked. In this case, the model effectively learns to situate documents in time based on the tokens in a text.
By masking the year token at the beginning of the text string, ERWT guesses the document's year of publication.
👩🏫 **History Intermezzo** To unite the German states (there used to be [plenty](https://www.britannica.com/topic/German-Confederation)!), Prussia fought a number of wars with its neighbours in the second half of the nineteenth century. It invaded Denmark in 1864 (the second of the Schleswig Wars) and France in 1870 (the Franco-Prussian war).
Reusing the code above, we can time-stamp documents by masking the year. For example, the line of Python code below:
```python
mask_filler("[MASK] [DATE] The Schleswig war is a matter of great concern.")
```
Outputs as most likely filler:
```python
{'score': 0.48822104930877686,
'token': 6717,
'token_str': '1864',
'sequence': '1864 the schleswig war is a matter of great concern.'}
```
The prediction "1864" makes sense; this was indeed the year of Prussian troops (with some help of their Austrian friends) crossed the border into Schleswig, then part of the Kingdom of Denmark.
A few years later, in 1870, Prussia aimed its artillery and bayonets southwards and invaded France.
```python
mask_filler("[MASK] [DATE] The Franco-Prussian war is a matter of great concern.")
```
ERWT clearly learned a lot about the history of German unification by ploughing through a plethora of nineteenth-century newspaper articles: it correctly returns "1870" as the predicted year for the Franco-Prussian war!
Again, we have to ask: Who cares? Wikipedia can tell us pretty much the same. More importantly, don't we already have timestamps for newspaper data?
In both cases, our answer is "yes, but...". ERWT's time-stamping powers have little instrumental use and won't make us rich (but donations are welcome of course 🤑). Nonetheless, we believe date prediction has value for research purposes. We can use ERWT for "fictitious" prediction, i.e. as a diagnostic tool.
Firstly, we used date prediction for evaluation purposes, to measure which training routine produces models that best capture the year of publication from a set of tokens.
Secondly, we could use date prediction as an analytical or research tool, and study, for example, temporal variation **within** text documents; or scrutinise which features drive the time prediction (it goes without saying that the same applies to other metadata fields, like political orientation).
## Limitations: Not all is well 😮.
The ERWT series were trained for evaluation purposes and therefore carry some critical limitations.
### Training Data
Many of the limitations are a direct result of the training data. ERWT models are trained on a rather small subsample of nineteenth-century **British newspapers**, and its predictions have to be understood in this context (remember, "Her Majesty?"). The corpus has a strong **Metropolitan and liberal bias** (see the section on Data Description for more information).
The training data spans from **1800 to 1870**. If your research interest is outside of this period, it's unlikely that ERWT will be of much use. Don't ask the poor model to predict when the Second World War happened. ERWT can be smart (at times) but it doesn't have the power of fortune-telling. At least not yet...
Furthermore, historical models tend to reflect past (and present?) stereotypes and prejudices. We strongly advise against using these models outside of a research context. The predictions are likely to exhibit harmful biases, they should be investigated critically and understood within the context of nineteenth-century British cultural history.
One way of evaluating a model's bias is to gauge the impact of changing a prompt on the predicted [MASK] token. Often a comparison is made between the predictions given for 'The **man** worked as a [MASK]' and 'The **woman** worked as a [MASK]'.
An example of the output for this model:
```
1810 [DATE] The man worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
'score': 0.12736983597278595,
'token': 10533,
'token_str': 'carpenter'},
{
'score': 0.08986148983240128,
'token': 6243,
'token_str': 'baker'
},
{
'score': 0.08985617756843567,
'token': 22701,
'token_str': 'tailor'
}
]
```
```
1810 [DATE] The woman worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
'score': 0.13835538923740387,
'token': 7947,
'token_str': 'servant'
},
{
'score': 0.0885922908782959,
'token': 6243,
'token_str': 'baker'
},
{
'score': 0.05954848602414131,
'token': 6821,
'token_str': 'nurse'
},
]
```
Mostly, prompt evaluation is done to assess the bias in *contemporary* language models. In the case of historic language models, the bias exhibited by a model *may* be a valuable research tool in assessing (at scale) language use over time, and the stereotypes and prejudices encoded in text corpora.
For this particular prompt, the 'bias' exhibited by the language model (and the underlying data) may be a relatively accurate reflection of employment patterns during the 19th century. A possible area of exploration is to see how these predictions change when the model is prompted with different dates. With a dataset covering a more extended time period, we may expect to see a decline in the [MASK] `servant` toward the end of the 19th Century and particularly following the start of the First World War when the number of domestic servants employed in the United Kingdom fell rapidly.
### Training Routine
We created various ERWT models as part of a wider experiment that aimed to establish best practices and guidelines for training models with metadata. An overview of all the models is available on our [GitHub](https://github.com/Living-with-machines/ERWT/) page.
To reduce training time, we based our experiments on a random subsample of the HMD corpus, consisting of half a billion tokens.
Furthermore, we only trained the models for one epoch, which implies they are most likely **undertrained** at the moment.
We were mainly interested in the **relative** performance of the different ERWT models. We did, however, compare ERWT with [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) in our evaluation experiments. And, of course, our tiny LM peas
did much better. 🎉🥳
Want to know the details—Oh, critical reader!—then consult and cite [our working paper](https://arxiv.org/abs/2211.10086)!
## Data Description
The ERWT models are trained on an openly accessible newspaper corpus created by the [Heritage Made Digital (HMD) newspaper digitisation project](https://blogs.bl.uk/thenewsroom/2019/01/heritage-made-digital-the-newspapers.html).
The HMD newspapers comprise around 2 billion words in total, but the bulk of the articles originate from the (then) liberal paper *The Sun*.
Geographically, most papers are metropolitan (i.e. based in London). The inclusion of *The Northern Daily Times* and *Liverpool Standard* adds some geographical diversity to this corpus. The political classification is based on historical newspaper press directories; please read [our paper](https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqac037/6644524?searchresult=1) on bias in newspaper collections for more information.
The table below contains a more detailed overview of the corpus.
| NLP | Title | Politics | Location | Tokens |
|------|--------------------------|--------------|-----------|---------------|
| 2083 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 14.094.212 |
| 2084 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 34.450.366 |
| 2085 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 16.166.627 |
| 2088 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 149.204.800 |
| 2090 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 6.417.320 |
| 2194 | The Sun | LIBERAL | LONDON | 1.155.791.480 |
| 2244 | Colored News | NONE | LONDON | 53.634 |
| 2642 | The Express | LIBERAL | LONDON | 236.240.555 |
| 2644 | National Register | CONSERVATIVE | LONDON | 23.409.733 |
| 2645 | The Press | CONSERVATIVE | LONDON | 15.702.276 |
| 2646 | The Star | NONE | LONDON | 163.072.742 |
| 2647 | The Statesman | RADICAL | LONDON | 61.225.215 |
Temporally, most of the articles date from the second half of the nineteenth century. The figure below gives an overview of the number of articles by year.

## Evaluation: 🤓 In case you care to count 🤓
Our article ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) comprises an extensive evaluation of all the MDMA-infused language models.
The table below shows the [pseudo-perplexity](https://arxiv.org/abs/1910.14659) scores for different models based on text documents of 64 and 128 tokens.
In general, [ERWT-year-masked-25](https://huggingface.co/Livingwithmachines/erwt-year-masked-25), turned out to yield the most competitive scores across different tasks, and we generally recommend you use this model.
| model | 64 tokens (mean) | 64 tokens (sd) | 128 tokens (mean) | 128 tokens (sd) |
|------------------|----------------|--------|----------------|--------|
| DistilBERT | 354.40 | 376.32 | 229.19 | 294.70 |
| HMDistilBERT | 32.94 | 64.78 | 25.72 | 45.99 |
| ERWT-year | 31.49 | 61.85 | 24.97 | 44.58 |
| ERWT-st | 31.69 | 62.42 | 25.03 | 44.74 |
| ERWT-year-masked-25 | **30.97** | 61.50 | **24.59** | 44.36 |
| ERWT-year-masked-75 | 31.02 | 61.41 | 24.63 | 44.40 |
| PEA | 31.63 | 62.09 | 25.58 | 44.99 |
| PEA-st | 31.65 | 62.19 | 25.59 | 44.99 |
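For readers who want to compute comparable numbers, a simplified pseudo-perplexity sketch in the spirit of Salazar et al. (mask each token in turn, collect the log-probability of the original token, and exponentiate the negative mean). This is an illustrative implementation, not the exact evaluation script behind the table above:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "Livingwithmachines/erwt-year-masked-75"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def pseudo_perplexity(text: str) -> float:
    """Mask each token in turn and score the original token."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    log_probs = []
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs.append(torch.log_softmax(logits, dim=-1)[ids[i]].item())
    return float(torch.exp(-torch.tensor(log_probs).mean()))

print(pseudo_perplexity("1870 [DATE] The Franco-Prussian war is a matter of great concern."))
```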
## Questions?
Questions? Feedback? Please leave a message!
|
nguyenkhoa2407/favs_sort_classification_v2 | nguyenkhoa2407 | 2022-11-21T09:18:48Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sort_v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-11T05:27:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- sort_v2
metrics:
- f1
- accuracy
model-index:
- name: favs_sort_classification_v2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sort_v2
type: sort_v2
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.9801324503311257
- name: Accuracy
type: accuracy
value: 0.896551724137931
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# favs_sort_classification_v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the sort_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1553
- F1: 0.9801
- Roc Auc: 0.9805
- Accuracy: 0.8966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.5589 | 1.0 | 21 | 0.5325 | 0.4815 | 0.6585 | 0.0345 |
| 0.4621 | 2.0 | 42 | 0.4465 | 0.5225 | 0.6780 | 0.0 |
| 0.4144 | 3.0 | 63 | 0.4131 | 0.5950 | 0.7172 | 0.0345 |
| 0.3669 | 4.0 | 84 | 0.3793 | 0.6167 | 0.7279 | 0.0345 |
| 0.3524 | 5.0 | 105 | 0.3455 | 0.6880 | 0.7689 | 0.0690 |
| 0.2987 | 6.0 | 126 | 0.3086 | 0.8116 | 0.8533 | 0.4138 |
| 0.2734 | 7.0 | 147 | 0.2767 | 0.8392 | 0.8772 | 0.5172 |
| 0.2532 | 8.0 | 168 | 0.2483 | 0.8472 | 0.8837 | 0.5172 |
| 0.2166 | 9.0 | 189 | 0.2285 | 0.8707 | 0.9032 | 0.5862 |
| 0.19 | 10.0 | 210 | 0.2012 | 0.9459 | 0.9525 | 0.7586 |
| 0.1833 | 11.0 | 231 | 0.1856 | 0.9530 | 0.9590 | 0.7931 |
| 0.1751 | 12.0 | 252 | 0.1748 | 0.9595 | 0.9610 | 0.7931 |
| 0.173 | 13.0 | 273 | 0.1633 | 0.9467 | 0.9569 | 0.7931 |
| 0.16 | 14.0 | 294 | 0.1553 | 0.9801 | 0.9805 | 0.8966 |
| 0.1396 | 15.0 | 315 | 0.1503 | 0.9733 | 0.9740 | 0.8621 |
| 0.1467 | 16.0 | 336 | 0.1417 | 0.9737 | 0.9785 | 0.8621 |
| 0.1271 | 17.0 | 357 | 0.1380 | 0.9669 | 0.9720 | 0.8621 |
| 0.1228 | 18.0 | 378 | 0.1346 | 0.9669 | 0.9720 | 0.8621 |
| 0.1257 | 19.0 | 399 | 0.1308 | 0.9801 | 0.9805 | 0.8966 |
| 0.1156 | 20.0 | 420 | 0.1280 | 0.9801 | 0.9805 | 0.8966 |
| 0.1242 | 21.0 | 441 | 0.1250 | 0.9801 | 0.9805 | 0.8966 |
| 0.1146 | 22.0 | 462 | 0.1236 | 0.9801 | 0.9805 | 0.8966 |
| 0.1262 | 23.0 | 483 | 0.1228 | 0.9801 | 0.9805 | 0.8966 |
| 0.1268 | 24.0 | 504 | 0.1227 | 0.9801 | 0.9805 | 0.8966 |
| 0.1133 | 25.0 | 525 | 0.1224 | 0.9801 | 0.9805 | 0.8966 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Livingwithmachines/erwt-year-masked-25 | Livingwithmachines | 2022-11-21T09:10:11Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"newspapers",
"library",
"historic",
"glam",
"mdma",
"en",
"arxiv:2211.10086",
"arxiv:1910.14659",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-16T14:53:47Z | ---
language: en
tags:
- newspapers
- library
- historic
- glam
- mdma
license: mit
metrics:
- pseudo-perplexity
widget:
- text: "1820 [DATE] We received a letter from [MASK] Majesty."
- text: "1850 [DATE] We received a letter from [MASK] Majesty."
- text: "[MASK] [DATE] The Franco-Prussian war is a matter of great concern."
- text: "[MASK] [DATE] The Schleswig war is a matter of great concern."
---
**MODEL CARD UNDER CONSTRUCTION, ETA END OF NOVEMBER**
<img src="https://upload.wikimedia.org/wikipedia/commons/5/5b/NCI_peas_in_pod.jpg" alt="erwt" width="200" >
# ERWT-year-masked-25
🌺ERWT\* a language model that (🤭 maybe 🤫) knows more about history than you...🌺
ERWT is a fine-tuned [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model trained on historical newspapers from the [Heritage Made Digital collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training).
We trained a model based on a combination of text and **temporal metadata** (i.e. year information).
ERWT performs [**time-sensitive masked language modelling**](#historical-language-change-herhis-majesty-%F0%9F%91%91) or [**date prediction**](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB).
This model is served by [Kaspar von Beelen](https://huggingface.co/Kaspar) and [Daniel van Strien](https://huggingface.co/davanstrien), *"Improving AI, one pea at a time"*.
If these models happen to be useful, please cite our working paper.
```
@misc{https://doi.org/10.48550/arxiv.2211.10086,
doi = {10.48550/ARXIV.2211.10086},
url = {https://arxiv.org/abs/2211.10086},
author = {Beelen, Kaspar and van Strien, Daniel},
keywords = {Computation and Language (cs.CL), Digital Libraries (cs.DL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Metadata Might Make Language Models Better},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}}
```
\*ERWT is Dutch for PEA.
# Overview
- [Introduction: Repent Now 😇](#introductory-note-repent-now-%F0%9F%98%87)
- [Background: MDMA to the rescue 🙂](#background-mdma-to-the-rescue-%F0%9F%99%82)
- [Intended Use: LMs as History Machines 🚂](#intended-use-lms-as-history-machines)
- [Historical Language Change: Her/His Majesty? 👑](#historical-language-change-herhis-majesty-%F0%9F%91%91)
- [Date Prediction: Pub Quiz with LMs 🍻](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB)
- [Limitations: Not all is well 😮](#limitations-not-all-is-well-%F0%9F%98%AE)
- [Training Data](#training-data)
- [Training Routine](#training-routine)
- [Data Description](#data-description)
- [Evaluation: 🤓 In case you care to count 🤓](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)
## Introductory Note: Repent Now. 😇
The ERWT models are trained for **experimental purposes**.
Please consult the [**limitations**](#limitations-not-all-is-well-%F0%9F%98%AE) section before using the models. (Seriously, read this section, **we don't repent in public just for fun**.)
If you can't get enough of these neural peas and crave some more, you can consult our working paper ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) for more background information and nerdy evaluation stuff (work in progress, handle with care and kindness).
## Background: MDMA to the rescue. 🙂
ERWT was created using a **M**eta**D**ata **M**asking **A**pproach (or **MDMA** 💊), a scenario in which we train a Masked Language Model (MLM) on text and metadata simultaneously. Our intuition was that incorporating metadata (information that *describes* a text but is not part of the content) may make language models "better", or at least make them more **sensitive** to historical, political and geographical aspects of language use. We mainly use temporal, political and geographical metadata.
ERWT is a [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model, fine-tuned on a random subsample taken from the [Heritage Made Digital newspaper collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training). The training data comprises around half a billion words.
To unleash the power of MDMA, we adapted the training routine mainly by fidgeting with the input data.
When preprocessing the text, we prepended each segment of a hundred words with a time stamp (year of publication) and a special `[DATE]` token.
The snippet below, taken from the [Londonderry Sentinel](https://www.britishnewspaperarchive.co.uk/viewer/bl/0001480/18700722/014/0002)...
```
Every scrap of intelligence relative to the war between France and Prussia is now read with interest.
```
... would be formatted as:
```python
"1870 [DATE] Every scrap of intelligence relative to the war between France and Prussia is now read with interest."
```
These text chunks are then forwarded to the data collator, where we mask the year token 25% of the time (hence the '-masked-25' suffix).
Exposed to the tokens and (temporal) metadata, the model learns a relation between text and time. When a text token is hidden, the prepended `year` field influences the prediction of the masked words. Vice versa, when the prepended metadata is hidden, the model predicts the year of publication based on the content.
## Intended Use: LMs as History Machines.
Exposing the model to temporal metadata allows us to investigate **historical language change** and perform **date prediction**.
### Historical Language Change: Her/His Majesty? 👑
Let's show how ERWT works with a very concrete example.
The ERWT models are trained on a handful of British newspapers published between 1800 and 1870. It can be used to monitor historical change in this specific context.
Imagine you are confronted with the following snippet: "We received a letter from [MASK] Majesty" and want to predict the correct pronoun for the masked token (again assuming a British context).
👩🏫 **History Intermezzo** Please remember, for most of the nineteenth century, Queen Victoria ruled Britain, from 1837 to 1901 to be precise. Her nineteenth-century predecessors (George III, George IV and William IV) were all male.
While a standard language model will provide you with one general prediction (based on what it has observed during training), ERWT allows you to manipulate the prediction by anchoring the text in a specific year.
Doing this requires just a few lines of code:
```python
from transformers import pipeline
mask_filler = pipeline("fill-mask",
model='Livingwithmachines/erwt-year-masked-25')
mask_filler(f"1820 [DATE] We received a letter from [MASK] Majesty.")
```
This returns "his" as the most likely filler:
```python
{'score': 0.8096420168876648,
'token': 2010,
'token_str': 'his',
'sequence': '1820 we received a letter from his majesty.'}
```
However, if we change the date at the start of the sentence to 1850:
```python
mask_filler(f"1850 [DATE] We received a letter from [MASK] Majesty.")
```
ERWT puts most of the probability mass on the token "her" and only a little bit on "his".
```python
{'score': 0.7587488293647766,
'token': 2014,
'token_str': 'her',
 'sequence': '1850 we received a letter from her majesty.'}
```
You can repeat this experiment for yourself using the example sentences in the **Hosted inference API** at the top right.
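If you would rather script it than click around, a small sketch (reusing the `mask_filler` pipeline from above) loops over a few years and tracks how the probability shifts between "his" and "her":
```python
# Sketch: the same sentence anchored in different years.
for year in [1820, 1830, 1840, 1850, 1860, 1870]:
    preds = mask_filler(f"{year} [DATE] We received a letter from [MASK] Majesty.", top_k=5)
    scores = {p["token_str"]: round(p["score"], 3) for p in preds}
    print(year, {k: v for k, v in scores.items() if k in ("his", "her")})
```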
Okay, but why is this **interesting**?
Firstly, eyeballing some toy examples (but also using more rigorous metrics such as [perplexity](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)) shows that MLMs yield more accurate predictions when they have access to temporal metadata.
In other words, **ERWT models are better at capturing historical context.**
Secondly, MDMA may **reduce biases** that arise from imbalanced training data (or at least give us more of a handle on this problem). Admittedly, we have to prove this more formally, but some experiments at least hint in this direction.
### Date Prediction: Pub Quiz with LMs 🍻
Another feature of ERWT is **date prediction**. Remember that, during training, the temporal metadata token is regularly masked; when it is, the model effectively learns to situate documents in time based on the tokens in a text.
By masking the year token at the beginning of the text string, ERWT guesses the document's year of publication.
👩🏫 **History Intermezzo** To unite the German states (there used to be [plenty](https://www.britannica.com/topic/German-Confederation)!), Prussia fought a number of wars with its neighbours in the second half of the nineteenth century. It invaded Denmark in 1864 (the second of the Schleswig Wars) and France in 1870 (the Franco-Prussian war).
Reusing the code above, we can time-stamp documents by masking the year token. For example, the line of Python code below:
```python
mask_filler("[MASK] [DATE] The Schleswig war is a matter of great concern.")
```
outputs the following as the most likely filler:
```python
{'score': 0.48822104930877686,
'token': 6717,
'token_str': '1864',
'sequence': '1864 the schleswig war is a matter of great concern.'}
```
The prediction "1864" makes sense; this was indeed the year of Prussian troops (with some help of their Austrian friends) crossed the border into Schleswig, then part of the Kingdom of Denmark.
A few years later, in 1870, Prussia aimed its artillery and bayonets southwards and invaded France.
```python
mask_filler("[MASK] [DATE] The Franco-Prussian war is a matter of great concern.")
```
ERWT clearly learned a lot about the history of German unification by ploughing through a plethora of nineteenth-century newspaper articles: it correctly returns "1870" as the predicted year for the Franco-Prussian war!
Again, we have to ask: Who cares? Wikipedia can tell us pretty much the same. More importantly, don't we already have timestamps for newspaper data?
In both cases, our answer is "yes, but...". ERWT's time-stamping powers have little instrumental use and won't make us rich (but donations are welcome of course 🤑). Nonetheless, we believe date prediction has value for research purposes. We can use ERWT for "fictitious" prediction, i.e. as a diagnostic tool.
Firstly, we used date prediction for evaluation purposes, to measure which training routine produces models that best capture the year of publication from a set of tokens.
Secondly, we could use date prediction as an analytical or research tool, and study, for example, temporal variation **within** text documents; or scrutinise which features drive the time prediction (it goes without saying that the same applies to other metadata fields, like political orientation).
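As a rough sketch of this kind of probing (`top_k` is a standard argument of the fill-mask pipeline; the thresholding below is purely illustrative), the masked date can be read off as a truncated distribution over candidate years:
```python
# Sketch: a (truncated) probability distribution over years for a masked date token.
preds = mask_filler("[MASK] [DATE] The Schleswig war is a matter of great concern.", top_k=10)
year_scores = {p["token_str"]: p["score"] for p in preds if p["token_str"].isdigit()}
for year, score in sorted(year_scores.items(), key=lambda kv: -kv[1]):
    print(year, round(score, 3))
```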
## Limitations: Not all is well 😮.
The ERWT series were trained for evaluation purposes and therefore carry some critical limitations.
### Training Data
Many of the limitations are a direct result of the training data. The ERWT models are trained on a rather small subsample of nineteenth-century **British newspapers**, and their predictions have to be understood in this context (remember, "Her Majesty?"). The corpus has a strong **Metropolitan and liberal bias** (see the [Data Description](#data-description) section for more information).
The training data spans from **1800 to 1870**. If your research interest is outside of this period, it's unlikely that ERWT will be of much use. Don't ask the poor model to predict when the Second World War happened. ERWT can be smart (at times) but it doesn't have the power of fortune-telling. At least not yet...
Furthermore, historical models tend to reflect past (and present?) stereotypes and prejudices. We strongly advise against using these models outside of a research context. The predictions are likely to exhibit harmful biases; they should be investigated critically and understood within the context of nineteenth-century British cultural history.
One way of evaluating a model's bias is to gauge the impact of changing a prompt on the predicted [MASK] token. Often a comparison is made between the predictions given for 'The **man** worked as a [MASK]' and 'The **woman** worked as a [MASK]'.
For example, the prompt:
```
1810 [DATE] The man worked as a [MASK].
```
produces the following three top predicted mask tokens:
```python
[
{
'score': 0.15719665586948395,
'token': 10533,
'token_str': 'carpenter',
},
{
'score': 0.09576332569122314,
'token': 6243,
'token_str': 'baker',
},
{
'score': 0.08851779252290726,
'token': 22701,
'token_str': 'tailor',
}
]
```
By contrast, the prompt:
```
1810 [DATE] The woman worked as a [MASK].
```
produces the following three top predicted mask tokens:
```python
[
{
'score': 0.1492135375738144,
'token': 7947,
'token_str': 'servant',
},
{
'score': 0.09587471932172775,
'token': 6243,
'token_str': 'baker',
},
{
'score': 0.06408561021089554,
'token': 10533,
'token_str': 'carpenter',
}
]
```
Mostly, prompt evaluation is done to assess the bias in *contemporary* language models. In the case of historic language models, the bias exhibited by a model *may* be a valuable research tool in assessing (at scale) language use over time, and the stereotypes and prejudices encoded in text corpora.
For this particular prompt, the 'bias' exhibited by the language model (and the underlying data) may be a relatively accurate reflection of employment patterns during the nineteenth century. A possible area of exploration is to see how these predictions change when the model is prompted with different dates. With a dataset covering a more extended time period, we might expect the `servant` prediction to decline toward the end of the nineteenth century, and particularly after the start of the First World War, when the number of domestic servants employed in the United Kingdom fell rapidly.
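A minimal sketch of such an exploration (it simply reuses the `mask_filler` pipeline defined above and varies the prepended year):
```python
# Sketch: track how the top occupation predictions change with the prepended year.
for year in [1810, 1830, 1850, 1870]:
    preds = mask_filler(f"{year} [DATE] The woman worked as a [MASK].", top_k=3)
    print(year, [p["token_str"] for p in preds])
```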
### Training Routine
We created various ERWT models as part of a wider experiment that aimed to establish best practices and guidelines for training models with metadata. An overview of all the models is available on our [GitHub](https://github.com/Living-with-machines/ERWT/) page.
To reduce training time, we based our experiments on a random subsample of the HMD corpus, consisting of half a billion tokens.
Furthermore, we only trained the models for one epoch, which implies they are most likely **undertrained** at the moment.
We were mainly interested in the **relative** performance of the different ERWT models. We did, however, compare ERWT with [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) in our evaluation experiments. And, of course, our tiny LM peas
did much better. 🎉🥳
Want to know the details, oh critical reader? Then consult and cite [our working paper](https://arxiv.org/abs/2211.10086)!
## Data Description
The ERWT models are trained on an openly accessible newspaper corpus created by the [Heritage Made Digital (HMD) newspaper digitisation project](https://blogs.bl.uk/thenewsroom/2019/01/heritage-made-digital-the-newspapers.html).
The HMD newspapers comprise around 2 billion words in total, but the bulk of the articles originate from the (then) liberal paper *The Sun*.
Geographically, most papers are metropolitan (i.e. based in London). The inclusion of *The Northern Daily Times* and the *Liverpool Standard* adds some geographical diversity to this corpus. The political classification is based on historical newspaper press directories; please read [our paper](https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqac037/6644524?searchresult=1) on bias in newspaper collections for more information.
The table below contains a more detailed overview of the corpus.
| NLP | Title | Politics | Location | Tokens |
|------|--------------------------|--------------|-----------|---------------|
| 2083 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 14,094,212 |
| 2084 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 34,450,366 |
| 2085 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 16,166,627 |
| 2088 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 149,204,800 |
| 2090 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 6,417,320 |
| 2194 | The Sun | LIBERAL | LONDON | 1,155,791,480 |
| 2244 | Colored News | NONE | LONDON | 53,634 |
| 2642 | The Express | LIBERAL | LONDON | 236,240,555 |
| 2644 | National Register | CONSERVATIVE | LONDON | 23,409,733 |
| 2645 | The Press | CONSERVATIVE | LONDON | 15,702,276 |
| 2646 | The Star | NONE | LONDON | 163,072,742 |
| 2647 | The Statesman | RADICAL | LONDON | 61,225,215 |
Temporally, most of the articles date from the second half of the nineteenth century. The figure below gives an overview of the number of articles by year.

## Evaluation: 🤓 In case you care to count 🤓
Our article ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) comprises an extensive evaluation of all the MDMA-infused language models.
The table below shows the [pseudo-perplexity](https://arxiv.org/abs/1910.14659) scores for different models based on text documents of 64 and 128 tokens.
In general, this model, [ERWT-year-masked-25](https://huggingface.co/Livingwithmachines/erwt-year-masked-25), turned out to yield the most competitive scores across different tasks (yay!) and we generally recommend you use this model.
| model | mean (64 tokens) | sd (64 tokens) | mean (128 tokens) | sd (128 tokens) |
|---------------------|------------------|----------------|-------------------|-------------------|
| DistilBERT | 354.40 | 376.32 | 229.19 | 294.70 |
| HMDistilBERT | 32.94 | 64.78 | 25.72 | 45.99 |
| ERWT-year | 31.49 | 61.85 | 24.97 | 44.58 |
| ERWT-st | 31.69 | 62.42 | 25.03 | 44.74 |
| ERWT-year-masked-25 | **30.97** | 61.50 | **24.59** | 44.36 |
| ERWT-year-masked-75 | 31.02 | 61.41 | 24.63 | 44.40 |
| PEA | 31.63 | 62.09 | 25.58 | 44.99 |
| PEA-st | 31.65 | 62.19 | 25.59 | 44.99 |
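For reference, here is a minimal sketch of how such a pseudo-perplexity score can be computed for a single document (an illustration following the general masked-scoring recipe of the linked paper, not our exact evaluation code):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "Livingwithmachines/erwt-year-masked-25"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def pseudo_perplexity(text: str) -> float:
    """Mask each token in turn, score the true token, and average the log-probabilities."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    log_probs = []
    with torch.no_grad():
        for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs.append(torch.log_softmax(logits, dim=-1)[input_ids[i]].item())
    return float(torch.exp(-torch.tensor(log_probs).mean()))

print(pseudo_perplexity("1870 [DATE] Every scrap of intelligence relative to the war is now read with interest."))
```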
## Questions?
Questions? Feedback? Please leave a message!
|
SWQ/GECgpt2finetune | SWQ | 2022-11-21T09:05:58Z | 168 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-21T07:59:47Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gptfinetune2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gptfinetune2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1463
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 482 | 3.1945 |
| 3.4235 | 2.0 | 964 | 3.1655 |
| 3.2473 | 3.0 | 1446 | 3.1560 |
| 3.1981 | 4.0 | 1928 | 3.1508 |
| 3.1767 | 5.0 | 2410 | 3.1477 |
| 3.1502 | 6.0 | 2892 | 3.1467 |
| 3.1387 | 7.0 | 3374 | 3.1464 |
| 3.1275 | 8.0 | 3856 | 3.1463 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AkhilD1/distilbert-base-uncased-finetuned-emotion | AkhilD1 | 2022-11-21T07:46:47Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-21T06:11:27Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9240401598601309
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2178
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8155 | 1.0 | 250 | 0.3197 | 0.906 | 0.9022 |
| 0.2508 | 2.0 | 500 | 0.2178 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Guroruseru/xlm-roberta-base-finetuned-panx-en | Guroruseru | 2022-11-21T07:44:23Z | 137 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-21T07:41:29Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6870144284128746
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3949
- F1: 0.6870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1057 | 1.0 | 50 | 0.5767 | 0.4754 |
| 0.4987 | 2.0 | 100 | 0.4370 | 0.6365 |
| 0.3708 | 3.0 | 150 | 0.3949 | 0.6870 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Guroruseru/xlm-roberta-base-finetuned-panx-fr | Guroruseru | 2022-11-21T07:37:57Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-21T07:34:08Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8303152246814219
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2776
- F1: 0.8303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5895 | 1.0 | 191 | 0.3318 | 0.7894 |
| 0.263 | 2.0 | 382 | 0.2873 | 0.8175 |
| 0.1782 | 3.0 | 573 | 0.2776 | 0.8303 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Guroruseru/xlm-roberta-base-finetuned-panx-de | Guroruseru | 2022-11-21T07:23:39Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-21T04:08:29Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8663101604278075
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2581 | 1.0 | 525 | 0.1690 | 0.8303 |
| 0.1305 | 2.0 | 1050 | 0.1352 | 0.8484 |
| 0.0839 | 3.0 | 1575 | 0.1339 | 0.8663 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DONG19/ddpm-butterflies-128 | DONG19 | 2022-11-21T04:08:47Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-21T01:58:47Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/DONG19/ddpm-butterflies-128/tensorboard?#scalars)
|
BorysCorp/Borys | BorysCorp | 2022-11-21T03:54:56Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-21T03:54:56Z | ---
license: creativeml-openrail-m
---
|
gavincapriola/ddpm-butterflies-128 | gavincapriola | 2022-11-21T02:33:16Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-21T02:02:54Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/gavincapriola/ddpm-butterflies-128/tensorboard?#scalars)
|
ridhodaffasyah/results | ridhodaffasyah | 2022-11-21T02:31:31Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-20T16:18:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
xaeroq/q-Taxi-v3 | xaeroq | 2022-11-20T23:46:44Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-20T23:46:37Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="xaeroq/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
PublicPrompts/Synthwave | PublicPrompts | 2022-11-20T23:30:18Z | 0 | 48 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-20T22:00:02Z | ---
license: creativeml-openrail-m
---
Stable Diffusion model to create images in Synthwave/outrun style, trained using DreamBooth
Trigger phrase: snthwve style
More models on my site: https://publicprompts.art/
Example of generated images:









|
sd-concepts-library/filename-2 | sd-concepts-library | 2022-11-20T23:08:08Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2022-11-20T22:46:14Z | ---
license: mit
---
### Filename_2 on Stable Diffusion
This is the `<filename>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
classtest/berttest2 | classtest | 2022-11-20T22:40:02Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-16T19:34:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: berttest2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9137532981530343
- name: Recall
type: recall
value: 0.932514304947829
- name: F1
type: f1
value: 0.9230384807596203
- name: Accuracy
type: accuracy
value: 0.9822805674927886
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.8984100471155513
verified: true
- name: Precision
type: precision
value: 0.9270828085377937
verified: true
- name: Recall
type: recall
value: 0.9152932984050137
verified: true
- name: F1
type: f1
value: 0.9211503324684426
verified: true
- name: loss
type: loss
value: 0.7076165080070496
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# berttest2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0674
- Precision: 0.9138
- Recall: 0.9325
- F1: 0.9230
- Accuracy: 0.9823
## Model description
Model implemented for CSE 573 Course Project
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0869 | 1.0 | 1756 | 0.0674 | 0.9138 | 0.9325 | 0.9230 | 0.9823 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.2
|
tmobaggins/bert-finetuned-squad | tmobaggins | 2022-11-20T22:24:05Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-14T23:19:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
This is a first attempt at following the directions from the huggingface course. It was run on colab and a private server
## Intended uses & limitations
This model is fine-tuned for extractive question answering.
## Training and evaluation data
SQuAD
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
cabir40/t5-v1.1-base-dutch-cased_inversion | cabir40 | 2022-11-20T22:20:59Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-20T22:08:31Z | ```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = 'cabir40/t5-v1.1-base-dutch-cased_inversion'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
```
```python
document =["Zonder relatie mensen zijn gelukkig?",
"Nu steeds meer Nederlanders worden ouder dan 100 jaar.",
"Gewoon ik open mijn ogen wijd, zodat het lijkt of ik goed luister.",
"Dan het wordt moeilijk, als anderen beginnen over andere dingen te praten",
]
inputs = tokenizer(document, return_tensors="pt", padding=True)
output_sequences = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"] )
tokenizer.batch_decode(output_sequences, skip_special_tokens=True)
```
```bash
['Zonder relatie zijn mensen gelukkig?',
'Nu worden steeds meer Nederlanders ouder dan 100 jaar.',
'Gewoon open ik mijn ogen wijd, zodat het lijkt of ik goed luister.',
'Dan wordt het moeilijk, als anderen beginnen over andere dingen te praten Dan']
``` |
flamesbob/Sakimi_mdoel | flamesbob | 2022-11-20T22:10:27Z | 0 | 3 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-20T02:01:42Z | ---
license: creativeml-openrail-m
---
|
Wheatley961/Raw_1_no_1_Test_3_new.model | Wheatley961 | 2022-11-20T21:49:14Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-11-20T21:48:43Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 760 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 8.680327780950434e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 760,
"warmup_steps": 76,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
consciousAI/question-generation-auto-t5-v1-base-s | consciousAI | 2022-11-20T21:42:51Z | 121 | 2 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"Question(s) Generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-10-21T02:15:43Z | ---
tags:
- Question(s) Generation
metrics:
- rouge
model-index:
- name: consciousAI/question-generation-auto-t5-v1-base-s
results: []
---
# Auto Question Generation
The model is intended to be used for Auto Question Generation task i.e. no hint are required as input. The model is expected to produce one or possibly more than one question from the provided context.
[Live Demo: Question Generation](https://huggingface.co/spaces/consciousAI/question_generation)
Including this there are five models trained with different training sets, demo provide comparison to all in one go. However, you can reach individual projects at below links:
[Auto Question Generation v2](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s-q)
[Auto Question Generation v3](https://huggingface.co/consciousAI/question-generation-auto-t5-v1-base-s-q-c)
[Auto/Hints based Question Generation v1](https://huggingface.co/consciousAI/question-generation-auto-hints-t5-v1-base-s-q)
[Auto/Hints based Question Generation v2](https://huggingface.co/consciousAI/question-generation-auto-hints-t5-v1-base-s-q-c)
This model can be used as below:
```
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer
)

model_checkpoint = "consciousAI/question-generation-auto-t5-v1-base-s"
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

## Input with prompt
context = "question_context: <context>"
encodings = tokenizer.encode(context, return_tensors='pt', truncation=True, padding='max_length').to(device)
## You can play with many hyperparams to condition the output, look at demo
output = model.generate(encodings,
#max_length=300,
#min_length=20,
#length_penalty=2.0,
num_beams=4,
#early_stopping=True,
#do_sample=True,
#temperature=1.1
)
## Multiple questions are expected to be delimited by '?' You can write a small wrapper to elegantly format. Look at the demo.
questions = [tokenizer.decode(id, clean_up_tokenization_spaces=False, skip_special_tokens=False) for id in output]
```
## Training and evaluation data
SQUAD split.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
Rouge metrics is heavily penalized because of multiple questions in target sample space,
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.0146 | 1.0 | 4758 | 1.6980 | 0.143 | 0.0705 | 0.1257 | 0.1384 |
...
| 1.1733 | 9.0 | 23790 | 1.6319 | 0.1404 | 0.0718 | 0.1239 | 0.1351 |
| 1.1225 | 10.0 | 28548 | 1.6476 | 0.1407 | 0.0716 | 0.1245 | 0.1356 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.0
|
huggingtweets/bretweinstein-ericrweinstein | huggingtweets | 2022-11-20T20:32:54Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-20T20:31:28Z | ---
language: en
thumbnail: http://www.huggingtweets.com/bretweinstein-ericrweinstein/1668976370447/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1405314351486144519/Uage5phF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/931641662538792961/h4d0n-Mr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eric Weinstein & Bret Weinstein</div>
<div style="text-align: center; font-size: 14px;">@bretweinstein-ericrweinstein</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Eric Weinstein & Bret Weinstein.
| Data | Eric Weinstein | Bret Weinstein |
| --- | --- | --- |
| Tweets downloaded | 3249 | 3229 |
| Retweets | 31 | 551 |
| Short tweets | 300 | 223 |
| Tweets kept | 2918 | 2455 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y1x1qbzj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bretweinstein-ericrweinstein's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3sg9or4v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3sg9or4v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bretweinstein-ericrweinstein')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Yanjie24/pegasus-samsum | Yanjie24 | 2022-11-20T20:06:46Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-13T01:51:13Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3587
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6942 | 0.54 | 500 | 1.4832 |
| 1.4133 | 1.09 | 1000 | 1.4111 |
| 1.5088 | 1.63 | 1500 | 1.3778 |
| 1.4368 | 2.17 | 2000 | 1.3645 |
| 1.4041 | 2.72 | 2500 | 1.3587 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
huggingtweets/ericrweinstein | huggingtweets | 2022-11-20T20:05:38Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/ericrweinstein/1668974734642/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1405314351486144519/Uage5phF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Eric Weinstein</div>
<div style="text-align: center; font-size: 14px;">@ericrweinstein</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Eric Weinstein.
| Data | Eric Weinstein |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 31 |
| Short tweets | 300 |
| Tweets kept | 2918 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/22h73s5k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ericrweinstein's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/35jswvdg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/35jswvdg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ericrweinstein')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
kormilitzin/en_core_spancat_med7_lg | kormilitzin | 2022-11-20T20:01:54Z | 2 | 1 | spacy | [
"spacy",
"en",
"license:mit",
"region:us"
] | null | 2022-11-20T20:00:14Z | ---
tags:
- spacy
language:
- en
license: mit
model-index:
- name: en_core_spancat_med7_lg
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_core_spancat_med7_lg` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `spancat` |
| **Components** | `tok2vec`, `spancat` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
### Label Scheme
<details>
<summary>View label scheme (8 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `DOSAGE`, `MEDINFO`, `DRUG`, `STRENGTH`, `FREQUENCY`, `ROUTE`, `DURATION`, `FORM` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `SPANS_SC_F` | 85.21 |
| `SPANS_SC_P` | 91.52 |
| `SPANS_SC_R` | 79.71 |
| `TOK2VEC_LOSS` | 260.85 |
| `SPANCAT_LOSS` | 282817.13 |
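### Usage
A minimal usage sketch, assuming the pipeline package has been installed locally (e.g. from the released wheel) and that spans are stored under spaCy's default `sc` key; the example sentence is invented:
```python
import spacy

# Load the installed pipeline package by name.
nlp = spacy.load("en_core_spancat_med7_lg")

doc = nlp("Prescribed aspirin 75 mg, one tablet orally once daily for two weeks.")
for span in doc.spans["sc"]:
    print(span.text, span.label_)
```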
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
``` |
huggingtweets/theallinpod | huggingtweets | 2022-11-20T19:34:12Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-20T19:31:58Z | ---
language: en
thumbnail: http://www.huggingtweets.com/theallinpod/1668972848736/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1281703300966969345/B8MN4HlO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The All-In Podcast 💧🐦</div>
<div style="text-align: center; font-size: 14px;">@theallinpod</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The All-In Podcast 💧🐦.
| Data | The All-In Podcast 💧🐦 |
| --- | --- |
| Tweets downloaded | 1976 |
| Retweets | 190 |
| Short tweets | 585 |
| Tweets kept | 1201 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/10sdjga6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theallinpod's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ol65ioa) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ol65ioa/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theallinpod')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gigabrain/cag | gigabrain | 2022-11-20T19:19:28Z | 112 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-20T19:10:27Z | ---
language: en
thumbnail: http://www.huggingtweets.com/doveywan-irenezhao_-layahheilpern/1668969714119/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1592314373317558274/kWBIBveR_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1569305276343369729/9tyrIoYq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1423875044598456321/SVjwd6Bb_400x400.jpg')">
</div>
</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Layah Heilpern & Dovey "Rug The Fiat" Wan & Irene Zhao</div>
<div style="text-align: center; font-size: 14px;">@doveywan-irenezhao_-layahheilpern</div>
</div>
## How does it work?
The model uses the following pipeline.

## Training data
The model was trained on tweets from Layah Heilpern & Dovey "Rug The Fiat" Wan & Irene Zhao.
| Data | Layah Heilpern | Dovey "Rug The Fiat" Wan | Irene Zhao |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3247 | 1945 |
| Retweets | 115 | 310 | 223 |
| Short tweets | 1453 | 269 | 417 |
| Tweets kept | 1681 | 2668 | 1305 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/38f27zgg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @doveywan-irenezhao_-layahheilpern's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zek1fxw0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zek1fxw0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cag')
generator("In crypto, ", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Gigabrain*
|
huggingtweets/chamath-davidsacks-friedberg | huggingtweets | 2022-11-20T19:15:10Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-20T19:13:12Z | ---
language: en
thumbnail: http://www.huggingtweets.com/chamath-davidsacks-friedberg/1668971705740/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241949342967029762/CZO9M-WG_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1398157893774413825/vQ-FwRtP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1257066367892639744/Yh-QS3we_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">david friedberg & David Sacks & Chamath Palihapitiya</div>
<div style="text-align: center; font-size: 14px;">@chamath-davidsacks-friedberg</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from david friedberg & David Sacks & Chamath Palihapitiya.
| Data | david friedberg | David Sacks | Chamath Palihapitiya |
| --- | --- | --- | --- |
| Tweets downloaded | 910 | 3245 | 3249 |
| Retweets | 82 | 553 | 112 |
| Short tweets | 54 | 291 | 861 |
| Tweets kept | 774 | 2401 | 2276 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2jbjx03t/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chamath-davidsacks-friedberg's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/14pr3hxs) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/14pr3hxs/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chamath-davidsacks-friedberg')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
fernanda-dionello/autotrain-goodreads_without_bookid-2171169884 | fernanda-dionello | 2022-11-20T17:13:39Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:fernanda-dionello/autotrain-data-goodreads_without_bookid",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-20T17:04:02Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- fernanda-dionello/autotrain-data-goodreads_without_bookid
co2_eq_emissions:
emissions: 21.014243837592847
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2171169884
- CO2 Emissions (in grams): 21.0142
## Validation Metrics
- Loss: 0.815
- Accuracy: 0.666
- Macro F1: 0.454
- Micro F1: 0.666
- Weighted F1: 0.649
- Macro Precision: 0.465
- Micro Precision: 0.666
- Weighted Precision: 0.638
- Macro Recall: 0.454
- Micro Recall: 0.666
- Weighted Recall: 0.666
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/fernanda-dionello/autotrain-goodreads_without_bookid-2171169884
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("fernanda-dionello/autotrain-goodreads_without_bookid-2171169884", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("fernanda-dionello/autotrain-goodreads_without_bookid-2171169884", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
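# Illustrative addition (not in the original snippet); assumes id2label is set in the model config
predicted_class_id = outputs.logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class_id])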
``` |
fernanda-dionello/autotrain-goodreads_without_bookid-2171169883 | fernanda-dionello | 2022-11-20T17:07:17Z | 100 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:fernanda-dionello/autotrain-data-goodreads_without_bookid",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-20T17:03:45Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- fernanda-dionello/autotrain-data-goodreads_without_bookid
co2_eq_emissions:
emissions: 7.7592453257413565
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2171169883
- CO2 Emissions (in grams): 7.7592
## Validation Metrics
- Loss: 1.024
- Accuracy: 0.579
- Macro F1: 0.360
- Micro F1: 0.579
- Weighted F1: 0.560
- Macro Precision: 0.383
- Micro Precision: 0.579
- Weighted Precision: 0.553
- Macro Recall: 0.353
- Micro Recall: 0.579
- Weighted Recall: 0.579
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/fernanda-dionello/autotrain-goodreads_without_bookid-2171169883
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("fernanda-dionello/autotrain-goodreads_without_bookid-2171169883", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("fernanda-dionello/autotrain-goodreads_without_bookid-2171169883", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
fernanda-dionello/autotrain-goodreads_without_bookid-2171169882 | fernanda-dionello | 2022-11-20T17:06:43Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:fernanda-dionello/autotrain-data-goodreads_without_bookid",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-20T17:03:44Z | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- fernanda-dionello/autotrain-data-goodreads_without_bookid
co2_eq_emissions:
emissions: 6.409243088343928
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2171169882
- CO2 Emissions (in grams): 6.4092
## Validation Metrics
- Loss: 0.950
- Accuracy: 0.586
- Macro F1: 0.373
- Micro F1: 0.586
- Weighted F1: 0.564
- Macro Precision: 0.438
- Micro Precision: 0.586
- Weighted Precision: 0.575
- Macro Recall: 0.399
- Micro Recall: 0.586
- Weighted Recall: 0.586
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/fernanda-dionello/autotrain-goodreads_without_bookid-2171169882
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("fernanda-dionello/autotrain-goodreads_without_bookid-2171169882", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("fernanda-dionello/autotrain-goodreads_without_bookid-2171169882", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
sd-concepts-library/iridescent-photo-style | sd-concepts-library | 2022-11-20T16:43:03Z | 0 | 11 | null | [
"license:mit",
"region:us"
] | null | 2022-11-02T18:03:35Z | ---
license: mit
---
### Iridescent Photo Style on Stable Diffusion
This is the 'iridescent-photo-style' concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:







Here are images generated with this style:


 |
Yagorka/ddpm-butterflies-128 | Yagorka | 2022-11-20T16:37:42Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-23T19:57:41Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# Illustrative sketch (not from the original card); assumes the standard diffusers DDPMPipeline API
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("Yagorka/ddpm-butterflies-128")
image = pipeline().images[0]  # a generated butterfly image (PIL)
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yagorka/ddpm-butterflies-128/tensorboard?#scalars)
|
jonfreak/tvdino | jonfreak | 2022-11-20T16:27:24Z | 0 | 1 | null | [
"region:us"
] | null | 2022-11-20T16:14:27Z | Trained on 20 images, 2000 steps.
With TheLastBen fast-stable-diffusion (https://github.com/TheLastBen/fast-stable-diffusion)
use the token **tvdino**
 |
LaurentiuStancioiu/xlm-roberta-base-finetuned-panx-all | LaurentiuStancioiu | 2022-11-20T15:52:32Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-20T15:23:46Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Elitay/Reptilian | Elitay | 2022-11-20T15:12:39Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-20T02:22:58Z | ---
license: creativeml-openrail-m
---
Trained with Dreambooth on the tokens "kobold", "lizardfolk", and "dragonborn" for 6000, 10000, or 14000 steps. I recommend using the 14000-step model with a CFG of 4-8. You may need to use the models trained for fewer steps if you're having difficulty getting certain elements into the image (e.g. hats).

You can also use a higher CFG when generating inked images, e.g. CFG 9 with "photo octane 3d render" in the negative prompt:
 |
dpkmnit/bert-finetuned-squad | dpkmnit | 2022-11-20T14:58:13Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-18T06:19:21Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dpkmnit/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpkmnit/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7048
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 66549, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2092 | 0 |
| 0.7048 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.1
- Datasets 2.7.0
- Tokenizers 0.13.2
|
LaurentiuStancioiu/xlm-roberta-base-finetuned-panx-de-fr | LaurentiuStancioiu | 2022-11-20T14:23:38Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-20T13:54:03Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Harrier/Reinforce-CartPole-0 | Harrier | 2022-11-20T14:13:38Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-20T14:04:19Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 195.60 +/- 31.30
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
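For readers who want to see what the technique itself looks like, here is a minimal REINFORCE training sketch for CartPole-v1 (an illustration only, not the script used to train this checkpoint; the network size, learning rate, and discount factor are assumptions, and the pre-0.26 `gym` API is assumed):
```python
# Minimal REINFORCE sketch for CartPole-v1 (illustrative only; not the original training code)
# Assumes the pre-0.26 gym API, where env.step returns (obs, reward, done, info)
import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99  # assumed discount factor

for episode in range(500):
    state, log_probs, rewards, done = env.reset(), [], [], False
    while not done:
        probs = torch.softmax(policy(torch.as_tensor(state, dtype=torch.float32)), dim=-1)
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done, _ = env.step(action.item())
        rewards.append(reward)
    # Discounted returns, normalized, then the policy-gradient loss
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.as_tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```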
|
LaurentiuStancioiu/xlm-roberta-base-finetuned-panx-de | LaurentiuStancioiu | 2022-11-20T13:34:33Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-20T13:07:34Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
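In the absence of documented usage, a token-classification pipeline sketch would presumably work (an assumption, not from the card author):
```python
# Illustrative sketch; assumes the checkpoint is available under this repo id with NER labels
from transformers import pipeline

ner = pipeline("token-classification",
               model="LaurentiuStancioiu/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```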
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
davidaponte/kd-distilBERT-clinc | davidaponte | 2022-11-20T13:34:09Z | 106 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-19T05:45:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: kd-distilBERT-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: train
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9129032258064517
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kd-distilBERT-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7752
- Accuracy: 0.9129
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.3211 | 1.0 | 318 | 3.3313 | 0.7235 |
| 2.6568 | 2.0 | 636 | 1.9016 | 0.8452 |
| 1.5575 | 3.0 | 954 | 1.1668 | 0.8955 |
| 1.0094 | 4.0 | 1272 | 0.8619 | 0.9087 |
| 0.7914 | 5.0 | 1590 | 0.7752 | 0.9129 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Western1234/Modelop | Western1234 | 2022-11-20T12:55:18Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2022-11-20T12:53:42Z | ---
license: openrail
---
```
git lfs install
git clone https://huggingface.co/Western1234/Modelop
``` |
Hudee/roberta-large-with-labeled-data-and-unlabeled-gab-reddit-semeval2023-task10-13300-labeled-sample | Hudee | 2022-11-20T12:42:37Z | 124 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-20T11:40:08Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-with-labeled-data-and-unlabeled-gab-reddit-semeval2023-task10-13300-labeled-sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-with-labeled-data-and-unlabeled-gab-reddit-semeval2023-task10-13300-labeled-sample
This model is a fine-tuned version of [HPL/roberta-large-unlabeled-gab-reddit-semeval2023-task10-57000sample](https://huggingface.co/HPL/roberta-large-unlabeled-gab-reddit-semeval2023-task10-57000sample) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9921 | 1.0 | 832 | 1.9311 |
| 1.9284 | 2.0 | 1664 | 1.8428 |
| 1.8741 | 3.0 | 2496 | 1.8364 |
| 1.816 | 4.0 | 3328 | 1.7889 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.10.3
|
hungngocphat01/Checkpoint_zaloAI_11_19_2022 | hungngocphat01 | 2022-11-20T11:59:05Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-20T11:53:29Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: Checkpoint_zaloAI_11_19_2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Checkpoint_zaloAI_11_19_2022
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3926
- eval_wer: 0.6743
- eval_runtime: 23.1283
- eval_samples_per_second: 39.865
- eval_steps_per_second: 5.016
- epoch: 25.07
- step: 26000
## Model description
More information needed
## Intended uses & limitations
More information needed
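In the absence of documented usage, an automatic-speech-recognition pipeline sketch would presumably work (an assumption, not from the card author):
```python
# Illustrative sketch; assumes the repo ships a compatible processor/tokenizer and ffmpeg is installed
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="hungngocphat01/Checkpoint_zaloAI_11_19_2022")
print(asr("sample.wav")["text"])  # "sample.wav" is a hypothetical 16 kHz mono audio file
```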
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
youa/CreatTitle | youa | 2022-11-20T11:54:27Z | 1 | 0 | null | [
"pytorch",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-11-07T13:56:12Z | ---
license: bigscience-bloom-rail-1.0
---
|
daidv1112/distilbert-base-uncased-finetuned-squad | daidv1112 | 2022-11-20T11:09:48Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-20T08:07:02Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2071 | 1.0 | 5533 | 1.1445 |
| 0.9549 | 2.0 | 11066 | 1.1221 |
| 0.7506 | 3.0 | 16599 | 1.1476 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
sd-concepts-library/bored-ape-textual-inversion | sd-concepts-library | 2022-11-20T09:07:30Z | 0 | 3 | null | [
"license:mit",
"region:us"
] | null | 2022-11-20T09:07:27Z | ---
license: mit
---
### bored_ape_textual_inversion on Stable Diffusion
This is the `<bored_ape>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
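As an alternative to the notebooks, here is a loading sketch with 🤗 diffusers (an assumption, not from the concept authors; it requires a diffusers version that provides `load_textual_inversion`, and the base model choice is also an assumption):
```python
# Illustrative sketch; the base model, fp16, and CUDA settings are assumptions
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/bored-ape-textual-inversion")
image = pipe("a portrait of <bored_ape> wearing a space suit").images[0]
image.save("bored_ape.png")
```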
Here is the new concept you will be able to use as an `object`:




|
OpenMatch/cocodr-base-msmarco-warmup | OpenMatch | 2022-11-20T08:26:41Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-11-20T08:20:01Z | ---
license: mit
---
This model has been pretrained on the BEIR corpus and then fine-tuned on MS MARCO with BM25 warm-up only, following the approach described in the paper **COCO-DR: Combating Distribution Shifts in Zero-Shot Dense Retrieval with Contrastive and Distributionally Robust Learning**. The associated GitHub repository is available at https://github.com/OpenMatch/COCO-DR.
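A minimal retrieval sketch with the plain Transformers API (an illustration, not the official OpenMatch usage; CLS pooling and dot-product scoring are assumptions commonly used for this family of dense retrievers):
```python
# Illustrative sketch; assumes CLS-token pooling and dot-product relevance scoring
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("OpenMatch/cocodr-base-msmarco-warmup")
model = AutoModel.from_pretrained("OpenMatch/cocodr-base-msmarco-warmup")

texts = ["what is dense retrieval?",
         "Dense retrieval encodes queries and passages into vectors and compares them."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**batch).last_hidden_state[:, 0]  # CLS embeddings
print((embeddings[0] @ embeddings[1]).item())  # higher = more relevant
```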
This model uses BERT-base as its backbone, with about 110M parameters. |
huggingtweets/iwriteok | huggingtweets | 2022-11-20T06:14:50Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: http://www.huggingtweets.com/iwriteok/1668924855688/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/598663964340301824/im3Wzn-o_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert Evans (The Only Robert Evans)</div>
<div style="text-align: center; font-size: 14px;">@iwriteok</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert Evans (The Only Robert Evans).
| Data | Robert Evans (The Only Robert Evans) |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 1269 |
| Short tweets | 142 |
| Tweets kept | 1807 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3hjcp2ib/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @iwriteok's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/wq4n95ia) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/wq4n95ia/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/iwriteok')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Jellywibble/dalio-bot-pretrain-finetune-restruct | Jellywibble | 2022-11-20T06:01:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-20T02:37:23Z | ---
tags:
- text-generation
library_name: transformers
---
## Model description
Dalio bot pre-trained on the Principles book and fine-tuned on handwritten examples.
Pre-trained model: Jellywibble/dalio-pretrained-book-bs4-seed1 (based on OPT-30B)
Fine-tuning dataset: Jellywibble/dalio_handwritten-conversations
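A usage sketch (an assumption, not documented by the author; the OPT-30B backbone is very large, so the `device_map="auto"` loading and the prompt format below are illustrative guesses):
```python
# Illustrative sketch only; the prompt format is a guess, and loading needs substantial GPU memory (accelerate required)
from transformers import pipeline

chat = pipeline("text-generation",
                model="Jellywibble/dalio-bot-pretrain-finetune-restruct",
                device_map="auto", torch_dtype="auto")
prompt = "User: How should I think about radical transparency?\nRay:"
print(chat(prompt, max_new_tokens=80, do_sample=True)[0]["generated_text"])
```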
## Model Parameters
- 4xA40 (eff. batch size = 4)
- base_mode_name Jellywibble/dalio-pretrained-book-bs4-seed1
- dataset_name Jellywibble/dalio_handwritten-conversations
- block size 500
- per_device_train_batch_size 1
- gradient_accumulation steps 1
- learning_rate 2e-6
- seed 28
- validation split percentage 20
- hellaswag_sample_size 100
## Metrics
- Hellaswag Perplexity: 29.9
- Eval acc: 57.1%
- Eval loss: 1.971
- wandb: https://wandb.ai/jellywibble/huggingface/runs/12lgyt20?workspace=user-jellywibble
- Checkpoint 10 selected and uploaded |
antonjeran/FAST-RIR | antonjeran | 2022-11-20T05:23:32Z | 0 | 1 | null | [
"arxiv:2110.04057",
"region:us"
] | null | 2022-11-20T05:22:22Z | # FAST-RIR: FAST NEURAL DIFFUSE ROOM IMPULSE RESPONSE GENERATOR (ICASSP 2022)
This is the official implementation of our neural-network-based fast diffuse room impulse response generator ([**FAST-RIR**](https://arxiv.org/pdf/2110.04057.pdf)) for generating room impulse responses (RIRs) for a given rectangular acoustic environment. Our model is inspired by [**StackGAN**](https://github.com/hanzhanggit/StackGAN-Pytorch) architecture. The audio examples and spectrograms of the generated RIRs are available [here](https://anton-jeran.github.io/FRIR/).
**NEWS: We have generalized our FAST-RIR to generate RIRs for any 3D indoor scene represented using meshes. The official code of our network [**MESH2IR**](https://anton-jeran.github.io/M2IR/) is available.**
## Requirements
```
Python3.6
Pytorch
python-dateutil
easydict
pandas
torchfile
gdown
librosa
soundfile
acoustics
wavefile
wavfile
pyyaml==5.4.1
pickle
```
## Embedding
Each normalized embedding is created as follows. If you are using our trained model, you may need to use the extra correction parameter (CRR).
```
Listener Position = LP
Source Position = SP
Room Dimension = RD
Reverberation Time = T60
Correction = CRR
CRR = 0.1 if 0.5<T60<0.6
CRR = 0.2 if T60>0.6
CRR = 0 otherwise
Embedding = ([LP_X,LP_Y,LP_Z,SP_X,SP_Y,SP_Z,RD_X,RD_Y,RD_Z,(T60+CRR)] /5) - 1
```
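For clarity, the normalization above can be implemented roughly as follows (a sketch based on the description, not code taken from this repository):
```python
# Sketch of the embedding construction described above (not from the FAST-RIR codebase)
def make_embedding(lp, sp, rd, t60):
    """lp, sp, rd are (x, y, z) positions/dimensions in meters; t60 is the reverberation time in seconds."""
    if 0.5 < t60 < 0.6:
        crr = 0.1
    elif t60 > 0.6:
        crr = 0.2
    else:
        crr = 0.0
    raw = list(lp) + list(sp) + list(rd) + [t60 + crr]
    return [v / 5 - 1 for v in raw]  # normalize each entry to roughly [-1, 1]

print(make_embedding(lp=(1, 2, 1.5), sp=(4, 3, 1.5), rd=(9, 7, 3), t60=0.55))
```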
## Generate RIRs using the trained model
Download the trained model using this command
```
source download_generate.sh
```
Create a list of normalized embeddings in pickle format. You can run the following command to generate an example embedding list:
```
python3 example1.py
```
Run the following command inside **code_new** to generate RIRs corresponding to the normalized embedding list. You can find the generated RIRs inside **code_new/Generated_RIRs**.
```
python3 main.py --cfg cfg/RIR_eval.yml --gpu 0
```
## Range
Our trained NN-DAS is capable of accurately generating RIRs within the following ranges.
```
Room Dimension X --> 8m to 11m
Room Dimension Y --> 6m to 8m
Room Dimension Z --> 2.5m to 3.5m
Listener Position --> Any position within the room
Speaker Position --> Any position within the room
Reverberation time --> 0.2s to 0.7s
```
## Training the Model
Run the following command to download the training dataset we created using a [**Diffuse Acoustic Simulator**](https://github.com/GAMMA-UMD/pygsound). You can also train the model using your own dataset.
```
source download_data.sh
```
Run the following command to train the model. You can specify which GPUs to use for training as an input argument. In this example, I am using 2 GPUs.
```
python3 main.py --cfg cfg/RIR_s1.yml --gpu 0,1
```
## Related Works
1) [**IR-GAN: Room Impulse Response Generator for Far-field Speech Recognition (INTERSPEECH2021)**](https://github.com/anton-jeran/IR-GAN)
2) [**TS-RIR: Translated synthetic room impulse responses for speech augmentation (IEEE ASRU 2021)**](https://github.com/GAMMA-UMD/TS-RIR)
## Citations
If you use our **FAST-RIR** for your research, please consider citing
```
@INPROCEEDINGS{9747846,
author={Ratnarajah, Anton and Zhang, Shi-Xiong and Yu, Meng and Tang, Zhenyu and Manocha, Dinesh and Yu, Dong},
booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Fast-Rir: Fast Neural Diffuse Room Impulse Response Generator},
year={2022},
volume={},
number={},
pages={571-575},
doi={10.1109/ICASSP43922.2022.9747846}}
```
Our work is inspired by
```
@inproceedings{han2017stackgan,
Author = {Han Zhang and Tao Xu and Hongsheng Li and Shaoting Zhang and Xiaogang Wang and Xiaolei Huang and Dimitris Metaxas},
Title = {StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks},
Year = {2017},
booktitle = {{ICCV}},
}
```
If you use our training dataset generated using [**Diffuse Acoustic Simulator**](https://github.com/GAMMA-UMD/pygsound) in your research, please consider citing
```
@inproceedings{9052932,
author={Z. {Tang} and L. {Chen} and B. {Wu} and D. {Yu} and D. {Manocha}},
booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Improving Reverberant Speech Training Using Diffuse Acoustic Simulation},
year={2020},
volume={},
number={},
pages={6969-6973},
}
```
|
TTian/deberta-classifier-feedback-1024 | TTian | 2022-11-20T04:58:44Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-20T03:18:22Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-classifier-feedback-1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-classifier-feedback-1024
This model is a fine-tuned version of [TTian/deberta-mlm-feedback-1024](https://huggingface.co/TTian/deberta-mlm-feedback-1024) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.038 | 0.04 | 10 | 0.8470 |
| 0.8858 | 0.08 | 20 | 0.7317 |
| 0.8166 | 0.13 | 30 | 0.8127 |
| 0.7791 | 0.17 | 40 | 0.8111 |
| 0.7977 | 0.21 | 50 | 0.7540 |
| 0.7815 | 0.25 | 60 | 0.7204 |
| 0.7467 | 0.3 | 70 | 0.7446 |
| 0.7525 | 0.34 | 80 | 0.7522 |
| 0.716 | 0.38 | 90 | 0.7542 |
| 0.7617 | 0.42 | 100 | 0.7095 |
| 0.7618 | 0.47 | 110 | 0.7147 |
| 0.7297 | 0.51 | 120 | 0.8648 |
| 0.7797 | 0.55 | 130 | 0.7150 |
| 0.7466 | 0.59 | 140 | 0.7360 |
| 0.745 | 0.64 | 150 | 0.6842 |
| 0.718 | 0.68 | 160 | 0.7408 |
| 0.7455 | 0.72 | 170 | 0.7029 |
| 0.7476 | 0.76 | 180 | 0.7106 |
| 0.695 | 0.81 | 190 | 0.6781 |
| 0.6603 | 0.85 | 200 | 0.7713 |
| 0.7763 | 0.89 | 210 | 0.7619 |
| 0.6858 | 0.93 | 220 | 0.7252 |
| 0.6567 | 0.97 | 230 | 0.7017 |
| 0.6529 | 1.02 | 240 | 0.7030 |
| 0.6752 | 1.06 | 250 | 0.6717 |
| 0.7078 | 1.1 | 260 | 0.6868 |
| 0.6428 | 1.14 | 270 | 0.6694 |
| 0.6173 | 1.19 | 280 | 0.7137 |
| 0.6753 | 1.23 | 290 | 0.7363 |
| 0.6326 | 1.27 | 300 | 0.6808 |
| 0.6241 | 1.31 | 310 | 0.6855 |
| 0.6717 | 1.36 | 320 | 0.6627 |
| 0.633 | 1.4 | 330 | 0.7079 |
| 0.6541 | 1.44 | 340 | 0.6475 |
| 0.5998 | 1.48 | 350 | 0.7008 |
| 0.7088 | 1.53 | 360 | 0.6558 |
| 0.6209 | 1.57 | 370 | 0.6536 |
| 0.6159 | 1.61 | 380 | 0.6805 |
| 0.6297 | 1.65 | 390 | 0.6617 |
| 0.6506 | 1.69 | 400 | 0.6459 |
| 0.6397 | 1.74 | 410 | 0.6450 |
| 0.6181 | 1.78 | 420 | 0.7158 |
| 0.6609 | 1.82 | 430 | 0.6336 |
| 0.6066 | 1.86 | 440 | 0.6232 |
| 0.6418 | 1.91 | 450 | 0.6272 |
| 0.6499 | 1.95 | 460 | 0.6268 |
| 0.6021 | 1.99 | 470 | 0.6431 |
| 0.5899 | 2.03 | 480 | 0.6395 |
| 0.5524 | 2.08 | 490 | 0.6278 |
| 0.5182 | 2.12 | 500 | 0.6690 |
| 0.5768 | 2.16 | 510 | 0.6400 |
| 0.5326 | 2.2 | 520 | 0.6386 |
| 0.5641 | 2.25 | 530 | 0.6759 |
| 0.5794 | 2.29 | 540 | 0.6483 |
| 0.5341 | 2.33 | 550 | 0.6273 |
| 0.5604 | 2.37 | 560 | 0.6393 |
| 0.529 | 2.42 | 570 | 0.6389 |
| 0.5433 | 2.46 | 580 | 0.6272 |
| 0.5574 | 2.5 | 590 | 0.6387 |
| 0.5279 | 2.54 | 600 | 0.6613 |
| 0.5066 | 2.58 | 610 | 0.6376 |
| 0.5235 | 2.63 | 620 | 0.6449 |
| 0.516 | 2.67 | 630 | 0.6285 |
| 0.5888 | 2.71 | 640 | 0.6391 |
| 0.5326 | 2.75 | 650 | 0.6226 |
| 0.5486 | 2.8 | 660 | 0.6373 |
| 0.5176 | 2.84 | 670 | 0.6272 |
| 0.5038 | 2.88 | 680 | 0.6235 |
| 0.5335 | 2.92 | 690 | 0.6266 |
| 0.557 | 2.97 | 700 | 0.6246 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Jellywibble/dalio-convo-finetune-restruct | Jellywibble | 2022-11-20T02:39:45Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-19T19:41:56Z | ---
tags:
- text-generation
library_name: transformers
---
## Model description
Based on Jellywibble/dalio-pretrained-book-bs4-seed1, which was pre-trained on the Dalio Principles book.
Fine-tuned on the handwritten conversations in Jellywibble/dalio_handwritten-conversations.
## Dataset Used
Jellywibble/dalio_handwritten-conversations
## Training Parameters
- Deepspeed on 4xA40 GPUs
- Ensuring EOS token `<s>` appears only at the beginning of each 'This is a conversation where Ray ...'
- Gradient Accumulation steps = 1 (Effective batch size of 4)
- 2e-6 Learning Rate, AdamW optimizer
- Block size of 1000
- Trained for 1 Epoch (additional epochs yielded worse Hellaswag result)
## Metrics
- Hellaswag Perplexity: 29.83
- Eval accuracy: 58.1%
- Eval loss: 1.883
- Checkpoint 9 uploaded
- Wandb run: https://wandb.ai/jellywibble/huggingface/runs/157eehn9?workspace=user-jellywibble |
Alred/t5-small-finetuned-summarization-cnn-ver2 | Alred | 2022-11-20T02:38:15Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | 2022-11-20T00:53:44Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-small-finetuned-summarization-cnn-ver2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-summarization-cnn-ver2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0084
- Bertscore-mean-precision: 0.8859
- Bertscore-mean-recall: 0.8592
- Bertscore-mean-f1: 0.8721
- Bertscore-median-precision: 0.8855
- Bertscore-median-recall: 0.8578
- Bertscore-median-f1: 0.8718
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bertscore-mean-precision | Bertscore-mean-recall | Bertscore-mean-f1 | Bertscore-median-precision | Bertscore-median-recall | Bertscore-median-f1 |
|:-------------:|:-----:|:----:|:---------------:|:------------------------:|:---------------------:|:-----------------:|:--------------------------:|:-----------------------:|:-------------------:|
| 2.0422 | 1.0 | 718 | 2.0139 | 0.8853 | 0.8589 | 0.8717 | 0.8857 | 0.8564 | 0.8715 |
| 1.9481 | 2.0 | 1436 | 2.0085 | 0.8863 | 0.8591 | 0.8723 | 0.8858 | 0.8577 | 0.8718 |
| 1.9231 | 3.0 | 2154 | 2.0084 | 0.8859 | 0.8592 | 0.8721 | 0.8855 | 0.8578 | 0.8718 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Jellywibble/dalio-principles-pretrain-v2 | Jellywibble | 2022-11-20T01:55:33Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-19T19:42:56Z | ---
tags:
- text-generation
library_name: transformers
---
## Model description
Based on the facebook/opt-30b model, fine-tuned on chunked Dalio responses.
## Dataset Used
Jellywibble/dalio-pretrain-book-dataset-v2
## Training Parameters
- Deepspeed on 4xA40 GPUs
- Ensuring EOS token `<s>` appears only at the beginning of each chunk
- Gradient Accumulation steps = 1 (Effective batch size of 4)
- 3e-6 Learning Rate, AdamW optimizer
- Block size of 800
- Trained for 1 Epoch (additional epochs yielded worse Hellaswag result)
## Metrics
- Hellaswag Perplexity: 30.2
- Eval accuracy: 49.8%
- Eval loss: 2.283
- Checkpoint 16 uploaded
- wandb run: https://wandb.ai/jellywibble/huggingface/runs/2vtr39rk?workspace=user-jellywibble |
monakth/bert-base-cased-finetuned-squadv2 | monakth | 2022-11-20T00:49:07Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-20T00:47:41Z | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-cased-finetuned-squadv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squadv
This model is a fine-tuned version of [monakth/bert-base-cased-finetuned-squad](https://huggingface.co/monakth/bert-base-cased-finetuned-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dvitel/h3 | dvitel | 2022-11-19T22:26:00Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"distigpt2",
"hearthstone",
"dataset:dvitel/hearthstone",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-19T01:53:19Z | ---
license: apache-2.0
tags:
- distigpt2
- hearthstone
metrics:
- bleu
- dvitel/codebleu
- exact_match
- chrf
datasets:
- dvitel/hearthstone
model-index:
- name: h0
results:
- task:
type: text-generation
name: Python Code Synthesis
dataset:
type: dvitel/hearthstone
name: HearthStone
split: test
metrics:
- type: exact_match
value: 0.30303030303030304
name: Exact Match
- type: bleu
value: 0.8850182403024257
name: BLEU
- type: dvitel/codebleu
value: 0.677852377992836
name: CodeBLEU
- type: chrf
value: 91.00848749530383
name: chrF
---
# h3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
[GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h3.py).
It achieves the following results on the evaluation set:
- Loss: 0.2782
- Exact Match: 0.2879
- Bleu: 0.9121
- Codebleu: 0.7482
- Ngram Match Score: 0.7504
- Weighted Ngram Match Score: 0.7583
- Syntax Match Score: 0.7673
- Dataflow Match Score: 0.7169
- Chrf: 93.1064
## Model description
DistilGPT2 fine-tuned on HearthStone dataset for 200 epochs. \
Related to [dvitel/h0](https://huggingface.co/dvitel/h0) but with preprocessing which anonymizes classes and function variables (Local renaming). \
[dvitel/h2](https://huggingface.co/dvitel/h2) implements global renaming where all names are removed. Global renaming showed worse results compared to local renaming.
Example of generated code with a mistake from the last eval iteration (EV L = gold label, EV P = prediction):
```python
EV L class CLS0(MinionCard):
def __init__(self):
super().__init__('Darkscale Healer', 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, battlecry=Battlecry(Heal(2), CharacterSelector()))
def create_minion(self, v0):
return Minion(4, 5)
EV P class CLS0(MinionCard):
def __init__(self):
super().__init__('Darkscale Healer', 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, battlecry=Battlecry(Heal(2), CharacterSelector())
def create_minion(self, v0):
return Minion(4, 5)
EV L class CLS0(WeaponCard):
def __init__(self):
super().__init__('Fiery War Axe', 2, CHARACTER_CLASS.WARRIOR, CARD_RARITY.FREE)
def create_weapon(self, v0):
return Weapon(3, 2)
EV P class CLS0(WeaponCard):
def __init__(self):
super().__init__('Fiery War Axe', 2, CHARACTER_CLASS.WARRIOR, CARD_RARITY.FREE,
def create_weapon(self, v0):
return Weapon(3, 2)
EV L class CLS0(MinionCard):
def __init__(self):
super().__init__('Frostwolf Warlord', 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, battlecry=Battlecry(Give([Buff(ChangeAttack(Count(MinionSelector()))), Buff(ChangeHealth(Count(MinionSelector())))]), SelfSelector()))
def create_minion(self, v0):
return Minion(4, 4)
EV P class CLS0(MinionCard):
def __init__(self):
super().__init__('Frostwolf Warlord', 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, battlecry=Battlecry(Give([Buff(ChangeAttack(Count(MinionSelector(),), Buff(ChangeHealth(Count(MinionSelector()))))]),), SelfSelector()))
def create_minion(self, v0):
return Minion(4, 4)
EV L class CLS0(SpellCard):
def __init__(self):
super().__init__('Hellfire', 4, CHARACTER_CLASS.WARLOCK, CARD_RARITY.FREE)
def use(self, v0, v1):
super().use(v0, v1)
v2 = copy.copy(v1.other_player.minions)
v2.extend(v1.current_player.minions)
v2.append(v1.other_player.hero)
v2.append(v1.current_player.hero)
for v3 in v2:
v3.damage(v0.effective_spell_damage(3), self)
EV P class CLS0(SpellCard):
def __init__(self):
super().__init__('Hellfire', 4, CHARACTER_CLASS.WARLOCK, CARD_RARITY.FREE,
def use(self, v0, v1):
super().use(v0, v1)
v2 = copy.copy(v1.other_player.minions)
v2.extend(v1.current_player.minions)
for.append(v1.other_player.hero)
for.append(v1.other_player.hero)
for v3 in v2:
.damage(v0.effective_spell_damage(3), self)
```
## Intended uses & limitations
HearthStone card code synthesis.
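A generation sketch (illustrative; the prompt below is a hypothetical card-description encoding, since the exact input format expected by the checkpoint is not documented in this card):
```python
# Illustrative sketch; the prompt string is a guess at the card-description format, not a documented API
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dvitel/h3")
model = AutoModelForCausalLM.from_pretrained("dvitel/h3")

prompt = "Darkscale Healer NAME_END 4 ATK_END 5 DEF_END 5 COST_END"  # hypothetical encoding
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```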
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Bleu | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
| 0.8612 | 11.94 | 1600 | 0.2725 | 0.0455 | 0.8477 | 0.6050 | 0.6229 | 0.6335 | 0.6203 | 0.5431 | 88.7010 |
| 0.175 | 23.88 | 3200 | 0.2311 | 0.0909 | 0.8739 | 0.6304 | 0.6566 | 0.6656 | 0.6484 | 0.5508 | 90.7364 |
| 0.1036 | 35.82 | 4800 | 0.2172 | 0.1818 | 0.8930 | 0.6905 | 0.6976 | 0.7062 | 0.7172 | 0.6409 | 91.9702 |
| 0.0695 | 47.76 | 6400 | 0.2233 | 0.2424 | 0.8944 | 0.7017 | 0.7148 | 0.7232 | 0.7187 | 0.6499 | 92.0340 |
| 0.0482 | 59.7 | 8000 | 0.2407 | 0.2879 | 0.9046 | 0.7301 | 0.7387 | 0.7456 | 0.7475 | 0.6885 | 92.6219 |
| 0.0352 | 71.64 | 9600 | 0.2407 | 0.2424 | 0.9074 | 0.7255 | 0.7371 | 0.7448 | 0.7482 | 0.6718 | 92.8281 |
| 0.0262 | 83.58 | 11200 | 0.2596 | 0.3030 | 0.9061 | 0.7445 | 0.7415 | 0.7500 | 0.7774 | 0.7091 | 92.6737 |
| 0.0213 | 95.52 | 12800 | 0.2589 | 0.2879 | 0.9061 | 0.7308 | 0.7409 | 0.7488 | 0.7464 | 0.6873 | 92.7814 |
| 0.0164 | 107.46 | 14400 | 0.2679 | 0.2879 | 0.9096 | 0.7452 | 0.7510 | 0.7592 | 0.7626 | 0.7079 | 92.9900 |
| 0.0131 | 119.4 | 16000 | 0.2660 | 0.2879 | 0.9096 | 0.7447 | 0.7480 | 0.7564 | 0.7666 | 0.7079 | 93.0122 |
| 0.0116 | 131.34 | 17600 | 0.2669 | 0.2727 | 0.9092 | 0.7463 | 0.7445 | 0.7529 | 0.7684 | 0.7194 | 92.9256 |
| 0.0093 | 143.28 | 19200 | 0.2678 | 0.2879 | 0.9113 | 0.7531 | 0.7496 | 0.7581 | 0.7709 | 0.7336 | 93.0406 |
| 0.0083 | 155.22 | 20800 | 0.2728 | 0.2879 | 0.9103 | 0.7407 | 0.7462 | 0.7540 | 0.7702 | 0.6924 | 92.9302 |
| 0.0077 | 167.16 | 22400 | 0.2774 | 0.2879 | 0.9103 | 0.7449 | 0.7449 | 0.7532 | 0.7659 | 0.7156 | 92.9742 |
| 0.0069 | 179.1 | 24000 | 0.2774 | 0.2879 | 0.9120 | 0.7396 | 0.7463 | 0.7539 | 0.7633 | 0.6950 | 93.1057 |
| 0.0069 | 191.04 | 25600 | 0.2782 | 0.2879 | 0.9121 | 0.7482 | 0.7504 | 0.7583 | 0.7673 | 0.7169 | 93.1064 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/0xirenedao-irenezhao_ | huggingtweets | 2022-11-19T21:23:50Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-19T21:21:10Z | ---
language: en
thumbnail: http://www.huggingtweets.com/0xirenedao-irenezhao_/1668893025991/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1423875044598456321/SVjwd6Bb_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1491000379764785159/ogwaV9mU_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Irene Zhao & IreneDAO</div>
<div style="text-align: center; font-size: 14px;">@0xirenedao-irenezhao_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Irene Zhao & IreneDAO.
| Data | Irene Zhao | IreneDAO |
| --- | --- | --- |
| Tweets downloaded | 1942 | 463 |
| Retweets | 223 | 120 |
| Short tweets | 417 | 71 |
| Tweets kept | 1302 | 272 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/31392i24/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @0xirenedao-irenezhao_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/m6jcuxe9) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/m6jcuxe9/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/0xirenedao-irenezhao_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cyburn/silvery_trait | cyburn | 2022-11-19T20:47:34Z | 0 | 0 | null | [
"license:unknown",
"region:us"
] | null | 2022-11-19T20:40:37Z | ---
license: unknown
---
# Silvery Trait finetuned style Model
Produced from publicly available pictures in landscape, portrait and square format.
Using words found in `prompt_words.md` within your prompt will produce better results. Other words can also be used, but they tend to produce "weaker" results. Combining this with the Aesthetic Gradient file provided in the `easthetic_embeddings` folder can greatly enhance the results.
## Model info
The model included was trained on "multi-resolution" images.
## Using the model
* common subject prompt tokens: `<whatever>, by asd artstyle` (see the sketch below)
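Below is a minimal text-to-image sketch. It assumes the fine-tuned weights are available in diffusers format under this repo id, which the card does not confirm; if only a `.ckpt` file is provided, convert it first or load it in your UI of choice.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the fine-tuned weights can be loaded directly from this repo id.
pipe = StableDiffusionPipeline.from_pretrained(
    "cyburn/silvery_trait", torch_dtype=torch.float16
).to("cuda")

# Prompts follow the pattern described above: "<whatever>, by asd artstyle".
image = pipe("a sheep, symmetry, by asd artstyle").images[0]
image.save("sheep_by_asd_artstyle.png")
```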
## Example prompts
`a sheep, symmetry, by asd artstyle`:
* without easthetic_embeddings
<img src="https://huggingface.co/cyburn/silvery_trait/resolve/main/1.jpg" alt="Picture." width="500"/>
* with easthetic_embeddings
<img src="https://huggingface.co/cyburn/silvery_trait/resolve/main/2.jpg" alt="Picture." width="500"/>
`crow, skull, symmetry, flower, feather, circle, by asd artstyle`
* without easthetic_embeddings
<img src="https://huggingface.co/cyburn/silvery_trait/resolve/main/3.jpg" alt="Picture." width="500"/>
* with easthetic_embeddings
<img src="https://huggingface.co/cyburn/silvery_trait/resolve/main/4.jpg" alt="Picture." width="500"/> |
Rajaram1996/Hubert_emotion | Rajaram1996 | 2022-11-19T20:10:41Z | 275 | 32 | transformers | [
"transformers",
"pytorch",
"hubert",
"speech",
"audio",
"HUBert",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-03-02T23:29:04Z | ---
inference: true
pipeline_tag: audio-classification
tags:
- speech
- audio
- HUBert
---
Working example of using the pretrained model to predict the emotion in a local audio file:
```python
def predict_emotion_hubert(audio_file):
    """ inspired by an example from https://github.com/m3hrdadfi/soxan """
    from audio_models import HubertForSpeechClassification
    from transformers import Wav2Vec2FeatureExtractor, AutoConfig
    import torch.nn.functional as F
    import torch
    import numpy as np
    from pydub import AudioSegment

    model = HubertForSpeechClassification.from_pretrained("Rajaram1996/Hubert_emotion")  # Downloading: 362M
    feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-base-ls960")
    sampling_rate = 16000  # defined by the model; must convert mp3 to this rate.
    config = AutoConfig.from_pretrained("Rajaram1996/Hubert_emotion")

    def speech_file_to_array(path, sampling_rate):
        # alternative with torchaudio:
        # speech_array, _sampling_rate = torchaudio.load(path)
        # resampler = torchaudio.transforms.Resample(_sampling_rate, sampling_rate)
        # speech = resampler(speech_array).squeeze().numpy()
        sound = AudioSegment.from_file(path)
        sound = sound.set_frame_rate(sampling_rate)
        sound_array = np.array(sound.get_array_of_samples())
        return sound_array

    sound_array = speech_file_to_array(audio_file, sampling_rate)
    inputs = feature_extractor(sound_array, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to("cpu").float() for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [{
        "emo": config.id2label[i],
        "score": round(score * 100, 1)}
        for i, score in enumerate(scores)
    ]
    # scores are rounded floats, so drop zero scores by numeric comparison
    return [row for row in sorted(outputs, key=lambda x: x["score"], reverse=True) if row["score"] != 0.0][:2]
```
```
result = predict_emotion_hubert("male-crying.mp3")
>>> result
[{'emo': 'male_sad', 'score': 91.0}, {'emo': 'male_fear', 'score': 4.8}]
```
|
kormilitzin/en_core_spancat_med7_trf | kormilitzin | 2022-11-19T18:54:29Z | 5 | 1 | spacy | [
"spacy",
"en",
"license:mit",
"region:us"
] | null | 2022-11-18T23:31:46Z | ---
tags:
- spacy
language:
- en
license: mit
model-index:
- name: en_core_spancat_med7_trf
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_core_spancat_med7_trf` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `transformer`, `spancat` |
| **Components** | `transformer`, `spancat` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
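A minimal usage sketch follows. It assumes the packaged pipeline has been installed locally so that `spacy.load` can resolve the name, and that the spancat predictions live under spaCy's default `sc` spans key (not confirmed by this card):
```python
import spacy

# Assumption: the pipeline package is installed so spacy.load resolves the name.
nlp = spacy.load("en_core_spancat_med7_trf")

doc = nlp("Patient to take 2 tablets of paracetamol 500 mg twice a day for one week.")

# Assumption: the spancat component writes to the default "sc" spans key;
# check nlp.get_pipe("spancat").key if your copy differs.
for span in doc.spans["sc"]:
    print(span.text, span.label_)
```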
### Label Scheme
<details>
<summary>View label scheme (8 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `DOSAGE`, `MEDINFO`, `DRUG`, `STRENGTH`, `FREQUENCY`, `ROUTE`, `DURATION`, `FORM` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `SPANS_SC_F` | 83.10 |
| `SPANS_SC_P` | 83.32 |
| `SPANS_SC_R` | 82.88 |
| `TRANSFORMER_LOSS` | 1176.39 |
| `SPANCAT_LOSS` | 36025.42 |
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
``` |
stephenhbarlow/biobert-base-cased-v1.2-multiclass-finetuned-PET2 | stephenhbarlow | 2022-11-19T18:53:28Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-19T16:45:29Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: biobert-base-cased-v1.2-multiclass-finetuned-PET2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-multiclass-finetuned-PET2
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8075
- Accuracy: 0.5673
- F1: 0.4253
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0175 | 1.0 | 14 | 0.8446 | 0.5625 | 0.4149 |
| 0.8634 | 2.0 | 28 | 0.8075 | 0.5673 | 0.4253 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
kormilitzin/en_core_med7_trf | kormilitzin | 2022-11-19T18:51:54Z | 375 | 12 | spacy | [
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"region:us"
] | token-classification | 2022-03-02T23:29:05Z | ---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_med7_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8822157434
- name: NER Recall
type: recall
value: 0.925382263
- name: NER F Score
type: f_score
value: 0.9032835821
---
| Feature | Description |
| --- | --- |
| **Name** | `en_core_med7_trf` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
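A minimal usage sketch, assuming the packaged pipeline has been installed locally so that `spacy.load` can resolve the name:
```python
import spacy

# Assumption: the pipeline package is installed so spacy.load resolves the name.
nlp = spacy.load("en_core_med7_trf")

doc = nlp("Patient to take 2 tablets of paracetamol 500 mg twice a day for one week.")

# Entities use the seven Med7 labels listed below (DRUG, DOSAGE, STRENGTH, ...).
for ent in doc.ents:
    print(ent.text, ent.label_)
```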
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 90.33 |
| `ENTS_P` | 88.22 |
| `ENTS_R` | 92.54 |
| `TRANSFORMER_LOSS` | 2502627.06 |
| `NER_LOSS` | 114576.77 |
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
``` |
yunseokj/ddpm-butterflies-128 | yunseokj | 2022-11-19T18:20:57Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-19T17:31:45Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
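Until the snippet above is filled in, a minimal sampling sketch using the standard 🤗 Diffusers `DDPMPipeline` API might look like this (unverified against this specific checkpoint):
```python
from diffusers import DDPMPipeline

# Load the unconditional diffusion pipeline from the Hub.
pipeline = DDPMPipeline.from_pretrained("yunseokj/ddpm-butterflies-128")

# Sample one 128x128 butterfly image and save it.
image = pipeline().images[0]
image.save("butterfly.png")
```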
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/yunseokj/ddpm-butterflies-128/tensorboard?#scalars)
|
Sebabrata/dof-dl-1 | Sebabrata | 2022-11-19T18:13:06Z | 49 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2022-11-19T14:52:29Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: dof-dl-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dof-dl-1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
huggingtweets/kalousekm | huggingtweets | 2022-11-19T18:12:47Z | 109 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-19T18:11:38Z | ---
language: en
thumbnail: http://www.huggingtweets.com/kalousekm/1668881563935/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/796289819571843072/yg0FHZZD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Miroslav Kalousek🇺🇦🇨🇿</div>
<div style="text-align: center; font-size: 14px;">@kalousekm</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Miroslav Kalousek🇺🇦🇨🇿.
| Data | Miroslav Kalousek🇺🇦🇨🇿 |
| --- | --- |
| Tweets downloaded | 3252 |
| Retweets | 69 |
| Short tweets | 192 |
| Tweets kept | 2991 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ox04g0p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @kalousekm's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jtp1suwc) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jtp1suwc/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/kalousekm')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sd-concepts-library/ghibli-face | sd-concepts-library | 2022-11-19T17:52:39Z | 0 | 4 | null | [
"license:mit",
"region:us"
] | null | 2022-11-19T17:52:35Z | ---
license: mit
---
### ghibli-face on Stable Diffusion
This is the `<ghibli-face>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
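Outside the notebooks, the learned embedding can also be attached to a diffusers pipeline. This is a sketch assuming the `load_textual_inversion` API and a Stable Diffusion v1.5 base checkpoint (the base model choice is an assumption, not part of this card):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: Stable Diffusion v1.5 as the base checkpoint for the learned token.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/ghibli-face")

image = pipe("a portrait in the style of <ghibli-face>").images[0]
image.save("ghibli_face.png")
```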
Here is the new concept you will be able to use as a `style`:





|
Froddan/nekrofaerie | Froddan | 2022-11-19T17:51:30Z | 0 | 2 | null | [
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] | text-to-image | 2022-11-19T15:06:11Z | ---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on art by [Nekro](https://www.artstation.com/nekro)
### Usage
Use by adding the keyword "nekrofaerie" to the prompt. The model was trained with the "faerie" classname, which can also be added to the prompt.
## Samples
The top 2 images are "pure", the rest could be mixed with other artists or modifiers. I hope it still gives you an idea of what kind of styles can be created with this model.
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/index.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/index2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/tmp04o1t4b_.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/tmp41igywg4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/tmpbkj8sqmh.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/tmphk34pib0.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/dog_octane.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/dog_octane2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/nekrofaerie/resolve/main/greg_mucha2.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
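A minimal text-to-image sketch, assuming the weights are published in diffusers format under this repo id (not confirmed by the card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Froddan/nekrofaerie", torch_dtype=torch.float16
).to("cuda")

# Trigger the style with the "nekrofaerie" keyword, optionally adding the "faerie" class name.
image = pipe("portrait of a nekrofaerie faerie, intricate, highly detailed").images[0]
image.save("nekrofaerie_sample.png")
```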
|
CSAPS/premodel | CSAPS | 2022-11-19T17:15:24Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:lst20",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-19T11:18:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- lst20
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: premodel
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: lst20
type: lst20
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.8533733110439704
- name: Recall
type: recall
value: 0.8653846153846154
- name: F1
type: f1
value: 0.8593369935367294
- name: Accuracy
type: accuracy
value: 0.9477067610537897
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# premodel
This model is a fine-tuned version of [Geotrend/bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased) on the lst20 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1761
- Precision: 0.8534
- Recall: 0.8654
- F1: 0.8593
- Accuracy: 0.9477
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Harrier/dqn-SpaceInvadersNoFrameskip-v4 | Harrier | 2022-11-19T15:53:13Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-19T15:52:33Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.50 +/- 186.61
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Harrier -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Harrier -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Harrier
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
vicky10011001/ddpm-butterflies-128 | vicky10011001 | 2022-11-19T15:36:49Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-19T12:14:52Z | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
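As a placeholder for the snippet above, sampling would typically look like this with the standard 🤗 Diffusers `DDPMPipeline` API (unverified against this specific checkpoint):
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("vicky10011001/ddpm-butterflies-128")
image = pipeline().images[0]  # one 128x128 sample
image.save("butterfly.png")
```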
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/vicky10011001/ddpm-butterflies-128/tensorboard?#scalars)
|
katboi01/rare-puppers | katboi01 | 2022-11-19T15:04:01Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-19T15:03:49Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.89552241563797
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu
 |
Sebabrata/dof-bnk-stmt-1 | Sebabrata | 2022-11-19T14:09:42Z | 47 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2022-11-19T05:00:32Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: dof-bnk-stmt-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dof-bnk-stmt-1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
LidoHon/q-FrozenLake-v1-4x4-noSlippery | LidoHon | 2022-11-19T12:41:09Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-11-19T12:21:52Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="LidoHon/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
NbAiLab/whisper | NbAiLab | 2022-11-19T10:46:08Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-11-07T11:29:35Z | ---
license: apache-2.0
---
# Whisper Finetuning
Whisper finetuning example script.
|
KubiakJakub01/finetuned-distilbert-base-uncased | KubiakJakub01 | 2022-11-19T10:45:52Z | 60 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-19T09:14:07Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: KubiakJakub01/finetuned-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# KubiakJakub01/finetuned-distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2767
- Validation Loss: 0.4326
- Train Accuracy: 0.8319
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.4680 | 0.4008 | 0.8378 | 0 |
| 0.3475 | 0.4017 | 0.8385 | 1 |
| 0.2767 | 0.4326 | 0.8319 | 2 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.9.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jonathanrichard13/pegasus-xsum-reddit-clean-4 | jonathanrichard13 | 2022-11-19T10:22:51Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:reddit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-19T07:21:12Z | ---
tags:
- generated_from_trainer
datasets:
- reddit
metrics:
- rouge
model-index:
- name: pegasus-xsum-reddit-clean-4
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: reddit
type: reddit
args: default
metrics:
- name: Rouge1
type: rouge
value: 27.7525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-xsum-reddit-clean-4
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the reddit dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7697
- Rouge1: 27.7525
- Rouge2: 7.9823
- Rougel: 20.9276
- Rougelsum: 22.6678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.0594 | 1.0 | 1906 | 2.8489 | 27.9837 | 8.0824 | 20.9135 | 22.7261 |
| 2.861 | 2.0 | 3812 | 2.7793 | 27.8298 | 8.048 | 20.8653 | 22.6781 |
| 2.7358 | 3.0 | 5718 | 2.7697 | 27.7525 | 7.9823 | 20.9276 | 22.6678 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
AndrewZeng/S2KG-base | AndrewZeng | 2022-11-19T09:34:25Z | 108 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2210.08873",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-19T09:15:53Z | # Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
We present our models for Track 2 of the SereTOD 2022 challenge, the first challenge on building semi-supervised and reinforced TOD systems on MobileCS, a large-scale real-world Chinese TOD dataset. We build a knowledge-grounded dialog model, S2KG, which takes the dialog history and the local KB as input and predicts the system response.
[This paper](https://arxiv.org/abs/2210.08873) has been accepted at the SereTOD 2022 Workshop, EMNLP 2022
## System Performance
Our system achieves first place in both the automatic evaluation and the human interaction, especially with higher BLEU (+7.64) and Success (+13.6%) than the second place. The evaluation results for both Track 1 and Track 2 can be accessed via [this link](https://docs.google.com/spreadsheets/d/1w28AKkG6Wjmuo15QlRlRyrnv859MT1ry0CHV8tFxY9o/edit#gid=0).
## S2KG for Generation
We release our S2KG-base model here. You can use this model for knowledge-grounded dialogue generation by following the instructions at [S2KG](https://github.com/Zeng-WH/S2KG); a minimal loading sketch is shown below.
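A loading sketch with the generic `transformers` seq2seq API; the exact serialization of dialog history and local KB is defined by the S2KG repository, so the input string below is only a placeholder:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AndrewZeng/S2KG-base")
model = AutoModelForSeq2SeqLM.from_pretrained("AndrewZeng/S2KG-base")

# Placeholder input: follow the S2KG repo for the real history + KB serialization.
prompt = "<dialog history and local knowledge base serialized as in the S2KG repo>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```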
|
mmiteva/distilbert-base-uncased-customized | mmiteva | 2022-11-19T08:46:43Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-11-18T09:58:38Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: mmiteva/distilbert-base-uncased-customized
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mmiteva/distilbert-base-uncased-customized
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3257
- Train End Logits Accuracy: 0.9017
- Train Start Logits Accuracy: 0.8747
- Validation Loss: 1.5040
- Validation End Logits Accuracy: 0.6988
- Validation Start Logits Accuracy: 0.6655
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 36885, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0773 | 0.7064 | 0.6669 | 1.1080 | 0.6973 | 0.6669 | 0 |
| 0.7660 | 0.7812 | 0.7433 | 1.1076 | 0.7093 | 0.6734 | 1 |
| 0.5586 | 0.8351 | 0.7988 | 1.2336 | 0.7039 | 0.6692 | 2 |
| 0.4165 | 0.8741 | 0.8434 | 1.3799 | 0.7034 | 0.6707 | 3 |
| 0.3257 | 0.9017 | 0.8747 | 1.5040 | 0.6988 | 0.6655 | 4 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.7.0
- Datasets 2.6.1
- Tokenizers 0.13.2
|
venetis/hf_train_output | venetis | 2022-11-19T08:26:06Z | 186 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:rock-glacier-dataset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-11-19T07:44:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- rock-glacier-dataset
metrics:
- accuracy
model-index:
- name: hf_train_output
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: rock-glacier-dataset
type: rock-glacier-dataset
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9258241758241759
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf_train_output
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the rock-glacier-dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3894
- Accuracy: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5619 | 0.55 | 50 | 0.5432 | 0.7692 |
| 0.4582 | 1.1 | 100 | 0.4435 | 0.8352 |
| 0.3548 | 1.65 | 150 | 0.3739 | 0.8599 |
| 0.217 | 2.2 | 200 | 0.2913 | 0.9093 |
| 0.1709 | 2.75 | 250 | 0.2619 | 0.9148 |
| 0.0919 | 3.3 | 300 | 0.2475 | 0.9148 |
| 0.0652 | 3.85 | 350 | 0.3275 | 0.8901 |
| 0.0495 | 4.4 | 400 | 0.2515 | 0.9093 |
| 0.0321 | 4.95 | 450 | 0.2878 | 0.9066 |
| 0.0247 | 5.49 | 500 | 0.2612 | 0.9148 |
| 0.017 | 6.04 | 550 | 0.2687 | 0.9176 |
| 0.0131 | 6.59 | 600 | 0.3062 | 0.9093 |
| 0.0113 | 7.14 | 650 | 0.2587 | 0.9231 |
| 0.0099 | 7.69 | 700 | 0.2815 | 0.9203 |
| 0.009 | 8.24 | 750 | 0.2675 | 0.9286 |
| 0.0084 | 8.79 | 800 | 0.2711 | 0.9286 |
| 0.0077 | 9.34 | 850 | 0.2663 | 0.9313 |
| 0.0073 | 9.89 | 900 | 0.3003 | 0.9258 |
| 0.0069 | 10.44 | 950 | 0.2758 | 0.9313 |
| 0.0064 | 10.99 | 1000 | 0.2999 | 0.9258 |
| 0.0061 | 11.54 | 1050 | 0.2931 | 0.9313 |
| 0.0057 | 12.09 | 1100 | 0.2989 | 0.9313 |
| 0.0056 | 12.64 | 1150 | 0.2974 | 0.9313 |
| 0.0053 | 13.19 | 1200 | 0.3099 | 0.9258 |
| 0.005 | 13.74 | 1250 | 0.3131 | 0.9313 |
| 0.0049 | 14.29 | 1300 | 0.3201 | 0.9258 |
| 0.0046 | 14.84 | 1350 | 0.3109 | 0.9313 |
| 0.0045 | 15.38 | 1400 | 0.3168 | 0.9313 |
| 0.0043 | 15.93 | 1450 | 0.3226 | 0.9231 |
| 0.0042 | 16.48 | 1500 | 0.3234 | 0.9231 |
| 0.0041 | 17.03 | 1550 | 0.3283 | 0.9258 |
| 0.0039 | 17.58 | 1600 | 0.3304 | 0.9258 |
| 0.0038 | 18.13 | 1650 | 0.3321 | 0.9231 |
| 0.0037 | 18.68 | 1700 | 0.3362 | 0.9231 |
| 0.0036 | 19.23 | 1750 | 0.3307 | 0.9286 |
| 0.0035 | 19.78 | 1800 | 0.3357 | 0.9231 |
| 0.0034 | 20.33 | 1850 | 0.3244 | 0.9313 |
| 0.0033 | 20.88 | 1900 | 0.3497 | 0.9231 |
| 0.0032 | 21.43 | 1950 | 0.3443 | 0.9231 |
| 0.0031 | 21.98 | 2000 | 0.3398 | 0.9286 |
| 0.003 | 22.53 | 2050 | 0.3388 | 0.9286 |
| 0.003 | 23.08 | 2100 | 0.3399 | 0.9286 |
| 0.0029 | 23.63 | 2150 | 0.3548 | 0.9231 |
| 0.0028 | 24.18 | 2200 | 0.3475 | 0.9286 |
| 0.0028 | 24.73 | 2250 | 0.3480 | 0.9286 |
| 0.0027 | 25.27 | 2300 | 0.3542 | 0.9231 |
| 0.0026 | 25.82 | 2350 | 0.3589 | 0.9231 |
| 0.0026 | 26.37 | 2400 | 0.3449 | 0.9286 |
| 0.0025 | 26.92 | 2450 | 0.3604 | 0.9231 |
| 0.0025 | 27.47 | 2500 | 0.3493 | 0.9286 |
| 0.0024 | 28.02 | 2550 | 0.3631 | 0.9258 |
| 0.0024 | 28.57 | 2600 | 0.3590 | 0.9258 |
| 0.0023 | 29.12 | 2650 | 0.3604 | 0.9258 |
| 0.0023 | 29.67 | 2700 | 0.3667 | 0.9258 |
| 0.0022 | 30.22 | 2750 | 0.3571 | 0.9286 |
| 0.0022 | 30.77 | 2800 | 0.3660 | 0.9258 |
| 0.0021 | 31.32 | 2850 | 0.3638 | 0.9286 |
| 0.0021 | 31.87 | 2900 | 0.3729 | 0.9258 |
| 0.0021 | 32.42 | 2950 | 0.3706 | 0.9258 |
| 0.002 | 32.97 | 3000 | 0.3669 | 0.9286 |
| 0.002 | 33.52 | 3050 | 0.3740 | 0.9258 |
| 0.002 | 34.07 | 3100 | 0.3693 | 0.9286 |
| 0.002 | 34.62 | 3150 | 0.3700 | 0.9286 |
| 0.0019 | 35.16 | 3200 | 0.3752 | 0.9258 |
| 0.0019 | 35.71 | 3250 | 0.3753 | 0.9258 |
| 0.0019 | 36.26 | 3300 | 0.3721 | 0.9286 |
| 0.0018 | 36.81 | 3350 | 0.3764 | 0.9258 |
| 0.0018 | 37.36 | 3400 | 0.3758 | 0.9258 |
| 0.0018 | 37.91 | 3450 | 0.3775 | 0.9258 |
| 0.0018 | 38.46 | 3500 | 0.3812 | 0.9258 |
| 0.0018 | 39.01 | 3550 | 0.3817 | 0.9258 |
| 0.0017 | 39.56 | 3600 | 0.3815 | 0.9258 |
| 0.0017 | 40.11 | 3650 | 0.3825 | 0.9258 |
| 0.0017 | 40.66 | 3700 | 0.3852 | 0.9258 |
| 0.0017 | 41.21 | 3750 | 0.3854 | 0.9258 |
| 0.0017 | 41.76 | 3800 | 0.3823 | 0.9258 |
| 0.0016 | 42.31 | 3850 | 0.3829 | 0.9258 |
| 0.0016 | 42.86 | 3900 | 0.3873 | 0.9258 |
| 0.0016 | 43.41 | 3950 | 0.3842 | 0.9258 |
| 0.0016 | 43.96 | 4000 | 0.3857 | 0.9258 |
| 0.0016 | 44.51 | 4050 | 0.3873 | 0.9258 |
| 0.0016 | 45.05 | 4100 | 0.3878 | 0.9258 |
| 0.0016 | 45.6 | 4150 | 0.3881 | 0.9258 |
| 0.0016 | 46.15 | 4200 | 0.3888 | 0.9258 |
| 0.0016 | 46.7 | 4250 | 0.3891 | 0.9258 |
| 0.0016 | 47.25 | 4300 | 0.3878 | 0.9258 |
| 0.0016 | 47.8 | 4350 | 0.3890 | 0.9258 |
| 0.0016 | 48.35 | 4400 | 0.3890 | 0.9258 |
| 0.0015 | 48.9 | 4450 | 0.3895 | 0.9258 |
| 0.0015 | 49.45 | 4500 | 0.3896 | 0.9258 |
| 0.0015 | 50.0 | 4550 | 0.3894 | 0.9258 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
robinhad/wav2vec2-xls-r-300m-crh | robinhad | 2022-11-19T08:15:07Z | 79 | 1 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"crh",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-19T08:03:35Z | ---
language:
- crh
license: mit
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-crh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-crh
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the custom Crimean Tatar dataset.
It achieves the following results on the evaluation set:
- Loss: 0.738475
- Wer: 0.4494
- Cer: 0.1254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 144
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Mohan515/t5-small-finetuned-medical | Mohan515 | 2022-11-19T07:56:25Z | 60 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-15T07:49:34Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Mohan515/t5-small-finetuned-medical
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Mohan515/t5-small-finetuned-medical
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8018
- Validation Loss: 0.5835
- Train Rouge1: 43.3783
- Train Rouge2: 35.1091
- Train Rougel: 41.6332
- Train Rougelsum: 42.5743
- Train Gen Len: 17.4718
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 0.8018 | 0.5835 | 43.3783 | 35.1091 | 41.6332 | 42.5743 | 17.4718 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AIGeorgeLi/distilbert-base-uncased-finetuned-emotion | AIGeorgeLi | 2022-11-19T07:43:40Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-10-10T02:35:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249666906714753
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2271
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8554 | 1.0 | 250 | 0.3419 | 0.898 | 0.8943 |
| 0.2627 | 2.0 | 500 | 0.2271 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
faisito/xlm-roberta-base-finetuned-panx-it | faisito | 2022-11-19T07:09:50Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-19T06:55:14Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: train
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8222222222222223
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2532
- F1: 0.8222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8114 | 1.0 | 70 | 0.3235 | 0.7548 |
| 0.2825 | 2.0 | 140 | 0.2749 | 0.7913 |
| 0.1932 | 3.0 | 210 | 0.2532 | 0.8222 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
coderSounak/finetuned_twitter_targeted_insult_LSTM | coderSounak | 2022-11-19T07:04:24Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-19T07:02:35Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_twitter_targeted_insult_LSTM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_twitter_targeted_insult_LSTM
This model is a fine-tuned version of [LYTinn/lstm-finetuning-sentiment-model-3000-samples](https://huggingface.co/LYTinn/lstm-finetuning-sentiment-model-3000-samples) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6314
- Accuracy: 0.6394
- F1: 0.6610
- Precision: 0.6262
- Recall: 0.6998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
coderSounak/finetuned_twitter_hate_speech_LSTM | coderSounak | 2022-11-19T07:02:00Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-19T06:59:33Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_twitter_hate_speech_LSTM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_twitter_hate_speech_LSTM
This model is a fine-tuned version of [LYTinn/lstm-finetuning-sentiment-model-3000-samples](https://huggingface.co/LYTinn/lstm-finetuning-sentiment-model-3000-samples) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5748
- Accuracy: 0.6944
- F1: 0.7170
- Precision: 0.6734
- Recall: 0.7667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|