| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
maxhilsdorf/distilbert-base-uncased-finetuned-emotion
|
maxhilsdorf
| 2022-04-03T12:08:54Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-01T15:32:54Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2991
- eval_accuracy: 0.91
- eval_f1: 0.9083
- eval_runtime: 3.258
- eval_samples_per_second: 613.873
- eval_steps_per_second: 9.822
- epoch: 1.0
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.14.1
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.10.3
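A minimal inference sketch (assuming the standard `text-classification` pipeline and the model ID above; the input sentence is only an illustration):
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="maxhilsdorf/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))  # returns a list of {'label': ..., 'score': ...} dicts
```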
|
Prinernian/distilbert-base-uncased-finetuned-emotion
|
Prinernian
| 2022-04-03T09:11:07Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-02T17:49:11Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2208
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8538 | 1.0 | 250 | 0.3317 | 0.904 | 0.8999 |
| 0.2599 | 2.0 | 500 | 0.2208 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.6
|
Asayaya/Upside_down_detector
|
Asayaya
| 2022-04-03T01:00:26Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-04-03T00:55:24Z
|
---
license: apache-2.0
---
```python
# -*- coding: utf-8 -*-
'''
Original file is located at
https://colab.research.google.com/drive/1HrNm5UMZr2Zjmze_HKW799p6LAHM8BTa
'''
from google.colab import files
files.upload()

!pip install kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download 'shaunthesheep/microsoft-catsvsdogs-dataset'
!unzip microsoft-catsvsdogs-dataset

import os
import shutil
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_dir = '/content/PetImages/Cat'

!mkdir train_folder
!mkdir test_folder

# creating class directories for the training set
path = '/content/train_folder/'
dir = 'upside_down'
dir2 = 'normal'
training_normal = os.path.join(path, dir2)
training_upside = os.path.join(path, dir)
os.mkdir(training_normal)
os.mkdir(training_upside)

# creating class directories for the test set
path = '/content/test_folder/'
dir = 'upside_down'
dir2 = 'normal'
training_normal = os.path.join(path, dir2)
training_upside = os.path.join(path, dir)
os.mkdir(training_normal)
os.mkdir(training_upside)

# copying only the cat images to my train folder
fnames = ['{}.jpg'.format(i) for i in range(2000)]
for fname in fnames:
    src = os.path.join('/content/PetImages/Cat', fname)
    dst = os.path.join('/content/train_folder/normal', fname)
    shutil.copyfile(src, dst)

# copying the next 2000 cat images to the test folder
fnames = ['{}.jpg'.format(i) for i in range(2000, 4000)]
for fname in fnames:
    src = os.path.join('/content/PetImages/Cat', fname)
    dst = os.path.join('/content/test_folder/normal', fname)
    shutil.copyfile(src, dst)

from scipy import ndimage, misc
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import imageio
import cv2

# inverting training images
outPath = '/content/train_folder/upside_down'
path = '/content/train_folder/normal'
# iterate through the names of contents of the folder
for image_path in os.listdir(path):
    # create the full input path and read the file
    input_path = os.path.join(path, image_path)
    image_to_rotate = plt.imread(input_path)
    # rotate the image
    rotated = np.flipud(image_to_rotate)
    # create full output path: 'example.jpg' becomes 'rotated_example.jpg',
    # then save the file to disk
    fullpath = os.path.join(outPath, 'rotated_' + image_path)
    imageio.imwrite(fullpath, rotated)

# inverting images for validation
outPath = '/content/test_folder/upside_down'
path = '/content/test_folder/normal'
# iterate through the names of contents of the folder
for image_path in os.listdir(path):
    # create the full input path and read the file
    input_path = os.path.join(path, image_path)
    image_to_rotate = plt.imread(input_path)
    # rotate the image
    rotated = np.flipud(image_to_rotate)
    # create full output path: 'example.jpg' becomes 'rotated_example.jpg',
    # then save the file to disk
    fullpath = os.path.join(outPath, 'rotated_' + image_path)
    imageio.imwrite(fullpath, rotated)

# visualize one of the flipped images
ima = '/content/train_folder/upside_down/rotated_1001.jpg'
image = plt.imread(ima)
plt.imshow(image)
plt.show()

train_dir = '/content/train_folder'
train_gen = ImageDataGenerator(rescale=1./255)
train_images = train_gen.flow_from_directory(
    train_dir,
    target_size=(250, 250),
    batch_size=50,
    class_mode='binary'
)
validation_dir = '/content/test_folder'
test_gen = ImageDataGenerator(rescale=1./255)
test_images = test_gen.flow_from_directory(
    validation_dir,
    target_size=(250, 250),
    batch_size=50,
    class_mode='binary'
)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(250,250,3)),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

from tensorflow.keras.optimizers import RMSprop
model.compile(optimizer=RMSprop(learning_rate=0.001), loss=tf.keras.losses.BinaryCrossentropy(), metrics=['acc'])
history = model.fit(train_images, validation_data=test_images, epochs=5, steps_per_epoch=40)
```
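A minimal inference sketch for the trained network above (it assumes the `model` object from the script; the image path is only an example and may need adjusting):
```python
import numpy as np
import tensorflow as tf

img_path = '/content/test_folder/upside_down/rotated_2000.jpg'  # example path, adjust as needed
img = tf.keras.preprocessing.image.load_img(img_path, target_size=(250, 250))
x = tf.keras.preprocessing.image.img_to_array(img) / 255.0  # same rescaling as the generators
x = np.expand_dims(x, axis=0)

prob = model.predict(x)[0][0]
# flow_from_directory assigns class indices alphabetically: 'normal' -> 0, 'upside_down' -> 1
print('upside down' if prob > 0.5 else 'normal', prob)
```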
|
vicl/canine-s-finetuned-cola
|
vicl
| 2022-04-02T23:01:51Z
| 3
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"canine",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-02T22:29:20Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: canine-s-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.059386434587477076
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-s-finetuned-cola
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6653
- Matthews Correlation: 0.0594
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6132 | 1.0 | 535 | 0.6289 | 0.0 |
| 0.6062 | 2.0 | 1070 | 0.6179 | 0.0 |
| 0.6122 | 3.0 | 1605 | 0.6160 | 0.0 |
| 0.5939 | 4.0 | 2140 | 0.6159 | 0.0 |
| 0.5721 | 5.0 | 2675 | 0.6653 | 0.0594 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
huggingtweets/stephencurry30
|
huggingtweets
| 2022-04-02T22:43:52Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z
|
---
language: en
thumbnail: http://www.huggingtweets.com/stephencurry30/1648939428122/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484233608793518081/tOID8aXq_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Stephen Curry</div>
<div style="text-align: center; font-size: 14px;">@stephencurry30</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Stephen Curry.
| Data | Stephen Curry |
| --- | --- |
| Tweets downloaded | 3190 |
| Retweets | 384 |
| Short tweets | 698 |
| Tweets kept | 2108 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2n8n86da/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @stephencurry30's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24mjh4p6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24mjh4p6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/stephencurry30')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
vicl/distilbert-base-uncased-finetuned-stsb
|
vicl
| 2022-04-02T22:24:08Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-02T22:08:33Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: distilbert-base-uncased-finetuned-stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8636303639161342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-stsb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5644
- Pearson: 0.8666
- Spearmanr: 0.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|
| No log | 1.0 | 360 | 0.6366 | 0.8537 | 0.8516 |
| 1.0464 | 2.0 | 720 | 0.6171 | 0.8632 | 0.8626 |
| 0.4002 | 3.0 | 1080 | 0.6082 | 0.8663 | 0.8643 |
| 0.4002 | 4.0 | 1440 | 0.5644 | 0.8666 | 0.8636 |
| 0.2479 | 5.0 | 1800 | 0.5780 | 0.8654 | 0.8624 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
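A minimal usage sketch (assuming the checkpoint keeps the standard single-logit regression head used for STS-B, so the raw logit approximates a 0–5 similarity score; the sentence pair is only an illustration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "vicl/distilbert-base-uncased-finetuned-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# STS-B takes a sentence pair and predicts a similarity score
inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)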
|
mp6kv/paper_feedback_intent
|
mp6kv
| 2022-04-02T21:42:38Z
| 7
| 1
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-02T04:37:40Z
|
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: paper_feedback_intent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paper_feedback_intent
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3621
- Accuracy: 0.9302
- Precision: 0.9307
- Recall: 0.9302
- F1: 0.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9174 | 1.0 | 11 | 0.7054 | 0.7907 | 0.7903 | 0.7907 | 0.7861 |
| 0.6917 | 2.0 | 22 | 0.4665 | 0.8140 | 0.8134 | 0.8140 | 0.8118 |
| 0.4276 | 3.0 | 33 | 0.3326 | 0.9070 | 0.9065 | 0.9070 | 0.9041 |
| 0.2656 | 4.0 | 44 | 0.3286 | 0.9070 | 0.9065 | 0.9070 | 0.9041 |
| 0.1611 | 5.0 | 55 | 0.3044 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.1025 | 6.0 | 66 | 0.3227 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0799 | 7.0 | 77 | 0.3216 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0761 | 8.0 | 88 | 0.3529 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0479 | 9.0 | 99 | 0.3605 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
| 0.0358 | 10.0 | 110 | 0.3621 | 0.9302 | 0.9307 | 0.9302 | 0.9297 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
vocab-transformers/distilbert-mlm-best
|
vocab-transformers
| 2022-04-02T21:18:53Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-02T21:18:48Z
|
distilbert-base-uncased trained for 680K steps (lowest loss on dev dataset) with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
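A minimal usage sketch (assuming the checkpoint exposes the usual DistilBERT masked-language-modeling head):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="vocab-transformers/distilbert-mlm-best")
print(unmasker("The capital of France is [MASK]."))
```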
|
vocab-transformers/distilbert-mlm-750k
|
vocab-transformers
| 2022-04-02T21:15:27Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-02T21:15:23Z
|
distilbert-base-uncased trained for 750K steps with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
|
vocab-transformers/distilbert-mlm-500k
|
vocab-transformers
| 2022-04-02T21:12:46Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-04-02T21:12:40Z
|
distilbert-base-uncased trained for 500K steps with batch size 64 on C4, MSMARCO, Wikipedia, S2ORC, News
|
celine98/canine-c-finetuned-sst2
|
celine98
| 2022-04-02T19:11:13Z
| 10
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"canine",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-24T14:40:39Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: canine-c-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8486238532110092
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-c-finetuned-sst2
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6025
- Accuracy: 0.8486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.9121586874695155e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3415 | 1.0 | 2105 | 0.4196 | 0.8280 |
| 0.2265 | 2.0 | 4210 | 0.4924 | 0.8211 |
| 0.1439 | 3.0 | 6315 | 0.5726 | 0.8337 |
| 0.0974 | 4.0 | 8420 | 0.6025 | 0.8486 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
somosnlp-hackathon-2022/paraphrase-spanish-distilroberta
|
somosnlp-hackathon-2022
| 2022-04-02T18:33:17Z
| 4,608
| 15
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"es",
"dataset:hackathon-pln-es/parallel-sentences",
"arxiv:2004.09813",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-30T17:58:23Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- es
datasets:
- hackathon-pln-es/parallel-sentences
widget:
- text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos."
- text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario."
- text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos."
- text: "Queda descartada la huelga aunque no cobremos lo que queramos."
---
# paraphrase-spanish-distilroberta
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
We follow a **teacher-student** transfer learning approach to train a `bertin-roberta-base-spanish` model using parallel EN-ES sentence pairs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Este es un ejemplo", "Cada oración es transformada"]
model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
embeddings = model.encode(sentences)
print(embeddings)
```
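For similarity use cases, the embeddings can then be compared directly, for example with cosine similarity. A small sketch using the `util.cos_sim` helper shipped with recent sentence-transformers versions (the example sentences are the ones above):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
emb1 = model.encode("Este es un ejemplo", convert_to_tensor=True)
emb2 = model.encode("Cada oración es transformada", convert_to_tensor=True)
print(util.cos_sim(emb1, emb2))  # cosine similarity between the two sentence embeddings
```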
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Este es un ejemplo", "Cada oración es transformada"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Evaluation Results
Similarity Evaluation on STS-2017.es-en.txt and STS-2017.es-es.txt (translated manually for evaluation purposes)
We measure the semantic textual similarity (STS) between sentence pairs in different languages:
### ES-ES
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8495 | 0.8579 | 0.8675 | 0.8474 | 0.8676 | 0.8478 | 0.8277 | 0.8258 |
### ES-EN
| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8344 | 0.8448 | 0.8279 | 0.8168 | 0.8282 | 0.8159 | 0.8083 | 0.8145 |
------
## Intended uses
Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
## Background
This model is a bilingual Spanish-English model trained according to instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion python package. We have used the strongest available pretrained English Bi-Encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as a teacher model, and the pretrained Spanish [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as the student model.
We developed this model during the
[Hackathon 2022 NLP - Spanish](https://somosnlp.org/hackathon),
organized by the hackathon-pln-es organization.
### Training data
We use a concatenation of multiple datasets with EN-ES sentence pairs.
The dataset used during training is available here: [parallel-sentences](https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences)
| Dataset |
|--------------------------------------------------------|
| AllNLI - ES (SNLI + MultiNLI)|
| EuroParl |
| JW300 |
| News Commentary |
| Open Subtitles |
| TED 2020 |
| Tatoeba |
| WikiMatrix |
## Authors
- [Anibal Pérez](https://huggingface.co/Anarpego),
- [Emilio Tomás Ariza](https://huggingface.co/medardodt),
- [Lautaro Gesuelli Pinto](https://huggingface.co/lautaro)
- [Mauricio Mazuecos](https://huggingface.co/mmazuecos)
|
shwetha/distilbert-base-uncased-finetuned-squad
|
shwetha
| 2022-04-02T17:11:25Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-02T14:51:59Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.9198 |
| No log | 2.0 | 4 | 5.7019 |
| No log | 3.0 | 6 | 5.5925 |
### Framework versions
- Transformers 4.11.0
- Pytorch 1.10.2+cpu
- Datasets 2.0.0
- Tokenizers 0.10.3
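A minimal usage sketch (assuming the standard `question-answering` pipeline; note the model was trained for only a few steps, so answers may be poor, and the question/context pair is only an illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="shwetha/distilbert-base-uncased-finetuned-squad")
result = qa(question="What do extractive QA models predict?",
            context="Extractive question answering models predict a start and end position in the context.")
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```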
|
JustAdvanceTechonology/medical_notes_mulitilingual
|
JustAdvanceTechonology
| 2022-04-02T16:37:24Z
| 4
| 0
|
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-02T11:06:15Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: JustAdvanceTechonology/medical_notes_mulitilingual
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JustAdvanceTechonology/medical_notes_mulitilingual
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.7536
- Validation Loss: 6.1397
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 1209, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.2097 | 6.1454 | 0 |
| 8.7069 | 6.1880 | 1 |
| 8.7350 | 6.1834 | 2 |
| 8.7021 | 6.1364 | 3 |
| 8.7385 | 6.2117 | 4 |
| 8.7318 | 6.2004 | 5 |
| 8.7487 | 6.1531 | 6 |
| 8.7536 | 6.1397 | 7 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.5.0
- Datasets 2.0.0
- Tokenizers 0.10.1
|
jaygala24/finetuned-vit-base-patch16-224-upside-down-detector
|
jaygala24
| 2022-04-02T15:24:57Z
| 79
| 0
|
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"accelerator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-04-02T08:42:45Z
|
---
license: apache-2.0
tags:
- accelerator
metrics:
- accuracy
model-index:
- name: finetuned-vit-base-patch16-224-upside-down-detector
results: []
widget:
- src: https://huggingface.co/jaygala24/finetuned-vit-base-patch16-224-upside-down-detector/resolve/main/original.jpg
example_title: original
- src: https://huggingface.co/jaygala24/finetuned-vit-base-patch16-224-upside-down-detector/resolve/main/upside_down.jpg
example_title: upside_down
---
# finetuned-vit-base-patch16-224-upside-down-detector
This model is a fine-tuned version of [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the custom image orientation dataset adapted from the [beans](https://huggingface.co/datasets/beans) dataset. It achieves the following results on the evaluation set:
- Accuracy: 0.8947
## Training and evaluation data
The custom dataset for image orientation adapted from [beans](https://huggingface.co/datasets/beans) dataset contains a total of 2,590 image samples with 1,295 original and 1,295 upside down. The model was fine-tuned on the train subset and evaluated on validation and test subsets. The dataset splits are listed below:
| Split | # examples |
|:----------:|:----------:|
| train | 2068 |
| validation | 133 |
| test | 128 |
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-04
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32
- num_epochs: 5
### Training results
| Epoch | Accuracy |
|:----------:|:----------:|
| 0 | 0.8609 |
| 1 | 0.8835 |
| 2 | 0.8571 |
| 3 | 0.8941 |
| 4 | 0.8941 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu111
- Pytorch/XLA 1.9
- Datasets 2.0.0
- Tokenizers 0.12.0
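A minimal inference sketch (assuming the standard `image-classification` pipeline; the example image is the one used in the widget above):
```python
from transformers import pipeline

detector = pipeline("image-classification",
                    model="jaygala24/finetuned-vit-base-patch16-224-upside-down-detector")
# The pipeline accepts a URL, a local file path, or a PIL.Image
url = "https://huggingface.co/jaygala24/finetuned-vit-base-patch16-224-upside-down-detector/resolve/main/original.jpg"
print(detector(url))
```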
|
notexist/ttt2
|
notexist
| 2022-04-02T15:09:26Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-02T14:54:08Z
|
---
license: apache-2.0
---
|
yaswanth/distilbert-base-uncased_fakenews_identification
|
yaswanth
| 2022-04-02T13:18:07Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T06:10:15Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased_fakenews_identification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fakenews_identification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the following dataset: https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
It achieves the following results on the evaluation set:
- Loss: 0.0059
- Accuracy: 0.999
- F1: 0.9990
## Label Description
- LABEL_0: Fake News
- LABEL_1: Real News
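A minimal usage sketch showing how these labels come back from the standard `text-classification` pipeline (the input sentence is only an illustration):
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="yaswanth/distilbert-base-uncased_fakenews_identification")
print(clf("Scientists confirm the new vaccine passed its phase 3 trial."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] where LABEL_0 = fake news, LABEL_1 = real news
```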
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0014 | 1.0 | 1000 | 0.0208 | 0.9965 | 0.9965 |
| 0.0006 | 2.0 | 2000 | 0.0041 | 0.9994 | 0.9994 |
| 0.0006 | 3.0 | 3000 | 0.0044 | 0.9992 | 0.9993 |
| 0.0 | 4.0 | 4000 | 0.0059 | 0.999 | 0.9990 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Sam4669/distilbert-base-uncased-finetuned-emotion
|
Sam4669
| 2022-04-02T13:16:26Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-02T13:00:45Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9232158277556175
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2317
- Accuracy: 0.923
- F1: 0.9232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8528 | 1.0 | 250 | 0.3332 | 0.897 | 0.8929 |
| 0.26 | 2.0 | 500 | 0.2317 | 0.923 | 0.9232 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
anuragshas/en-hi-transliteration
|
anuragshas
| 2022-04-02T12:24:03Z
| 0
| 1
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-04-02T11:50:28Z
|
---
license: apache-2.0
---
## Dataset
[NEWS2018 DATASET_04, Task ID: M-EnHi](http://workshop.colips.org/news2018/dataset.html)
## Notebooks
- `xmltodict.ipynb` contains the code to convert the `xml` files to `json` for training
- `training_script.ipynb` contains the code for training and inference. It is a modified version of https://github.com/AI4Bharat/IndianNLP-Transliteration/blob/master/NoteBooks/Xlit_TrainingSetup_condensed.ipynb
## Predictions
`pred_test.json` contains top-10 predictions on the validation set of the dataset
## Evaluation Scores on validation set
TOP 10 SCORES FOR 1000 SAMPLES:
|Metrics | Score |
|-----------|-----------|
|ACC | 0.703000|
|Mean F-score| 0.949289|
|MRR | 0.486549|
|MAP_ref | 0.381000|
TOP 5 SCORES FOR 1000 SAMPLES:
|Metrics | Score |
|-----------|-----------|
|ACC |0.621000|
|Mean F-score |0.937985|
|MRR |0.475033|
|MAP_ref |0.381000|
TOP 3 SCORES FOR 1000 SAMPLES:
|Metrics | Score |
|-----------|-----------|
|ACC |0.560000|
|Mean F-score |0.927025|
|MRR |0.461333|
|MAP_ref |0.381000|
TOP 2 SCORES FOR 1000 SAMPLES:
|Metrics | Score |
|-----------|-----------|
|ACC | 0.502000|
|Mean F-score | 0.913697|
|MRR | 0.442000|
|MAP_ref | 0.381000|
TOP 1 SCORES FOR 1000 SAMPLES:
|Metrics | Score |
|-----------|-----------|
|ACC | 0.382000|
|Mean F-score | 0.881272|
|MRR | 0.382000|
|MAP_ref | 0.380500|
|
huggingtweets/sanjabh
|
huggingtweets
| 2022-04-02T12:14:56Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-02T12:13:25Z
|
---
language: en
thumbnail: http://www.huggingtweets.com/sanjabh/1648901691950/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484080880222351360/FtDB2j4B_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lucid Dreams</div>
<div style="text-align: center; font-size: 14px;">@sanjabh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lucid Dreams.
| Data | Lucid Dreams |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 373 |
| Short tweets | 137 |
| Tweets kept | 2740 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2s7tzf32/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sanjabh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1cl1cjnx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1cl1cjnx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sanjabh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DMetaSoul/sbert-chinese-general-v2-distill
|
DMetaSoul
| 2022-04-02T09:58:33Z
| 15
| 6
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"semantic-search",
"chinese",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-04-02T09:58:18Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- semantic-search
- chinese
---
# DMetaSoul/sbert-chinese-general-v2-distill
This model is a distilled version (only 4 BERT layers) of our earlier [open-source general-purpose semantic matching model](https://huggingface.co/DMetaSoul/sbert-chinese-general-v2). It targets **general semantic matching** scenarios and, in practice, **generalizes better across tasks and encodes faster**.
Serving a large offline-trained model directly for online inference places harsh demands on compute resources and makes it hard to meet the latency, throughput and other performance requirements of production environments, so we use distillation to make the model lightweight. After distilling the 12-layer BERT down to 4 layers, the number of parameters shrinks to 44% of the original, latency is roughly halved, throughput roughly doubles, and accuracy drops by about 6% (see the evaluation section below for details).
# Usage
## 1. Sentence-Transformers
To use this model with the [sentence-transformers](https://www.SBERT.net) framework, first install it:
```
pip install -U sentence-transformers
```
Then load the model and extract sentence embedding vectors with the following code:
```python
from sentence_transformers import SentenceTransformer
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v2-distill')
embeddings = model.encode(sentences)
print(embeddings)
```
## 2. HuggingFace Transformers
If you do not want to use [sentence-transformers](https://www.SBERT.net), you can also load the model and extract sentence vectors with HuggingFace Transformers:
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["我的儿子!他猛然间喊道,我的儿子在哪儿?", "我的儿子呢!他突然喊道,我的儿子在哪里?"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill')
model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v2-distill')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation
Here we mainly compare against the corresponding teacher model before distillation:
*Performance:*
| | Teacher | Student | Gap |
| ---------- | --------------------- | ------------------- | ----- |
| Model | BERT-12-layers (102M) | BERT-4-layers (45M) | 0.44x |
| Cost | 23s | 12s | -47% |
| Latency | 38ms | 20ms | -47% |
| Throughput | 418 sentence/s | 791 sentence/s | 1.9x |
*Accuracy:*
| | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | **Avg** |
| -------------- | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | ------- |
| **Teacher** | 77.19% | 72.59% | 36.79% | 76.91% | 49.62% | 16.24% | 63.15% | 56.07% |
| **Student** | 76.49% | 73.33% | 26.46% | 64.26% | 46.02% | 11.83% | 52.45% | 50.12% |
| **Gap** (abs.) | - | - | - | - | - | - | - | -5.95% |
*Measured on 10,000 test samples, with a V100 GPU, batch_size=16, max_seq_len=256.*
## Citing & Authors
E-mail: [email protected]
|
Chikashi/t5-small-finetuned-wikihow_3epoch
|
Chikashi
| 2022-04-02T07:42:15Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikihow",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-01T21:20:33Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikihow
metrics:
- rouge
model-index:
- name: t5-small-finetuned-wikihow_3epoch
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wikihow
type: wikihow
args: all
metrics:
- name: Rouge1
type: rouge
value: 25.5784
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikihow_3epoch
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikihow dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5163
- Rouge1: 25.5784
- Rouge2: 8.9929
- Rougel: 21.5345
- Rougelsum: 24.9382
- Gen Len: 18.384
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.9421 | 0.25 | 5000 | 2.6545 | 23.2336 | 7.5502 | 19.5899 | 22.5521 | 18.4076 |
| 2.8411 | 0.51 | 10000 | 2.6103 | 24.3524 | 8.2068 | 20.5238 | 23.6679 | 18.2606 |
| 2.7983 | 0.76 | 15000 | 2.5836 | 24.8169 | 8.4826 | 20.8765 | 24.1686 | 18.3211 |
| 2.7743 | 1.02 | 20000 | 2.5627 | 24.9904 | 8.5625 | 21.0344 | 24.3416 | 18.3786 |
| 2.7452 | 1.27 | 25000 | 2.5508 | 25.1497 | 8.6872 | 21.152 | 24.4751 | 18.3524 |
| 2.7353 | 1.53 | 30000 | 2.5384 | 25.2909 | 8.7408 | 21.2344 | 24.629 | 18.4453 |
| 2.7261 | 1.78 | 35000 | 2.5322 | 25.3748 | 8.7802 | 21.312 | 24.7191 | 18.3754 |
| 2.7266 | 2.03 | 40000 | 2.5265 | 25.4095 | 8.8915 | 21.3871 | 24.7685 | 18.4013 |
| 2.706 | 2.29 | 45000 | 2.5211 | 25.4372 | 8.8926 | 21.4124 | 24.7902 | 18.3776 |
| 2.7073 | 2.54 | 50000 | 2.5176 | 25.4925 | 8.9668 | 21.5103 | 24.8608 | 18.4303 |
| 2.703 | 2.8 | 55000 | 2.5163 | 25.5784 | 8.9929 | 21.5345 | 24.9382 | 18.384 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
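A minimal usage sketch (assuming the standard `summarization` pipeline; the input article and generation settings are only an illustration):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Chikashi/t5-small-finetuned-wikihow_3epoch")
article = ("Choose a quiet place to study and remove distractions such as your phone. "
           "Make a schedule, review your notes after each class, and take short breaks every hour "
           "so you can stay focused for longer sessions.")
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```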
|
huggingtweets/percybotshelley
|
huggingtweets
| 2022-04-02T05:27:46Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-02T05:27:39Z
|
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/780200431859269633/kXZwDd_Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Romantic Poetry Bot</div>
<div style="text-align: center; font-size: 14px;">@percybotshelley</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Romantic Poetry Bot.
| Data | Romantic Poetry Bot |
| --- | --- |
| Tweets downloaded | 3205 |
| Retweets | 0 |
| Short tweets | 20 |
| Tweets kept | 3185 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1bj4pakr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @percybotshelley's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yfs8v92) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yfs8v92/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/percybotshelley')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
satoshiz01/Flipped_CIFAR10_vision
|
satoshiz01
| 2022-04-02T05:09:07Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-04-02T03:30:55Z
|
**Google Colab Notebook link:**
https://colab.research.google.com/drive/1iA8nvb93VLcrDfIt17AOIHnkVdLSNcW_?usp=sharing
This repo contains files for defining and creating a simple convolutional network for classifying/detecting the orientation of CIFAR-10 images (either normal orientation or flipped upside down/180 degrees).
The following files are in this repo:
- Coding_Challenge_for_Fatima_Fellowship.ipynb -- a copy of the Google Colab notebook with the code/output/write-up
- best_model.pth -- dictionary of best model stats/weights found during training
- cifar10flip_trn.pt -- saved training dataset of ~50% flipped CIFAR10 images
- cifar10flip_tst.pt -- saved test dataset of ~50% flipped CIFAR10 images
- image_examples.png -- an array of example images from the flipped CIFAR10 dataset
- write-up -- write-up of data processing, model results, and potential improvements (also in the Google Colab notebook)
- wrong_predictions.zip -- a zip file of PNG images that were incorrectly classified by the model (each file name provides information on the image's prediction, true label, and its class)
|
nikhil6041/wav2vec2-commonvoice-hindi
|
nikhil6041
| 2022-04-02T04:48:26Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T04:27:46Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-commonvoice-hindi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-commonvoice-hindi
This model is a fine-tuned version of [theainerd/Wav2Vec2-large-xlsr-hindi](https://huggingface.co/theainerd/Wav2Vec2-large-xlsr-hindi) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9825
- Wer: 0.6763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 20.0 | 100 | 0.8801 | 0.6754 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
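A minimal usage sketch (assuming the standard `automatic-speech-recognition` pipeline; the audio file name is a placeholder, and decoding local files requires ffmpeg):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="nikhil6041/wav2vec2-commonvoice-hindi")
print(asr("sample_hindi_clip.wav"))  # 16 kHz mono audio works best for wav2vec2 models
```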
|
BigSalmon/Points4
|
BigSalmon
| 2022-04-02T03:04:08Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-02T02:57:31Z
|
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Points4")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/Points4")
```
```
- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text: the re-launch of moviepass is set to transpire this summer, ( rescued at the hands of / under the stewardship of / spearheaded by ) its founding father, stacy spikes.
***
- middle schools do not have recess
- should get back to doing it
- amazing for communication
- and getting kids to move around
text: a casualty of the education reform craze, recess has been excised from middle schools. this is tragic, for it is instrumental in honing children's communication skills and encouraging physical activity.
***
-
```
It should also be able to do everything that this model can: https://huggingface.co/BigSalmon/InformalToFormalLincoln27
In short, it turns keywords or bullet points into a full sentence (or sentences), following the prompt format above; a short generation sketch follows.
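A short generation sketch continuing the snippet above (it reuses the `tokenizer` and `model` objects already loaded; the sampling settings are only an example):
```python
prompt = """- moviepass to return
- this summer
- swooped up by
- original co-founder stacy spikes
text:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```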
|
youssefadarrab/TP_NLP_SNLI_Adarrab_Baziz_Malige
|
youssefadarrab
| 2022-04-02T00:40:26Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-01T21:11:05Z
|
# CentraleSupelec - Natural language processing
# Practical session n°7
## Natural Language Inferencing (NLI):
(NLI) is a classical NLP (Natural Language Processing) problem that involves taking two sentences (the premise and the hypothesis) and deciding how they are related (whether the premise *entails* the hypothesis, *contradicts* it, or *neither*).
Ex:
| Premise | Label | Hypothesis |
| --- | --- | --- |
| A man inspects the uniform of a figure in some East Asian country. | contradiction | The man is sleeping. |
| An older and younger man smiling. | neutral | Two men are smiling and laughing at the cats playing on the floor. |
| A soccer game with multiple males playing. | entailment | Some men are playing a sport. |
### Stanford NLI (SNLI) corpus
In this lab work, I propose to use the Stanford NLI (SNLI) corpus (https://nlp.stanford.edu/projects/snli/), available in the *Datasets* library by Hugging Face.
```python
from datasets import load_dataset

snli = load_dataset("snli")
# Removing sentence pairs with no label (-1)
snli = snli.filter(lambda example: example['label'] != -1)
```
## Quick summary of the model
This is the model by: Youssef Adarrab, Othmane Baziz and Alain Malige.
- First, we import the corpus and do some visualization
- Second, we apply DistilBERT for sequence classification
- We illustrate the code used for training; to obtain better results, one should run the training for more epochs (a minimal sketch is shown below)
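Below is a minimal, hedged sketch of the classification step described above (the tokenization choices and hyperparameters are illustrative, not the exact ones used in the lab):
```python
# Tokenize premise/hypothesis pairs and fine-tune DistilBERT on SNLI (3 labels).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

snli = load_dataset("snli").filter(lambda example: example["label"] != -1)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

encoded = snli.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)

args = TrainingArguments("snli-distilbert", per_device_train_batch_size=32,
                         num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"], tokenizer=tokenizer)
trainer.train()
```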
|
JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en
|
JustAdvanceTechonology
| 2022-04-02T00:07:29Z
| 4
| 0
|
transformers
|
[
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-31T10:16:30Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JustAdvanceTechonology/medical_research_dataset_marian-finetuned-kde4-fr-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6429
- Validation Loss: 0.8071
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 17733, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6423 | 0.8071 | 0 |
| 0.6424 | 0.8071 | 1 |
| 0.6429 | 0.8071 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.5.0
- Datasets 2.0.0
- Tokenizers 0.10.1
|
lgris/bp500-xlsr
|
lgris
| 2022-04-01T20:33:47Z
| 15
| 1
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"hf-asr-leaderboard",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"arxiv:2012.03411",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z
|
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- hf-asr-leaderboard
model-index:
- name: bp400-xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice
type: common_voice
args: pt
metrics:
- name: Test WER
type: wer
value: 13.6
license: apache-2.0
---
# bp500-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus;
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): is a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt);
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control;
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers;
- [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. We also made test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 93.9h | -- | 5.4h |
| Common Voice | 37.6h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h |
| SID | 5.0h | -- | 1.0h |
| VoxForge | 2.8h | -- | 0.1h |
| Total | 437.2h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/file/d/1J8aR1ltDLQFe-dVrGuyxoRm2uyJjCWgf/view?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_500 (demonstration below) | 0.051 | 0.136 | 0.032 | 0.118 | 0.095 | 0.248 | 0.082 | 0.108 |
| bp\_500 + 4-gram (demonstration below) | 0.032 | 0.097 | 0.022 | 0.114 | 0.125 | 0.246 | 0.065 | 0.100 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|não há um departamento de mediadores independente das federações e das agremiações|não há um **dearamento** de mediadores independente das federações e das **agrebiações**|
|mas que bodega|**masque** bodega|
|a cortina abriu o show começou|a cortina abriu o **chô** começou|
|por sorte havia uma passadeira|**busote avinhoa** **passadeiro**|
|estou maravilhada está tudo pronto|**stou** estou maravilhada está tudo pronto|
## Demonstration
```python
MODEL_NAME = "lgris/bp500-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
```python
%cd bp_dataset
```
/content/bp_dataset
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05159097808687998
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.13659981509705973
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.03196969696969697
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.1178481066463896
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.09544588416964224
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24868046340420813
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.08246076839826841
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
### Cetuc
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.03222801788375573
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09713866021093655
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.022310606060606065
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.11408590958696524
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.12502797252979136
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24603179403904793
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.06542207792207791
|
lgris/bp400-xlsr
|
lgris
| 2022-04-01T20:31:02Z
| 91
| 3
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"PyTorch",
"hf-asr-leaderboard",
"dataset:common_voice",
"dataset:mls",
"dataset:cetuc",
"dataset:lapsbm",
"dataset:voxforge",
"dataset:tedx",
"dataset:sid",
"arxiv:2107.11414",
"arxiv:2012.03411",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z
|
---
language: pt
datasets:
- common_voice
- mls
- cetuc
- lapsbm
- voxforge
- tedx
- sid
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- speech
- PyTorch
- hf-asr-leaderboard
model-index:
- name: bp400-xlsr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER
type: wer
value: 14.0
license: apache-2.0
---
# bp400-xlsr: Wav2vec 2.0 with Brazilian Portuguese (BP) Dataset
**Paper:** https://arxiv.org/abs/2107.11414
This is a demonstration of a fine-tuned Wav2vec model for Brazilian Portuguese using the following datasets:
- [CETUC](http://www02.smt.ufrj.br/~igor.quintanilha/alcaim.tar.gz): contains approximately 145 hours of Brazilian Portuguese speech distributed among 50 male and 50 female speakers, each pronouncing approximately 1,000 phonetically balanced sentences selected from the [CETEN-Folha](https://www.linguateca.pt/cetenfolha/) corpus.
- [Common Voice 7.0](https://commonvoice.mozilla.org/pt): is a project proposed by the Mozilla Foundation with the goal of creating a wide, open dataset in different languages. In this project, volunteers donate and validate speech using the [official site](https://commonvoice.mozilla.org/pt).
- [Lapsbm](https://github.com/falabrasil/gitlab-resources): "Falabrasil - UFPA" is a dataset used by the Fala Brasil group to benchmark ASR systems in Brazilian Portuguese. Contains 35 speakers (10 females), each one pronouncing 20 unique sentences, totalling 700 utterances in Brazilian Portuguese. The audios were recorded in 22.05 kHz without environment control.
- [Multilingual Librispeech (MLS)](https://arxiv.org/abs/2012.03411): a massive dataset available in many languages. The MLS is based on audiobook recordings in public domain like [LibriVox](https://librivox.org/). The dataset contains a total of 6k hours of transcribed data in many languages. The set in Portuguese [used in this work](http://www.openslr.org/94/) (mostly Brazilian variant) has approximately 284 hours of speech, obtained from 55 audiobooks read by 62 speakers.
- [Multilingual TEDx](http://www.openslr.org/100): a collection of audio recordings from TEDx talks in 8 source languages. The Portuguese set (mostly Brazilian Portuguese variant) contains 164 hours of transcribed speech.
- [Sidney](https://igormq.github.io/datasets/) (SID): contains 5,777 utterances recorded by 72 speakers (20 women) from 17 to 59 years old with fields such as place of birth, age, gender, education, and occupation;
- [VoxForge](http://www.voxforge.org/): is a project with the goal to build open datasets for acoustic models. The corpus contains approximately 100 speakers and 4,130 utterances of Brazilian Portuguese, with sample rates varying from 16kHz to 44.1kHz.
These datasets were combined to build a larger Brazilian Portuguese dataset. All data was used for training except the Common Voice dev/test sets, which were used for validation and testing, respectively. We also made test sets for all the gathered datasets.
| Dataset | Train | Valid | Test |
|--------------------------------|-------:|------:|------:|
| CETUC | 93.9h | -- | 5.4h |
| Common Voice | 37.6h | 8.9h | 9.5h |
| LaPS BM | 0.8h | -- | 0.1h |
| MLS | 161.0h | -- | 3.7h |
| Multilingual TEDx (Portuguese) | 144.2h | -- | 1.8h |
| SID | 5.0h | -- | 1.0h |
| VoxForge | 2.8h | -- | 0.1h |
| Total | 437.2h | 8.9h | 21.6h |
The original model was fine-tuned using [fairseq](https://github.com/pytorch/fairseq). This notebook uses a converted version of the original one. The link to the original fairseq model is available [here](https://drive.google.com/drive/folders/1eRUExXRF2XK8JxUjIzbLBkLa5wuR3nig?usp=sharing).
#### Summary
| | CETUC | CV | LaPS | MLS | SID | TEDx | VF | AVG |
|----------------------|---------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| bp\_400 (demonstration below) | 0.052 | 0.140 | 0.074 | 0.117 | 0.121 | 0.245 | 0.118 | 0.124 |
| bp\_400 + 3-gram | 0.033 | 0.095 | 0.046 | 0.123 | 0.112 | 0.212 | 0.123 | 0.106 |
| bp\_400 + 4-gram (demonstration below) | **0.030** | 0.096 | 0.043 | **0.106** | 0.118 | 0.229 | **0.117** | **0.105** |
| bp\_400 + 5-gram | 0.033 | 0.094 | 0.043 | 0.123 | **0.111** | **0.210** | 0.123 | **0.105** |
| bp\_400 + Transf. | 0.032 | **0.092** | **0.036** | 0.130 | 0.115 | 0.215 | 0.125 | 0.106 |
#### Transcription examples
| Text | Transcription |
|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
|alguém sabe a que horas começa o jantar | alguém sabe a que horas **começo** jantar |
|lila covas ainda não sabe o que vai fazer no fundo|**lilacovas** ainda não sabe o que vai fazer no fundo|
|que tal um pouco desse bom spaghetti|**quetá** um pouco **deste** bom **ispaguete**|
|hong kong em cantonês significa porto perfumado|**rongkong** **en** **cantones** significa porto perfumado|
|vamos hackear esse problema|vamos **rackar** esse problema|
|apenas a poucos metros há uma estação de ônibus|apenas **ha** poucos metros **á** uma estação de ônibus|
|relâmpago e trovão sempre andam juntos|**relampagotrevão** sempre andam juntos|
## Demonstration
```python
MODEL_NAME = "lgris/bp400-xlsr"
```
### Imports and dependencies
```python
%%capture
!pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio===0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install datasets
!pip install jiwer
!pip install transformers
!pip install soundfile
!pip install pyctcdecode
!pip install https://github.com/kpu/kenlm/archive/master.zip
```
```python
import jiwer
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
Wav2Vec2ForCTC,
Wav2Vec2Processor,
)
from pyctcdecode import build_ctcdecoder
import torch
import re
import sys
```
### Helpers
```python
chars_to_ignore_regex = '[\,\?\.\!\;\:\"]' # noqa: W605
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = speech.squeeze(0).numpy()
batch["sampling_rate"] = 16_000
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
batch["target"] = batch["sentence"]
return batch
```
```python
def calc_metrics(truths, hypos):
wers = []
mers = []
wils = []
for t, h in zip(truths, hypos):
try:
wers.append(jiwer.wer(t, h))
mers.append(jiwer.mer(t, h))
wils.append(jiwer.wil(t, h))
except: # Empty string?
pass
wer = sum(wers)/len(wers)
mer = sum(mers)/len(mers)
wil = sum(wils)/len(wils)
return wer, mer, wil
```
```python
def load_data(dataset):
data_files = {'test': f'{dataset}/test.csv'}
dataset = load_dataset('csv', data_files=data_files)["test"]
return dataset.map(map_to_array)
```
### Model
```python
class STT:
def __init__(self,
model_name,
device='cuda' if torch.cuda.is_available() else 'cpu',
lm=None):
self.model_name = model_name
self.model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
self.processor = Wav2Vec2Processor.from_pretrained(model_name)
self.vocab_dict = self.processor.tokenizer.get_vocab()
self.sorted_dict = {
k.lower(): v for k, v in sorted(self.vocab_dict.items(),
key=lambda item: item[1])
}
self.device = device
self.lm = lm
if self.lm:
self.lm_decoder = build_ctcdecoder(
list(self.sorted_dict.keys()),
self.lm
)
def batch_predict(self, batch):
features = self.processor(batch["speech"],
sampling_rate=batch["sampling_rate"][0],
padding=True,
return_tensors="pt")
input_values = features.input_values.to(self.device)
attention_mask = features.attention_mask.to(self.device)
with torch.no_grad():
logits = self.model(input_values, attention_mask=attention_mask).logits
if self.lm:
logits = logits.cpu().numpy()
batch["predicted"] = []
for sample_logits in logits:
batch["predicted"].append(self.lm_decoder.decode(sample_logits))
else:
pred_ids = torch.argmax(logits, dim=-1)
batch["predicted"] = self.processor.batch_decode(pred_ids)
return batch
```
### Download datasets
```python
%%capture
!gdown --id 1HFECzIizf-bmkQRLiQD0QVqcGtOG5upI
!mkdir bp_dataset
!unzip bp_dataset -d bp_dataset/
```
### Tests
```python
stt = STT(MODEL_NAME)
```
#### CETUC
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.05159104708285062
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.14031426198658084
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.07432133838383838
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.11678793514817509
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.12152357273433984
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.24666815906766504
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11873106060606062
### Tests with LM
```python
!rm -rf ~/.cache
!gdown --id 1GJIKseP5ZkTbllQVgOL98R4yYAcIySFP # trained with wikipedia
stt = STT(MODEL_NAME, lm='pt-BR-wiki.word.4-gram.arpa')
# !gdown --id 1dLFldy7eguPtyJj5OAlI4Emnx0BpFywg # trained with bp
# stt = STT(MODEL_NAME, lm='pt-BR.word.4-gram.arpa')
```
### Cetuc
```python
ds = load_data('cetuc_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CETUC WER:", wer)
```
CETUC WER: 0.030266462438593742
#### Common Voice
```python
ds = load_data('commonvoice_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("CV WER:", wer)
```
CV WER: 0.09577710237417715
#### LaPS
```python
ds = load_data('lapsbm_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Laps WER:", wer)
```
Laps WER: 0.043617424242424235
#### MLS
```python
ds = load_data('mls_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("MLS WER:", wer)
```
MLS WER: 0.10642133314350002
#### SID
```python
ds = load_data('sid_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("Sid WER:", wer)
```
Sid WER: 0.11839021001747055
#### TEDx
```python
ds = load_data('tedx_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("TEDx WER:", wer)
```
TEDx WER: 0.22929952467810416
#### VoxForge
```python
ds = load_data('voxforge_dataset')
result = ds.map(stt.batch_predict, batched=True, batch_size=8)
wer, mer, wil = calc_metrics(result["sentence"], result["predicted"])
print("VoxForge WER:", wer)
```
VoxForge WER: 0.11716314935064935
|
birgermoell/psst-libri960_big
|
birgermoell
| 2022-04-01T20:17:17Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-04-01T19:05:31Z
|
pssteval INFO: ASR metrics for split `valid` FER: 9.8% PER: 20.9%
|
FrankCorrigan/test-model
|
FrankCorrigan
| 2022-04-01T17:54:00Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-04-01T01:46:45Z
|
---
license: apache-2.0
---
|
vicl/canine-c-finetuned-cola
|
vicl
| 2022-04-01T17:38:35Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"canine",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-01T17:13:12Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: canine-c-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0990441507705203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# canine-c-finetuned-cola
This model is a fine-tuned version of [google/canine-c](https://huggingface.co/google/canine-c) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6246
- Matthews Correlation: 0.0990
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.6142 | 1.0 | 535 | 0.6268 | 0.0 |
| 0.607 | 2.0 | 1070 | 0.6234 | 0.0 |
| 0.6104 | 3.0 | 1605 | 0.6226 | 0.0 |
| 0.5725 | 4.0 | 2140 | 0.6246 | 0.0990 |
| 0.5426 | 5.0 | 2675 | 0.6866 | 0.0495 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
McGill-NLP/bart-qg-nq-checkpoint
|
McGill-NLP
| 2022-04-01T17:35:04Z
| 26
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.13461",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-01T16:32:49Z
|
---
license: cc-by-4.0
---
# BART-base fine-tuned on NaturalQuestions for **Question Generation**
[BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation** by treating long answer as input, and question as output.
## Details of BART
The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract:
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
## Details of the downstream task (QG) - Dataset 📚 🧐
Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/)
| Dataset | Split | # samples |
| -------- | ----- | --------- |
| NaturalQuestions | train | 97650 |
| NaturalQuestions | valid | 10850 |
## Model fine-tuning 🏋️
The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py)
## Model in Action 🚀
```python
from transformers import AutoModelForSeq2SeqLM, BartTokenizer
# Load the tokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
# Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")
```
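A hedged generation sketch continuing the snippet above (the passage and the beam-search settings are illustrative):
```python
# Generate a question from a passage (long-answer style input).
passage = ("Back-training is an alternative to self-training for unsupervised "
           "domain adaptation of question generation models.")
inputs = tokenizer([passage], return_tensors="pt", truncation=True, max_length=1024)
question_ids = model.generate(**inputs, num_beams=4, max_length=64, early_stopping=True)
print(tokenizer.decode(question_ids[0], skip_special_tokens=True))
```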
## Citation
If you want to cite this model you can use this:
```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
author = "Kulshreshtha, Devang and
Belfer, Robert and
Serban, Iulian Vlad and
Reddy, Siva",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.566",
pages = "7064--7078",
abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
ahmedzaky91/Fatima-Fake_news_calssifier
|
ahmedzaky91
| 2022-04-01T16:54:24Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-04-01T00:00:39Z
|
## This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the Fake and Real news dataset from Kaggle
## The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- num_epochs: 2
|
avialfont/ner-dummy-model
|
avialfont
| 2022-04-01T14:59:22Z
| 5
| 0
|
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-01T10:59:27Z
|
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: ner-dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ner-dummy-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
somosnlp-hackathon-2022/es_tweets_laboral
|
somosnlp-hackathon-2022
| 2022-04-01T14:50:40Z
| 1
| 1
|
spacy
|
[
"spacy",
"text-classification",
"es",
"region:us"
] |
text-classification
| 2022-04-01T13:48:09Z
|
---
tags:
- spacy
- text-classification
language: es
widget:
- text: "todos merecemos un salario justo"
---
## es_tweets_laboral ##
Modelo creado por @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
|
notexist/ttt
|
notexist
| 2022-04-01T13:16:50Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-01T12:45:30Z
|
---
license: apache-2.0
---
|
bmichele/poetry-generation-firstline-mbart-ws-fi-sorted
|
bmichele
| 2022-04-01T13:03:49Z
| 0
| 0
| null |
[
"pytorch",
"region:us"
] | null | 2022-04-01T12:58:00Z
|
TODO: This is still a demo model, the file does not match with the model card!!!
# poetry-generation-firstline-mbart-ws-fi-sorted
* `firstline`: generates the first poem line from keywords
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `ws`: trained on Wikisource data
* `fi`: Finnish language
* `sorted`: the order of input keywords matter when generating candidates
|
bharatR/up_down
|
bharatR
| 2022-04-01T12:38:05Z
| 0
| 0
| null |
[
"classification",
"en",
"dataset:cifar10-custom",
"region:us"
] | null | 2022-04-01T12:19:00Z
|
---
language: en
tags:
- classification
datasets:
- cifar10-custom
metrics:
- accuracy
---
# Up-Down Classification
This repo has the weights of a ResNet-18 model trained on custom CIFAR-10 data, where some images are turned upside down; the goal is to predict the orientation of each image (a 0/1 classification task).
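A hedged sketch of this setup (an illustrative reconstruction, not the original training code; the labelling convention 0 = upright, 1 = upside down is an assumption):
```python
# ResNet-18 with a 2-way head on CIFAR-10 images, half of them flipped upside down.
import random
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

class UpsideDownCIFAR(torch.utils.data.Dataset):
    def __init__(self, train=True):
        self.base = torchvision.datasets.CIFAR10(
            "data", train=train, download=True, transform=transforms.ToTensor())

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, _ = self.base[idx]              # original class label is discarded
        flipped = random.random() < 0.5
        if flipped:
            img = torch.flip(img, dims=[1])  # flip along the height axis
        return img, int(flipped)             # 0 = upright, 1 = upside down

model = torchvision.models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # binary orientation head
```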
|
bmichele/poetry-generation-nextline-mbart-ws-fi-single
|
bmichele
| 2022-04-01T11:51:32Z
| 0
| 0
| null |
[
"pytorch",
"region:us"
] | null | 2022-04-01T11:35:07Z
|
# poetry-generation-nextline-mbart-ws-fi-single
* `nextline`: generates a poem line from previous line(s)
* `mbart`: base model is [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
* `ws`: trained on Wikisource data
* `fi`: Finnish language
* `single`: uses only last poem line as input for generation
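A hedged generation sketch (it assumes a standard Hugging Face MBart export; the card does not document the checkpoint format, so the base model is loaded here and the fine-tuned weights would be swapped in):
```python
# Illustrative only: next-line generation with an MBart seq2seq model.
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained(
    "facebook/mbart-large-cc25", src_lang="fi_FI", tgt_lang="fi_FI")
model = MBartForConditionalGeneration.from_pretrained(
    "facebook/mbart-large-cc25")  # swap in the fine-tuned weights here

prev_line = "Kuu nousee hiljaa metsän yllä"  # hypothetical previous poem line
inputs = tokenizer(prev_line, return_tensors="pt")
out = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["fi_FI"],
    num_beams=4,
    max_length=32,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```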
|
Basedino/GPT-RO
|
Basedino
| 2022-04-01T07:47:41Z
| 0
| 0
| null |
[
"license:gpl-3.0",
"region:us"
] | null | 2022-03-31T08:19:30Z
|
---
license: gpl-3.0
---
So I made this model because I had nothing to do. It's GPT-2 124M fine-tuned on a bunch of Italian recipes.
I made it using aitextgen, so you can use that to play with the model easily.
|
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-10
|
yy642
| 2022-04-01T06:04:00Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T23:51:06Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-mnli-rte-wnli-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mnli-rte-wnli-10
This model is a fine-tuned version of [yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5](https://huggingface.co/yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5876
- Accuracy: 0.9206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0641 | 1.0 | 16558 | 0.4528 | 0.9138 |
| 0.0479 | 2.0 | 33116 | 0.5116 | 0.9153 |
| 0.0363 | 3.0 | 49674 | 0.5660 | 0.9138 |
| 0.0244 | 4.0 | 66232 | 0.5876 | 0.9206 |
| 0.0145 | 5.0 | 82790 | 0.6156 | 0.9192 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Yaxin/xlm-roberta-base-amazon-en-es-fr-mlm
|
Yaxin
| 2022-04-01T05:28:33Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"dataset:Yaxin/amazon_reviews_multi",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-31T14:56:00Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- Yaxin/amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-amazon-en-es-fr-mlm
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: Yaxin/amazon_reviews_multi
type: Yaxin/amazon_reviews_multi
metrics:
- name: Accuracy
type: accuracy
value: 0.6951035447140035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-amazon-en-es-fr-mlm
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the Yaxin/amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3936
- Accuracy: 0.6951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
emre/distilgpt2-pretrained-tr-10e
|
emre
| 2022-03-31T22:10:43Z
| 4
| 0
|
transformers
|
[
"transformers",
"jax",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-31T21:59:43Z
|
---
license: apache-2.0
---
|
arjundd/dosma-models
|
arjundd
| 2022-03-31T21:39:54Z
| 0
| 0
| null |
[
"mri",
"knee",
"segmentation",
"en",
"region:us"
] | null | 2022-03-31T18:30:03Z
|
---
language: en
tags:
- mri
- knee
- segmentation
---
# DOSMA models
These models are those that are made publicly available in the [DOSMA](https://github.com/ad12/DOSMA).
More information on these models can be found in the [documentation](https://dosma.readthedocs.io/en/latest/models.html).
## Citation
If you use any models, please cite any reference for the model in addition to the DOSMA reference below:
```
@inproceedings{desai2019dosma,
title={DOSMA: A deep-learning, open-source framework for musculoskeletal MRI analysis},
author={Desai, Arjun D and Barbieri, Marco and Mazzoli, Valentina and Rubin, Elka and Black, Marianne S and Watkins, Lauren E and Gold, Garry E and Hargreaves, Brian A and Chaudhari, Akshay S},
booktitle={Proc 27th Annual Meeting ISMRM, Montreal},
pages={1135},
year={2019}
}
```
|
abdusah/aradia-ctc-hubert-ft
|
abdusah
| 2022-03-31T20:56:27Z
| 14
| 0
|
transformers
|
[
"transformers",
"pytorch",
"hubert",
"automatic-speech-recognition",
"abdusahmbzuai/arabic_speech_massive_300hrs",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T08:14:31Z
|
---
tags:
- automatic-speech-recognition
- abdusahmbzuai/arabic_speech_massive_300hrs
- generated_from_trainer
model-index:
- name: aradia-ctc-hubert-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aradia-ctc-hubert-ft
This model is a fine-tuned version of [/l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft](https://huggingface.co//l/users/abdulwahab.sahyoun/aradia/aradia-ctc-hubert-ft) on the ABDUSAHMBZUAI/ARABIC_SPEECH_MASSIVE_300HRS - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8536
- Wer: 0.3737
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.43 | 100 | 3.6934 | 1.0 |
| No log | 0.87 | 200 | 3.0763 | 1.0 |
| No log | 1.3 | 300 | 2.9737 | 1.0 |
| No log | 1.74 | 400 | 2.5734 | 1.0 |
| 5.0957 | 2.17 | 500 | 1.1900 | 0.9011 |
| 5.0957 | 2.61 | 600 | 0.9726 | 0.7572 |
| 5.0957 | 3.04 | 700 | 0.8960 | 0.6209 |
| 5.0957 | 3.48 | 800 | 0.7851 | 0.5515 |
| 5.0957 | 3.91 | 900 | 0.7271 | 0.5115 |
| 1.0312 | 4.35 | 1000 | 0.7053 | 0.4955 |
| 1.0312 | 4.78 | 1100 | 0.6823 | 0.4737 |
| 1.0312 | 5.22 | 1200 | 0.6768 | 0.4595 |
| 1.0312 | 5.65 | 1300 | 0.6635 | 0.4488 |
| 1.0312 | 6.09 | 1400 | 0.6602 | 0.4390 |
| 0.6815 | 6.52 | 1500 | 0.6464 | 0.4310 |
| 0.6815 | 6.95 | 1600 | 0.6455 | 0.4394 |
| 0.6815 | 7.39 | 1700 | 0.6630 | 0.4312 |
| 0.6815 | 7.82 | 1800 | 0.6521 | 0.4126 |
| 0.6815 | 8.26 | 1900 | 0.6282 | 0.4284 |
| 0.544 | 8.69 | 2000 | 0.6248 | 0.4178 |
| 0.544 | 9.13 | 2100 | 0.6510 | 0.4104 |
| 0.544 | 9.56 | 2200 | 0.6527 | 0.4013 |
| 0.544 | 10.0 | 2300 | 0.6511 | 0.4064 |
| 0.544 | 10.43 | 2400 | 0.6734 | 0.4061 |
| 0.4478 | 10.87 | 2500 | 0.6756 | 0.4145 |
| 0.4478 | 11.3 | 2600 | 0.6727 | 0.3990 |
| 0.4478 | 11.74 | 2700 | 0.6619 | 0.4007 |
| 0.4478 | 12.17 | 2800 | 0.6614 | 0.4019 |
| 0.4478 | 12.61 | 2900 | 0.6695 | 0.4004 |
| 0.3919 | 13.04 | 3000 | 0.6778 | 0.3966 |
| 0.3919 | 13.48 | 3100 | 0.6872 | 0.3971 |
| 0.3919 | 13.91 | 3200 | 0.6882 | 0.3945 |
| 0.3919 | 14.35 | 3300 | 0.7177 | 0.4010 |
| 0.3919 | 14.78 | 3400 | 0.6888 | 0.4043 |
| 0.3767 | 15.22 | 3500 | 0.7124 | 0.4202 |
| 0.3767 | 15.65 | 3600 | 0.7276 | 0.4120 |
| 0.3767 | 16.09 | 3700 | 0.7265 | 0.4034 |
| 0.3767 | 16.52 | 3800 | 0.7392 | 0.4077 |
| 0.3767 | 16.95 | 3900 | 0.7403 | 0.3965 |
| 0.3603 | 17.39 | 4000 | 0.7445 | 0.4016 |
| 0.3603 | 17.82 | 4100 | 0.7579 | 0.4012 |
| 0.3603 | 18.26 | 4200 | 0.7225 | 0.3963 |
| 0.3603 | 18.69 | 4300 | 0.7355 | 0.3951 |
| 0.3603 | 19.13 | 4400 | 0.7482 | 0.3925 |
| 0.3153 | 19.56 | 4500 | 0.7723 | 0.3972 |
| 0.3153 | 20.0 | 4600 | 0.7469 | 0.3898 |
| 0.3153 | 20.43 | 4700 | 0.7800 | 0.3944 |
| 0.3153 | 20.87 | 4800 | 0.7827 | 0.3897 |
| 0.3153 | 21.3 | 4900 | 0.7935 | 0.3914 |
| 0.286 | 21.74 | 5000 | 0.7984 | 0.3750 |
| 0.286 | 22.17 | 5100 | 0.7945 | 0.3830 |
| 0.286 | 22.61 | 5200 | 0.8011 | 0.3775 |
| 0.286 | 23.04 | 5300 | 0.7978 | 0.3824 |
| 0.286 | 23.48 | 5400 | 0.8161 | 0.3833 |
| 0.2615 | 23.91 | 5500 | 0.7823 | 0.3858 |
| 0.2615 | 24.35 | 5600 | 0.8312 | 0.3863 |
| 0.2615 | 24.78 | 5700 | 0.8427 | 0.3819 |
| 0.2615 | 25.22 | 5800 | 0.8432 | 0.3802 |
| 0.2615 | 25.65 | 5900 | 0.8286 | 0.3794 |
| 0.2408 | 26.09 | 6000 | 0.8224 | 0.3824 |
| 0.2408 | 26.52 | 6100 | 0.8228 | 0.3823 |
| 0.2408 | 26.95 | 6200 | 0.8324 | 0.3795 |
| 0.2408 | 27.39 | 6300 | 0.8564 | 0.3744 |
| 0.2408 | 27.82 | 6400 | 0.8629 | 0.3774 |
| 0.2254 | 28.26 | 6500 | 0.8545 | 0.3778 |
| 0.2254 | 28.69 | 6600 | 0.8492 | 0.3767 |
| 0.2254 | 29.13 | 6700 | 0.8511 | 0.3751 |
| 0.2254 | 29.56 | 6800 | 0.8491 | 0.3753 |
| 0.2254 | 30.0 | 6900 | 0.8536 | 0.3737 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ghees/FatimeFellowship
|
ghees
| 2022-03-31T20:47:24Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-03-31T20:45:21Z
|
Preprocessing before feeding text to the model:
```
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('paraphrase-MiniLM-L6-v2', device='cuda')
...
embeddings = model.encode([text])
return embeddings[0]
```
|
osanseviero/test_model_bertmesh
|
osanseviero
| 2022-03-31T20:35:05Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"custom_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-31T19:47:46Z
|
---
license: apache-2.0
---
# WellcomeBertMesh
WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([Mesh](https://www.nlm.nih.gov/mesh/meshhome.html)). Even though it was developed with research grants in mind, it should be applicable to any type of biomedical text close to the domain it was trained on, which is abstracts from biomedical publications.
# Model description
The model is inspired from [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/) which is trained on the full text of biomedical publications and uses BioBert as its pretrained model.
WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBert](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which essentially allows the model to pay attention to different tokens per label when deciding whether that label applies.
We train the model using data from the [BioASQ](http://bioasq.org) competition which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing which gives us ~2.5M publications to train and 220K to test. This is out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs.
The model achieves 63% micro f1 with a 0.5 threshold for all labels.
The code for developing the model is open source and can be found in https://github.com/wellcometrust/grants_tagger
# How to use
⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models.
You can use the model straight from the hub but because it contains a custom forward function due to the multilabel attention head you have to pass `trust_remote_code=True`. You can get access to the probabilities for all labels by omitting `return_labels=True`.
```
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Wellcome/WellcomeBertMesh"
)
model = AutoModel.from_pretrained(
"Wellcome/WellcomeBertMesh",
trust_remote_code=True
)
text = "This grant is about malaria and not about HIV."
inputs = tokenizer([text], padding="max_length")
labels = model(**inputs, return_labels=True)
print(labels)
```
You can inspect the model code if you navigate to the files and see `model.py`.
|
WENGSYX/Deberta-Chinese-Large
|
WENGSYX
| 2022-03-31T20:08:59Z
| 56
| 16
|
transformers
|
[
"transformers",
"pytorch",
"deberta",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
# Deberta-Chinese
This project pretrains Microsoft's open-source DeBERTa model on Chinese text. We open-source this model to give others more choices of pretrained language models.
The model is pretrained on the WuDaoCorpora corpus. WuDaoCorpora is a large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support research on the "WuDao" large-model project.
Pretraining uses whole-word masking (WWM), n-gram MLM, and related objectives.
| Pretrained model | Learning rate | Batch size | Hardware | Corpus | Time | Optimizer |
| --------------------- | ------ | --------- | ------ | ------ | ---- | ------ |
| Deberta-Chinese-Large | 1e-5 | 512 | 2*3090 | 200G | 14 days | AdamW |
### Loading and usage
Built on huggingface-transformers:
```
from transformers import AutoModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")
```
#### Note: please use BertTokenizer to load the Chinese vocabulary
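A hedged feature-extraction sketch continuing the snippet above (the example sentence is illustrative; the bare encoder returns hidden states rather than task-specific predictions):
```
import torch

inputs = tokenizer("今天天气很好", return_tensors="pt")  # "The weather is nice today"
with torch.no_grad():
    outputs = model(**inputs)
sentence_vector = outputs.last_hidden_state[:, 0]  # first-token hidden state, shape (1, hidden_size)
print(sentence_vector.shape)
```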
|
rkoushikroy2/upside_down_resnet50
|
rkoushikroy2
| 2022-03-31T18:18:13Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-31T17:58:51Z
|
---
license: apache-2.0
---
|
israfelsr/UpsideDownClassifier
|
israfelsr
| 2022-03-31T17:06:27Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-03-31T15:41:33Z
|
# UpsideDownClassifier
This classifier was trained using the [auto-cats-and-dogs](https://huggingface.co/datasets/nateraw/auto-cats-and-dogs) dataset. It was trained over 5 epochs using a pretrained resnet18.
The configuration for the model was:
```
config = {
    "batch_size": 64,
    "num_epochs": 5,
    "lr": 0.005,        # overridden by the duplicate "lr" key below
    "betas": (0.9, 0.999),
    "eps": 1e-6,
    "lr": 8e-3,         # Python keeps the last value, so the effective lr is 8e-3
    "do_eval": True
}
```
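As a hedged illustration of how this configuration might be consumed (the model, loss, and dummy data below are placeholders, not the original training script):
```
# Wire the config above into an optimizer and a minimal training loop.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)  # upright vs. upside down

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=config["lr"],        # effective value is 8e-3 due to the duplicated key
    betas=config["betas"],
    eps=config["eps"],
)
criterion = nn.CrossEntropyLoss()

# Dummy data so the sketch runs end-to-end; replace with the real flipped dataset.
dummy = torch.utils.data.TensorDataset(
    torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
train_loader = torch.utils.data.DataLoader(dummy, batch_size=config["batch_size"])

for epoch in range(config["num_epochs"]):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```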
## Training Plots
The figures below show the training plots for accuracy and loss on both the training and validation sets.
### Accuracy Plot

### Loss Plot

## Some Results
Evaluating on the Test Set, we obtain:
- Accuracy = 0.9696
A batch with some misclassifications can be seen in the picture below.

|
rahulacj/bertweet-base-finetuned-sentiment-analysis
|
rahulacj
| 2022-03-31T16:21:16Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T09:42:31Z
|
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-finetuned-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-sentiment-analysis
This model is a fine-tuned version of [cardiffnlp/bertweet-base-sentiment](https://huggingface.co/cardiffnlp/bertweet-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8458
- Accuracy: 0.6426
- F1: 0.6397
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8904 | 1.0 | 630 | 0.8509 | 0.6381 | 0.6340 |
| 0.7655 | 2.0 | 1260 | 0.8345 | 0.6579 | 0.6559 |
| 0.66 | 3.0 | 1890 | 0.9199 | 0.6548 | 0.6514 |
| 0.447 | 4.0 | 2520 | 1.0324 | 0.6429 | 0.6417 |
| 0.3585 | 5.0 | 3150 | 1.1234 | 0.6452 | 0.6424 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
eren23/pneumonia-bielefeld-dl-course
|
eren23
| 2022-03-31T15:55:27Z
| 61
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-27T12:17:21Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: pneumonia-bielefeld-dl-course
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8456632494926453
---
# pneumonia-bielefeld-dl-course
This repository contains a model for making pneumonia predictions. It was prepared as homework for the
Bielefeld University Deep Learning course.
The code used for this implementation mostly comes from https://github.com/nateraw/huggingpics, a ready-made pipeline for fine-tuning models with Hugging Face and PyTorch Lightning, originally built for a different dataset.
|
Nonem100/Test-Model
|
Nonem100
| 2022-03-31T15:19:38Z
| 62
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-31T15:19:30Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Test-Model
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9017857313156128
---
# Test-Model
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cotton candy

#### hamburger

#### hot dog

#### nachos

#### popcorn

|
Edresson/wav2vec2-large-xlsr-coraa-portuguese
|
Edresson
| 2022-03-31T13:28:43Z
| 632
| 15
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"pt",
"portuguese-speech-corpus",
"hf-asr-leaderboard",
"PyTorch",
"dataset:CORAA",
"arxiv:2110.15731",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z
|
---
language: pt
datasets:
- CORAA
metrics:
- wer
tags:
- audio
- speech
- wav2vec2
- pt
- portuguese-speech-corpus
- automatic-speech-recognition
- hf-asr-leaderboard
- speech
- PyTorch
license: apache-2.0
model-index:
- name: Edresson Casanova XLSR Wav2Vec2 Large 53 Portuguese
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: CORAA
type: CORAA
args: pt
metrics:
- name: Test CORAA WER
type: wer
value: 25.26
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7
type: mozilla-foundation/common_voice_7_0
args: pt
metrics:
- name: Test WER on Common Voice 7
type: wer
value: 20.08
---
# Wav2vec 2.0 trained with CORAA Portuguese Dataset
This is a demonstration of a Wav2vec 2.0 model for Portuguese, fine-tuned on the [CORAA dataset](https://github.com/nilc-nlp/CORAA).
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-xlsr-coraa-portuguese")
```
# Results
For the results check the [CORAA article](https://arxiv.org/abs/2110.15731)
# Example test with Common Voice Dataset
```python
import re
import torchaudio
from datasets import load_dataset

# Characters to strip from the reference transcripts (the exact regex may need adjusting).
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"]'
dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-6.1-2020-12-11")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
```
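The snippet below references `map_to_pred` and `wer`, which are not defined in this card. A minimal sketch of what they could look like, assuming greedy CTC decoding and the `datasets` WER metric (the exact helpers used for the published results may differ):
```python
import torch
from datasets import load_metric

wer = load_metric("wer")

def map_to_pred(batch):
    # Greedy CTC decoding with the model and tokenizer loaded above.
    inputs = torch.tensor(batch["speech"], dtype=torch.float32)
    with torch.no_grad():
        logits = model(inputs).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = tokenizer.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```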
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
|
Visual-Attention-Network/van-small
|
Visual-Attention-Network
| 2022-03-31T12:45:49Z
| 90
| 1
|
transformers
|
[
"transformers",
"pytorch",
"van",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2202.09741",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-16T15:05:40Z
|
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Van
Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification).
Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, VanForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-small")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-small")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
|
Visual-Attention-Network/van-tiny
|
Visual-Attention-Network
| 2022-03-31T12:45:47Z
| 173
| 2
|
transformers
|
[
"transformers",
"pytorch",
"van",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2202.09741",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-16T15:05:02Z
|
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Van
Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification).
Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, VanForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-tiny")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-tiny")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
|
Visual-Attention-Network/van-large
|
Visual-Attention-Network
| 2022-03-31T12:45:46Z
| 122
| 1
|
transformers
|
[
"transformers",
"pytorch",
"van",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2202.09741",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-09T18:03:37Z
|
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Van
Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification).
Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, VanForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-large")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-large")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
|
Visual-Attention-Network/van-base
|
Visual-Attention-Network
| 2022-03-31T12:45:44Z
| 185
| 1
|
transformers
|
[
"transformers",
"pytorch",
"van",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2202.09741",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-16T15:06:37Z
|
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Van
Van model trained on imagenet-1k. It was introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released in [this repository](https://github.com/Visual-Attention-Network/VAN-Classification).
Disclaimer: The team releasing Van did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
This paper introduces a new attention layer based on convolution operations able to capture both local and distant relationships. This is done by combining normal and large kernel convolution layers. The latter uses a dilated convolution to capture distant correlations.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=van) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, VanForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("Visual-Attention-Network/van-base")
>>> model = VanForImageClassification.from_pretrained("Visual-Attention-Network/van-base")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
tabby, tabby cat
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/van).
|
Khalsuu/2nd-wav2vec2-l-xls-r-300m-turkish-test
|
Khalsuu
| 2022-03-31T12:09:32Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T08:45:25Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: 2nd-wav2vec2-l-xls-r-300m-turkish-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2nd-wav2vec2-l-xls-r-300m-turkish-test
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6019
- Wer: 0.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0522 | 3.67 | 400 | 0.7773 | 0.7296 |
| 0.5369 | 7.34 | 800 | 0.6282 | 0.5888 |
| 0.276 | 11.01 | 1200 | 0.5998 | 0.5330 |
| 0.1725 | 14.68 | 1600 | 0.5859 | 0.4908 |
| 0.1177 | 18.35 | 2000 | 0.6019 | 0.4444 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
nikhil6041/wav2vec2-commonvoice-tamil
|
nikhil6041
| 2022-03-31T09:24:01Z
| 18
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:mit",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-31T04:00:23Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-commonvoice-tamil
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-commonvoice-tamil
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-tamil-tam-250](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-tamil-tam-250) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3415
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.384 | 1.69 | 200 | 3.3400 | 1.0 |
| 3.3085 | 3.39 | 400 | 3.3609 | 1.0 |
| 3.3008 | 5.08 | 600 | 3.3331 | 1.0 |
| 3.2852 | 6.78 | 800 | 3.3492 | 1.0 |
| 3.2908 | 8.47 | 1000 | 3.3318 | 1.0 |
| 3.2865 | 10.17 | 1200 | 3.3501 | 1.0 |
| 3.2826 | 11.86 | 1400 | 3.3403 | 1.0 |
| 3.2875 | 13.56 | 1600 | 3.3335 | 1.0 |
| 3.2899 | 15.25 | 1800 | 3.3311 | 1.0 |
| 3.2755 | 16.95 | 2000 | 3.3617 | 1.0 |
| 3.2877 | 18.64 | 2200 | 3.3317 | 1.0 |
| 3.2854 | 20.34 | 2400 | 3.3560 | 1.0 |
| 3.2878 | 22.03 | 2600 | 3.3332 | 1.0 |
| 3.2766 | 23.73 | 2800 | 3.3317 | 1.0 |
| 3.2943 | 25.42 | 3000 | 3.3737 | 1.0 |
| 3.2845 | 27.12 | 3200 | 3.3347 | 1.0 |
| 3.2765 | 28.81 | 3400 | 3.3415 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
snehatyagi/wav2vec2_test
|
snehatyagi
| 2022-03-31T07:21:45Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-26T09:11:57Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_test
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 91.1661
- Wer: 0.5714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 11.9459 | 100.0 | 100 | 46.9901 | 1.0 |
| 3.2175 | 200.0 | 200 | 73.0950 | 1.0 |
| 1.8117 | 300.0 | 300 | 78.4884 | 0.6735 |
| 1.3694 | 400.0 | 400 | 84.0168 | 0.6327 |
| 1.1392 | 500.0 | 500 | 85.2083 | 0.5918 |
| 0.979 | 600.0 | 600 | 88.9109 | 0.5918 |
| 0.8917 | 700.0 | 700 | 89.0310 | 0.5918 |
| 0.8265 | 800.0 | 800 | 90.0659 | 0.6122 |
| 0.769 | 900.0 | 900 | 91.8476 | 0.5714 |
| 0.7389 | 1000.0 | 1000 | 91.1661 | 0.5714 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.6
|
davidmasip/racism
|
davidmasip
| 2022-03-31T06:56:46Z
| 26
| 0
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"es",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-16T18:23:46Z
|
---
license: cc
language: es
widget:
- text: "Me cae muy bien."
example_title: "Non-racist example"
- text: "Unos menas agreden a una mujer."
example_title: "Racist example"
---
Model to predict whether a given text is racist or not:
* `LABEL_0` output indicates non-racist text
* `LABEL_1` output indicates racist text
Usage:
```python
from transformers import pipeline
RACISM_MODEL = "davidmasip/racism"
racism_analysis_pipe = pipeline("text-classification",
                                model=RACISM_MODEL, tokenizer=RACISM_MODEL)
results = racism_analysis_pipe("Unos menas agreden a una mujer.")

def clean_labels(results):
    for result in results:
        label = "Non-racist" if result["label"] == "LABEL_0" else "Racist"
        result["label"] = label

clean_labels(results)
print(results)
```
|
unjustify/autotrain-IWant-689220804
|
unjustify
| 2022-03-31T06:46:48Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"unk",
"dataset:unjustify/autotrain-data-IWant",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-31T06:09:55Z
|
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- unjustify/autotrain-data-IWant
co2_eq_emissions: 39.40549299946679
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 689220804
- CO2 Emissions (in grams): 39.40549299946679
## Validation Metrics
- Loss: 2.0426149368286133
- Rouge1: 54.9813
- Rouge2: 44.923
- RougeL: 54.0399
- RougeLsum: 54.2553
- Gen Len: 16.6211
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-IWant-689220804
```
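The model can also be called through the Python API; a sketch assuming the standard Transformers seq2seq interface for this text2text (summarization) model:
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("unjustify/autotrain-IWant-689220804", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("unjustify/autotrain-IWant-689220804", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```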
|
unjustify/autotrain-commonsence-689620825
|
unjustify
| 2022-03-31T06:38:08Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:unjustify/autotrain-data-commonsence",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-31T06:18:51Z
|
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- unjustify/autotrain-data-commonsence
co2_eq_emissions: 20.656741915705204
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 689620825
- CO2 Emissions (in grams): 20.656741915705204
## Validation Metrics
- Loss: 0.7315372824668884
- Accuracy: 0.6354949675117849
- Precision: 0.63792194092827
- Recall: 0.6191451241361658
- AUC: 0.6912165223485615
- F1: 0.6283932978308872
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/unjustify/autotrain-commonsence-689620825
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("unjustify/autotrain-commonsence-689620825", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
ai4bharat/MultiIndicParaphraseGeneration
|
ai4bharat
| 2022-03-31T06:21:30Z
| 19
| 1
|
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"paraphrase-generation",
"multilingual",
"nlp",
"indicnlp",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"dataset:ai4bharat/IndicParaphrase",
"arxiv:2203.05437",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-16T17:37:59Z
|
---
tags:
- paraphrase-generation
- multilingual
- nlp
- indicnlp
datasets:
- ai4bharat/IndicParaphrase
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- mit
---
# MultiIndicParaphraseGeneration
This repository contains the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint finetuned on the 11 languages of [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) dataset. For finetuning details,
see the [paper](https://arxiv.org/abs/2203.05437).
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odia, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for decoding. </li>
<li> Trained on large Indic language corpora (5.53 million sentences). </li>
<li> All languages have been represented in Devanagari script to encourage transfer learning among the related languages. </li>
</ul>
## Using this model in `transformers`
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("दिल्ली यूनिवर्सिटी देश की प्रसिद्ध यूनिवर्सिटी में से एक है. </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model_output=model.generate(inp, use_cache=True,no_repeat_ngram_size=3,encoder_no_repeat_ngram_size=3, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2hi>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # दिल्ली विश्वविद्यालय देश की प्रमुख विश्वविद्यालयों में शामिल है।
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.
```
# Note:
If you wish to use any language written in a non-Devanagari script, then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
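A minimal sketch of that script conversion with the Indic NLP Library (assumes `pip install indic-nlp-library`; the Tamil example string and language codes are illustrative):
```
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

# Convert Tamil text to Devanagari before feeding it to the model ...
devanagari_input = UnicodeIndicTransliterator.transliterate("தமிழ் உரை", "ta", "hi")
# ... and convert the generated Devanagari output back to Tamil script.
tamil_output = UnicodeIndicTransliterator.transliterate(devanagari_input, "hi", "ta")
print(tamil_output)
```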
## Benchmarks
Scores on the `IndicParaphrase` test sets are as follows:
Language | BLEU / Self-BLEU / iBLEU
---------|----------------------------
as | 1.66 / 2.06 / 0.54
bn | 11.57 / 1.69 / 7.59
gu | 22.10 / 2.76 / 14.64
hi | 27.29 / 2.87 / 18.24
kn | 15.40 / 2.98 / 9.89
ml | 10.57 / 1.70 / 6.89
mr | 20.38 / 2.20 / 13.61
or | 19.26 / 2.10 / 12.85
pa | 14.87 / 1.35 / 10.00
ta | 18.52 / 2.88 / 12.10
te | 16.70 / 3.34 / 10.69
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437"
}
```
|
yy642/bert-base-uncased-finetuned-mnli-rte-wnli-5
|
yy642
| 2022-03-31T02:22:21Z
| 15
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T20:09:38Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-mnli-rte-wnli-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mnli-rte-wnli-5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4400
- Accuracy: 0.9209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2253 | 1.0 | 16558 | 0.2346 | 0.9139 |
| 0.1667 | 2.0 | 33116 | 0.2973 | 0.9143 |
| 0.1207 | 3.0 | 49674 | 0.3361 | 0.9203 |
| 0.0553 | 4.0 | 66232 | 0.4400 | 0.9209 |
| 0.033 | 5.0 | 82790 | 0.5175 | 0.9203 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lazyturtl/roomclassifier
|
lazyturtl
| 2022-03-31T01:09:57Z
| 2,692
| 16
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-31T01:09:48Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomclassifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9402984976768494
---
# roomclassifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
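To try the classifier locally, a minimal sketch with the Transformers image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="lazyturtl/roomclassifier")
print(classifier("path/to/room_photo.jpg"))  # top room-type labels with scores
```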
## Example Images
#### Bathroom

#### Bedroom

#### DinningRoom

#### Kitchen

#### Laundry room

#### Livingroom

|
michiyasunaga/BioLinkBERT-large
|
michiyasunaga
| 2022-03-31T00:54:57Z
| 4,470
| 33
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"exbert",
"linkbert",
"biolinkbert",
"fill-mask",
"question-answering",
"text-classification",
"token-classification",
"en",
"dataset:pubmed",
"arxiv:2203.15827",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-08T06:20:38Z
|
---
license: apache-2.0
language: en
datasets:
- pubmed
tags:
- bert
- exbert
- linkbert
- biolinkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
widget:
- text: "Sunitinib is a tyrosine kinase inhibitor"
---
## BioLinkBERT-large
BioLinkBERT-large model pretrained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts along with citation link information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
This model achieves state-of-the-art performance on several biomedical NLP benchmarks such as [BLURB](https://microsoft.github.io/BLURB/) and [MedQA-USMLE](https://github.com/jind11/MedQA).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-large')
model = AutoModel.from_pretrained('michiyasunaga/BioLinkBERT-large')
inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
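For example, a classification head can be attached in the usual way (a sketch; the task and `num_labels` are placeholders, and the head is randomly initialized until fine-tuned):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/BioLinkBERT-large')
model = AutoModelForSequenceClassification.from_pretrained('michiyasunaga/BioLinkBERT-large', num_labels=2)
inputs = tokenizer("Sunitinib is a tyrosine kinase inhibitor", return_tensors="pt")
logits = model(**inputs).logits  # fine-tune before relying on these predictions
```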
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**Biomedical benchmarks ([BLURB](https://microsoft.github.io/BLURB/), [MedQA](https://github.com/jind11/MedQA), [MMLU](https://github.com/hendrycks/test), etc.):** BioLinkBERT attains new state-of-the-art.
| | BLURB score | PubMedQA | BioASQ | MedQA-USMLE |
| ---------------------- | -------- | -------- | ------- | -------- |
| PubmedBERT-base | 81.10 | 55.8 | 87.5 | 38.1 |
| **BioLinkBERT-base** | **83.39** | **70.2** | **91.4** | **40.0** |
| **BioLinkBERT-large** | **84.30** | **72.2** | **94.8** | **44.6** |
| | MMLU-professional medicine |
| ---------------------- | -------- |
| GPT-3 (175B params) | 38.7 |
| UnifiedQA (11B params) | 43.2 |
| **BioLinkBERT-large (340M params)** | **50.7** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
```
|
michiyasunaga/LinkBERT-base
|
michiyasunaga
| 2022-03-31T00:38:32Z
| 847
| 7
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"exbert",
"linkbert",
"fill-mask",
"question-answering",
"text-classification",
"token-classification",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2203.15827",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-08T07:21:51Z
|
---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
tags:
- bert
- exbert
- linkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
---
## LinkBERT-base
LinkBERT-base model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-base')
model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-base')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):**
| | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE |
| ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- |
| | F1 | F1 | F1 | F1 | F1 | F1 | Avg score |
| BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 |
| **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** |
| BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 |
| **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
```
|
hoangbinhmta99/wav2vec-NCKH-2022
|
hoangbinhmta99
| 2022-03-31T00:28:52Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"feature-extraction",
"audio",
"speech",
"Transformer",
"automatic-speech-recognition",
"vi",
"dataset:vivos",
"dataset:common_voice",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-30T04:39:46Z
|
---
language: vi
datasets:
- vivos
- common_voice
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- audio
- speech
- Transformer
license: cc-by-nc-4.0
model-index:
- name: Wav2vec2 NCKH Vietnamese 2022
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice vi
type: common_voice
args: vi
metrics:
- name: Test WER
type: wer
value: No
---
Convert a fairseq wav2vec 2.0 checkpoint (.pt) to a Transformers model.
Reference: https://huggingface.co/tommy19970714/wav2vec2-base-960h
Bash:
```bash
pip install transformers[sentencepiece]
pip install fairseq -U
git clone https://github.com/huggingface/transformers.git
cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt -O ./wav2vec_small.pt
mkdir dict
wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
mkdir outputs
python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
  --pytorch_dump_folder_path ./outputs \
  --checkpoint_path ./wav2vec_small.pt \
  --dict_path ./dict/dict.ltr.txt \
  --not_finetuned
```
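After the conversion, the dumped checkpoint can be loaded with Transformers. A minimal sketch, assuming the script above wrote a Transformers-format model to `./outputs` (with `--not_finetuned` there is no CTC head, so the checkpoint is loaded here as a plain feature encoder, and the feature-extractor settings are assumptions):
```python
import torch
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

model = Wav2Vec2Model.from_pretrained("./outputs")
feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000, do_normalize=True)

waveform = torch.zeros(16000)  # one second of silence as a stand-in for real 16 kHz audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state
```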
# Install and upload the model
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/hoangbinhmta99/wav2vec-demo
ls
cd wav2vec-demo/
git status
git config --global user.email [your email]
git config --global user.name [your name]
git add .
git commit -m "First model version"
git push
```
|
michiyasunaga/LinkBERT-large
|
michiyasunaga
| 2022-03-31T00:27:01Z
| 1,297
| 11
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"exbert",
"linkbert",
"fill-mask",
"question-answering",
"text-classification",
"token-classification",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:2203.15827",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-08T01:42:14Z
|
---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
tags:
- bert
- exbert
- linkbert
- feature-extraction
- fill-mask
- question-answering
- text-classification
- token-classification
---
## LinkBERT-large
LinkBERT-large model pretrained on English Wikipedia articles along with hyperlink information. It is introduced in the paper [LinkBERT: Pretraining Language Models with Document Links (ACL 2022)](https://arxiv.org/abs/2203.15827). The code and data are available in [this repository](https://github.com/michiyasunaga/LinkBERT).
## Model description
LinkBERT is a transformer encoder (BERT-like) model pretrained on a large corpus of documents. It is an improvement of BERT that newly captures **document links** such as hyperlinks and citation links to include knowledge that spans across multiple documents. Specifically, it was pretrained by feeding linked documents into the same language model context, besides a single document.
LinkBERT can be used as a drop-in replacement for BERT. It achieves better performance for general language understanding tasks (e.g. text classification), and is also particularly effective for **knowledge-intensive** tasks (e.g. question answering) and **cross-document** tasks (e.g. reading comprehension, document retrieval).
## Intended uses & limitations
The model can be used by fine-tuning on a downstream task, such as question answering, sequence classification, and token classification.
You can also use the raw model for feature extraction (i.e. obtaining embeddings for input text).
### How to use
To use the model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('michiyasunaga/LinkBERT-large')
model = AutoModel.from_pretrained('michiyasunaga/LinkBERT-large')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning, you can use [this repository](https://github.com/michiyasunaga/LinkBERT) or follow any other BERT fine-tuning codebases.
## Evaluation results
When fine-tuned on downstream tasks, LinkBERT achieves the following results.
**General benchmarks ([MRQA](https://github.com/mrqa/MRQA-Shared-Task-2019) and [GLUE](https://gluebenchmark.com/)):**
| | HotpotQA | TriviaQA | SearchQA | NaturalQ | NewsQA | SQuAD | GLUE |
| ---------------------- | -------- | -------- | -------- | -------- | ------ | ----- | -------- |
| | F1 | F1 | F1 | F1 | F1 | F1 | Avg score |
| BERT-base | 76.0 | 70.3 | 74.2 | 76.5 | 65.7 | 88.7 | 79.2 |
| **LinkBERT-base** | **78.2** | **73.9** | **76.8** | **78.3** | **69.3** | **90.1** | **79.6** |
| BERT-large | 78.1 | 73.7 | 78.3 | 79.0 | 70.9 | 91.1 | 80.7 |
| **LinkBERT-large** | **80.8** | **78.2** | **80.5** | **81.0** | **72.6** | **92.7** | **81.1** |
## Citation
If you find LinkBERT useful in your project, please cite the following:
```bibtex
@InProceedings{yasunaga2022linkbert,
author = {Michihiro Yasunaga and Jure Leskovec and Percy Liang},
title = {LinkBERT: Pretraining Language Models with Document Links},
year = {2022},
booktitle = {Association for Computational Linguistics (ACL)},
}
```
|
yinde/fatimah_fake_news_bert
|
yinde
| 2022-03-30T22:41:12Z
| 16
| 1
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T20:54:21Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fatimah_fake_news_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fatimah_fake_news_bert
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the Fake and Real News dataset from Kaggle.
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3298 | 0.06 | 200 | 0.0094 | 0.9987 |
| 0.0087 | 0.11 | 400 | 0.0091 | 0.9988 |
| 0.0126 | 0.17 | 600 | 0.0132 | 0.9965 |
| 0.0081 | 0.22 | 800 | 0.0100 | 0.9987 |
| 0.0132 | 0.28 | 1000 | 0.0086 | 0.9990 |
| 0.0131 | 0.33 | 1200 | 0.0070 | 0.9986 |
| 0.0086 | 0.39 | 1400 | 0.0079 | 0.9990 |
| 0.0041 | 0.45 | 1600 | 0.0057 | 0.9991 |
| 0.0069 | 0.5 | 1800 | 0.0083 | 0.9989 |
| 0.0052 | 0.56 | 2000 | 0.0043 | 0.9993 |
| 0.0 | 0.61 | 2200 | 0.0047 | 0.9993 |
| 0.003 | 0.67 | 2400 | 0.0052 | 0.9994 |
| 0.0126 | 0.72 | 2600 | 0.0028 | 0.9997 |
| 0.0047 | 0.78 | 2800 | 0.0018 | 0.9996 |
| 0.0 | 0.84 | 3000 | 0.0027 | 0.9996 |
| 0.0001 | 0.89 | 3200 | 0.0029 | 0.9996 |
| 0.0079 | 0.95 | 3400 | 0.0010 | 0.9998 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
UBC-NLP/MARBERTv2
|
UBC-NLP
| 2022-03-30T21:52:31Z
| 3,124
| 8
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"Arabic BERT",
"MSA",
"Twitter",
"Masked Langauge Model",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language:
- ar
tags:
- Arabic BERT
- MSA
- Twitter
- Masked Langauge Model
widget:
- text: "اللغة العربية هي لغة [MASK]."
---
<img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/>
**MARBERTv2** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://aclanthology.org/2021.acl-long.551.pdf)**.
We find that results with ARBERT and MARBERT on QA are not competitive, a clear discrepancy from what we have observed thus far on other tasks. We hypothesize this is because the two models are pre-trained with a sequence length of only 128, which does not allow them to sufficiently capture both a question and its likely answer within the same sequence window during pre-training.
To rectify this, we further pre-train the stronger model, MARBERT, on the same MSA data as ARBERT, in addition to the AraNews dataset, but with a bigger sequence length of 512 tokens for 40 epochs. We call this
further pre-trained model **MARBERTv2**, noting it has **29B tokens**. MARBERTv2 acquires the best performance on all but one test set, where XLM-R Large marginally outperforms us (only in F1).
For more information, please visit our own GitHub [repo](https://github.com/UBC-NLP/marbert).
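The checkpoint can be queried directly as a masked language model; a minimal sketch using the widget example above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UBC-NLP/MARBERTv2")
print(fill_mask("اللغة العربية هي لغة [MASK].")[:3])  # top-3 completions
```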
# BibTex
If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
mrm8488/biomedtra-small-es
|
mrm8488
| 2022-03-30T21:07:50Z
| 3
| 2
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"pretraining",
"Spanish",
"Electra",
"Bio",
"Medical",
"es",
"dataset:cowese",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
---
language: es
tags:
- Spanish
- Electra
- Bio
- Medical
datasets:
- cowese
---
## 🦠 BIOMEDtra 🏥
**BIOMEDtra** (small) is an ELECTRA-like model (a discriminator, in this case) trained on the [Spanish Biomedical Crawled Corpus](https://zenodo.org/record/5510033#.Yhdk1ZHMLJx).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Training details
The model was trained using the Electra base code for 3 days on 1 GPU (Tesla V100 16GB).
## Dataset details
The largest Spanish biomedical and health corpus to date was gathered with a massive crawl of the Spanish health domain: more than 3,000 URLs were downloaded and preprocessed. The collected data have been preprocessed to produce the **CoWeSe** (Corpus Web Salud Español) resource, a large-scale and high-quality corpus intended for biomedical and health NLP in Spanish.
## Model details ⚙
|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.9561|
|Precision| 0.808|
|Recall | 0.531 |
|AUC | 0.949|
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
```py
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch
discriminator = ElectraForPreTraining.from_pretrained("mrm8488/biomedtra-small-es")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/biomedtra-small-es")
sentence = "Los españoles tienden a sufir déficit de vitamina c"
fake_sentence = "Los españoles tienden a déficit sufrir de vitamina c"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % prediction, end="") for prediction in predictions.tolist()]
```
## Acknowledgments
TBA
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022biomedtra,
title={Spanish BioMedical Electra (small)},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/biomedtra-small-es}},
year={2022}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
mrm8488/legalectra-small-spanish
|
mrm8488
| 2022-03-30T21:06:31Z
| 41
| 3
|
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"Spanish",
"Electra",
"Legal",
"es",
"dataset:Spanish-legal-corpora",
"arxiv:1406.2661",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z
|
---
language: es
tags:
- Spanish
- Electra
- Legal
datasets:
- Spanish-legal-corpora
---
## LEGALECTRA ⚖️
**LEGALECTRA** (small) is an ELECTRA-like model (a discriminator, in this case) trained on [A collection of corpora of Spanish legal domain](https://zenodo.org/record/5495529#.YZItp3vMLJw).
As mentioned in the original [paper](https://openreview.net/pdf?id=r1xMH1BtvB):
**ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
For a detailed description and experimental results, please refer to the paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
## Training details
The model was trained using the Electra base code for 3 days on a single Tesla V100 (16GB) GPU.
## Model details ⚙
|Param| # Value|
|-----|--------|
|Layers| 12 |
|Hidden | 256 |
|Params| 14M |
## Evaluation metrics (for discriminator) 🧾
|Metric | # Score |
|-------|---------|
|Accuracy| 0.955|
|Precision| 0.790|
|AUC | 0.971|
## Benchmarks 🔨
WIP 🚧
## How to use the discriminator in `transformers`
TBA
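Until official usage instructions are published, here is a minimal sketch. It assumes the checkpoint loads as a standard ELECTRA discriminator, like the other Spanish ELECTRA models above; the legal-sounding example sentence is made up.
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

# Assumption: the checkpoint exposes the standard ELECTRA discriminator interface
discriminator = ElectraForPreTraining.from_pretrained("mrm8488/legalectra-small-spanish")
tokenizer = ElectraTokenizerFast.from_pretrained("mrm8488/legalectra-small-spanish")

# Hypothetical sentence containing one implausible ("fake") token
fake_sentence = "El acusado fue condenado a cinco nubes de prisión"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")

# One logit per position; positive values mean the token is flagged as replaced
logits = discriminator(fake_inputs)[0]
predictions = torch.round((torch.sign(logits) + 1) / 2)

# Drop the first and last positions (special tokens) to align with the printed tokens
[print("%10s" % token, end="") for token in fake_tokens]
print()
[print("%10s" % int(p), end="") for p in predictions.squeeze().tolist()[1:-1]]
```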
## Acknowledgments
TBA
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022legalectra,
title={Spanish Legal Electra (small)},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/legalectra-small-spanish}},
year={2022}
}
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
vlsb/autotrain-security-text-classification-albert-688320769
|
vlsb
| 2022-03-30T20:59:32Z
| 15
| 2
|
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain",
"unk",
"dataset:vlsb/autotrain-data-security-text-classification-albert",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T20:55:59Z
|
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vlsb/autotrain-data-security-text-classification-albert
co2_eq_emissions: 3.670416179055797
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 688320769
- CO2 Emissions (in grams): 3.670416179055797
## Validation Metrics
- Loss: 0.3046899139881134
- Accuracy: 0.8826530612244898
- Precision: 0.9181818181818182
- Recall: 0.8782608695652174
- AUC: 0.9423510466988727
- F1: 0.8977777777777778
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-text-classification-albert-688320769
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-text-classification-albert-688320769", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
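The snippet above stops at the raw `outputs`; to turn them into a predicted class one can, for example, apply a softmax over the logits and look the top index up in the model's label mapping. A minimal sketch, continuing from the code above:
```python
import torch

# Continues from the snippet above: `model` and `outputs` already exist
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], float(probs.max()))
```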
|
vlsb/autotrain-security-texts-classification-distilroberta-688220764
|
vlsb
| 2022-03-30T20:56:57Z
| 13
| 2
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"unk",
"dataset:vlsb/autotrain-data-security-texts-classification-distilroberta",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T20:54:56Z
|
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- vlsb/autotrain-data-security-texts-classification-distilroberta
co2_eq_emissions: 2.0817207656772445
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 688220764
- CO2 Emissions (in grams): 2.0817207656772445
## Validation Metrics
- Loss: 0.3055502772331238
- Accuracy: 0.9030612244897959
- Precision: 0.9528301886792453
- Recall: 0.8782608695652174
- AUC: 0.9439076757917337
- F1: 0.9140271493212669
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/vlsb/autotrain-security-texts-classification-distilroberta-688220764
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("vlsb/autotrain-security-texts-classification-distilroberta-688220764", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("vlsb/autotrain-security-texts-classification-distilroberta-688220764", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
mrm8488/longformer-base-4096-spanish
|
mrm8488
| 2022-03-30T20:36:36Z
| 49
| 16
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"Long documents",
"longformer",
"bertin",
"spanish",
"es",
"dataset:spanish_large_corpus",
"arxiv:2004.05150",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z
|
---
language:
- es
license: mit
widget:
- text: "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos."
tags:
- Long documents
- longformer
- bertin
- spanish
datasets:
- spanish_large_corpus
---
# longformer-base-4096-spanish
## [Longformer](https://arxiv.org/abs/2004.05150) is a Transformer model for long documents.
`longformer-base-4096` is a BERT-like model started from the RoBERTa checkpoint (**BERTIN** in this case) and pre-trained for *MLM* on long documents (from BETO's `all_wikis`). It supports sequences of length up to 4,096!
**Longformer** uses a combination of a sliding window (*local*) attention and *global* attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
This model was made following the research done by [Iz Beltagy and Matthew E. Peters and Arman Cohan](https://arxiv.org/abs/2004.05150).
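For a quick sanity check, the model can be queried through the `fill-mask` pipeline; a minimal sketch using the widget sentence from this card:
```python
from transformers import pipeline

# Loads the model and tokenizer from the Hub; sequences up to 4,096 tokens are supported
fill_mask = pipeline("fill-mask", model="mrm8488/longformer-base-4096-spanish")

for pred in fill_mask(
    "Manuel Romero ha creado con el equipo de BERTIN un modelo que procesa documentos <mask> largos."
):
    print(pred["token_str"], round(pred["score"], 3))
```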
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{mromero2022longformer-base-4096-spanish,
title={Spanish LongFormer by Manuel Romero},
author={Romero, Manuel},
publisher={Hugging Face},
journal={Hugging Face Hub},
howpublished={\url{https://huggingface.co/mrm8488/longformer-base-4096-spanish}},
year={2022}
}
```
|
misterekole/upside_down_detector
|
misterekole
| 2022-03-30T19:58:07Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-30T19:47:20Z
|
---
license: apache-2.0
---
Upside down detection model for Fatima Fellowship Coding Challenge 2022
|
horsbug98/Part_2_XLM_Model_E1
|
horsbug98
| 2022-03-30T18:29:46Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-16T17:32:47Z
|
---
license: mit
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: debug_xlm_task2_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debug_xlm_task2_1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa secondary_task dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
waboucay/camembert-base-finetuned-xnli_fr
|
waboucay
| 2022-03-30T17:47:05Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-11T08:54:07Z
|
---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on the `validation` and `test` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 89.2 | 87.6 |
| test | 88.9 | 87.4 |
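For inference, the checkpoint can be used as a regular sequence-classification model on premise/hypothesis pairs. A minimal sketch with a made-up French example (the label names come from the model's own configuration):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "waboucay/camembert-base-finetuned-xnli_fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Le film a reçu d'excellentes critiques."  # hypothetical example pair
hypothesis = "Le film a été bien accueilli."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
print(model.config.id2label[pred_id])
```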
|
manu/lilt-camembert-base
|
manu
| 2022-03-30T14:49:30Z
| 5
| 1
|
transformers
|
[
"transformers",
"pytorch",
"liltrobertalike",
"fill-mask",
"token-classification",
"fr",
"dataset:iit-cdip",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-28T13:16:58Z
|
---
language:
- fr
tags:
- token-classification
- fill-mask
license: mit
datasets:
- iit-cdip
---
This model combines the camembert-base model with the pretrained LiLT checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding".
Original repository: https://github.com/jpWang/LiLT
To use it, it is necessary to fork the modeling and configuration files from the original repository, and load the pretrained model from the corresponding classes (LiLTRobertaLikeConfig, LiLTRobertaLikeForRelationExtraction, LiLTRobertaLikeForTokenClassification, LiLTRobertaLikeModel).
They can also be preloaded with the AutoConfig/model factories as such:
```python
from transformers import AutoConfig, AutoModel, AutoModelForTokenClassification
from path_to_custom_classes import (
LiLTRobertaLikeConfig,
LiLTRobertaLikeForRelationExtraction,
LiLTRobertaLikeForTokenClassification,
LiLTRobertaLikeModel
)
def patch_transformers():
AutoConfig.register("liltrobertalike", LiLTRobertaLikeConfig)
AutoModel.register(LiLTRobertaLikeConfig, LiLTRobertaLikeModel)
AutoModelForTokenClassification.register(LiLTRobertaLikeConfig, LiLTRobertaLikeForTokenClassification)
# etc...
```
To load the model, it is then possible to use:
```python
# patch_transformers() must have been executed beforehand
from transformers import AutoTokenizer, AutoModel, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModel.from_pretrained("manu/lilt-camembert-base")
model = AutoModelForTokenClassification.from_pretrained("manu/lilt-camembert-base") # to be fine-tuned on a token classification task
```
|
GioReg/ita1
|
GioReg
| 2022-03-30T14:42:06Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-28T20:17:13Z
|
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ita1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ita1
This model is a fine-tuned version of [m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0](https://huggingface.co/m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5892
- Accuracy: 0.776
- F1: 0.5912
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
javilonso/classificationEsp1_Attraction
|
javilonso
| 2022-03-30T13:25:38Z
| 6
| 0
|
transformers
|
[
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-23T15:27:21Z
|
---
tags:
- generated_from_keras_callback
model-index:
- name: classificationEsp1_Attraction
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# classificationEsp1_Attraction
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
shalpin87/dialoGPT-homer-simpson
|
shalpin87
| 2022-03-30T13:06:45Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"arxiv:1911.00536",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-29T20:28:40Z
|
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## dialogGPT-homer-simpson
This model has been fine-tuned on the complete scripts of Homer Simpson from the T.V. show The Simpsons.
In single-turn conversation it gives some nice answers, seemingly straight from Homer's brain in the Simpsons universe, letting you chat with Homer Simpson.
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multi-turn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the responses generated by DialoGPT are comparable to human responses in quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Who are you? |
| HomerBot | Homer Simpson .|
|User | What is your favorite Restaurant ? |
| HomerBot | Moes Tavern. |
|User | Have you ever been in a band?! |
| HomerBot | no. |
Please find information about preprocessing, training and the full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use Multi-Turn
#### NOTE: Multi-turn generation seems to be broken; after a few exchanges the output will mostly be exclamation marks.
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson")
model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last ouput tokens from bot
print("DialoG-PT-HomerBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
### How to use Single Turn
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("shalpin87/dialoGPT-homer-simpson")
model = AutoModelForCausalLM.from_pretrained("shalpin87/dialoGPT-homer-simpson")
questions = [
"What is your name?",
"Who are you?",
"Where do you work?",
"Who really killed Mr Burns?",
"Have you ever stolen from the Kwik-E-Mart?",
"Did you kill Frank Grimes?",
"Who was the worst member of the Be Sharps?",
"Hey where did Barney go?",
"What is your favorite bar to have a beer?",
"What is the best beer in Springfield?",
"Is Bart working for the Mob?",
"I think there was an incident in sector 7 G",
"Is Ned Flanders house okay?",
"Oh my god it's Sideshow Bob",
"What is a Flaming Moe?",
"What is happening to Apu?",
"Who quit the band?"
]
# Let's chat
for step in range(len(questions)):
print("***************************************")
print("Q. {}".format(questions[step]))
new_user_input_ids = tokenizer.encode(questions[step] + tokenizer.eos_token, return_tensors='pt')
# new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
bot_input_ids = new_user_input_ids
num_return_seqs=1
chat_history_ids = model.generate(
bot_input_ids,
max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=200,
top_p=0.55,
temperature = 0.85,
num_return_sequences=num_return_seqs
)
botname = "HomerBot"
for i in range(0,num_return_seqs):
print("{}:{}: {}".format(i, botname, tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][i], skip_special_tokens=True)))
chat_history_ids = []
```
Sample Output
```
***************************************
Q. What is your name?
0:HomerBot: Homer Simpson.
***************************************
Q. Who are you?
0:HomerBot: Homer Simpson.
***************************************
Q. Where do you work?
0:HomerBot: I work at a nuclear power plant.
***************************************
Q. Who really killed Mr Burns?
0:HomerBot: I dunno. Some guy named Ot-to.
***************************************
Q. Have you ever stolen from the Kwik-E-Mart?
0:HomerBot: No.
***************************************
Q. Did you kill Frank Grimes?
0:HomerBot: No.
***************************************
Q. Who was the worst member of the Be Sharps?
0:HomerBot: Homer Simpson, sir.
***************************************
Q. Hey where did Barney go?
0:HomerBot: He's been eating that thing for a week.
***************************************
Q. What is your favorite bar to have a beer?
0:HomerBot: The Flanders' Club.
***************************************
Q. What is the best beer in Springfield?
0:HomerBot: Pepi!
***************************************
Q. Is Bart working for the Mob?
0:HomerBot: I think he's trying to tell us something.
***************************************
Q. I think there was an incident in sector 7 G
0:HomerBot: Uh, let's not jump to any conclusions. The dog didn't run away.
```
|
dennishauser/distilbert-base-uncased-finetuned-emotion
|
dennishauser
| 2022-03-30T12:23:53Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T13:57:42Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2128
- Accuracy: 0.7597
- F1: 0.6574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.3846 | 1.0 | 243 | 1.2627 | 0.7598 | 0.6561 |
| 1.0463 | 2.0 | 486 | 1.2128 | 0.7597 | 0.6574 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
peterhsu/distilbert-base-uncased-finetuned-squad-d5716d28
|
peterhsu
| 2022-03-30T12:22:49Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-29T09:35:05Z
|
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
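For reference, the `squad` metric expects predictions and gold answers keyed by example id; a minimal sketch with a made-up example:
```python
from datasets import load_metric

squad_metric = load_metric("squad")

# Hypothetical single-example evaluation
predictions = [{"id": "example-1", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "example-1",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]

# Exact match on the only example, so both scores are 100.0
print(squad_metric.compute(predictions=predictions, references=references))
```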
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mimicheng/codeparrot-ds-sample-2ep-29mar
|
mimicheng
| 2022-03-30T09:50:15Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-30T03:41:46Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds-sample-2ep-29mar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-2ep-29mar
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6283
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2585 | 1.86 | 5000 | 1.6283 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.2+cpu
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Peltarion/xlm-roberta-longformer-base-4096
|
Peltarion
| 2022-03-30T09:23:58Z
| 75
| 8
|
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"longformer",
"multilingual",
"dataset:wikitext",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z
|
---
tags:
- longformer
language: multilingual
license: apache-2.0
datasets:
- wikitext
---
## XLM-R Longformer Model
XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths of up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus.
The goal was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model is the result of a master's thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r).
Since both XLM-R and Longformer are large models, it is recommended to run them with NVIDIA Apex (16-bit precision), a large GPU and several gradient accumulation steps.
## How to Use
The model can be fine-tuned on a downstream task in the usual way, for instance question answering (QA):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
MAX_SEQUENCE_LENGTH = 4096
MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
padding="max_length",
truncation=True,
)
model = AutoModelForQuestionAnswering.from_pretrained(
MODEL_NAME_OR_PATH,
max_length=MAX_SEQUENCE_LENGTH,
)
```
## Training Procedure
The model has been trained on the WikiText-103 corpus using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations, which took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and the [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information.
```sh
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
unzip wikitext-103-raw-v1.zip
export DATA_DIR=./wikitext-103-raw
scripts/run_long_lm.py \
--model_name_or_path xlm-roberta-base \
--model_name xlm-roberta-to-longformer \
--output_dir ./output \
--logging_dir ./logs \
--val_file_path $DATA_DIR/wiki.valid.raw \
--train_file_path $DATA_DIR/wiki.train.raw \
--seed 42 \
--max_pos 4096 \
--adam_epsilon 1e-8 \
--warmup_steps 500 \
--learning_rate 3e-5 \
--weight_decay 0.01 \
--max_steps 6000 \
--evaluate_during_training \
--logging_steps 50 \
--eval_steps 50 \
--save_steps 6000 \
--max_grad_norm 1.0 \
--per_device_eval_batch_size 2 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 64 \
--overwrite_output_dir \
--fp16 \
--do_train \
--do_eval
```
|
Aureliano/electra-if
|
Aureliano
| 2022-03-30T09:07:27Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tf",
"electra",
"feature-extraction",
"en",
"arxiv:1406.2661",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-11T15:40:21Z
|
---
language: en
license: apache-2.0
---
## ELECTRA for IF
**ELECTRA** is a method for self-supervised language representation learning. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf).
For a detailed description and experimental results, please refer to the original paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
This repository contains a small ELECTRA discriminator finetuned on a corpus of interactive fiction commands labelled with the WordNet synset offset of the verb in the sentence. The original dataset was collected from the lists of actions in the walkthroughs of the games included in the [Jericho](https://github.com/microsoft/jericho) framework and manually annotated. For more information visit https://github.com/aporporato/electra and https://github.com/aporporato/jericho-corpora.
## How to use the discriminator in `transformers`
(Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
```python
import math
import numpy as np
import tensorflow as tf
from datasets import load_metric, Dataset, DatasetDict
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, create_optimizer
from transformers.keras_callbacks import KerasMetricCallback
# This example shows how this model can be used;
# you should fine-tune the model on your own corpus of commands, larger than this one
dict_train = {
"idx": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18",
"19", "20"],
"sentence": ["e", "get pen", "drop book", "x paper", "i", "south", "get paper", "drop the pen", "x book",
"inventory", "n", "get the book", "drop paper", "look at Pen", "inv", "g", "s", "get sandwich",
"drop sandwich", "x sandwich", "agin"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04",
"drop.v.01", "examine.v.02", "inventory.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02",
"inventory.v.01", "repeat.v.01", "travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "repeat.v.01"]
}
dict_val = {
"idx": ["0", "1", "2", "3", "4", "5"],
"sentence": ["w", "get shield", "drop sword", "x spikes", "i", "repeat"],
"label": ["travel.v.01", "take.v.04", "drop.v.01", "examine.v.02", "inventory.v.01", "repeat.v.01"]
}
raw_train_dataset = Dataset.from_dict(dict_train)
raw_val_dataset = Dataset.from_dict(dict_val)
raw_dataset = DatasetDict()
raw_dataset["train"] = raw_train_dataset
raw_dataset["val"] = raw_val_dataset
raw_dataset = raw_dataset.class_encode_column("label")
print(raw_dataset)
print(raw_dataset["train"].features)
print(raw_dataset["val"].features)
print(raw_dataset["train"][1])
label2id = {}
id2label = {}
for i, l in enumerate(raw_dataset["train"].features["label"].names):
label2id[l] = i
id2label[i] = l
discriminator = TFAutoModelForSequenceClassification.from_pretrained("Aureliano/electra-if",
label2id=label2id,
id2label=id2label)
tokenizer = AutoTokenizer.from_pretrained("Aureliano/electra-if")
tokenize_function = lambda example: tokenizer(example["sentence"], truncation=True)
pre_tokenizer_columns = set(raw_dataset["train"].features)
encoded_dataset = raw_dataset.map(tokenize_function, batched=True)
tokenizer_columns = list(set(encoded_dataset["train"].features) - pre_tokenizer_columns)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
batch_size = len(encoded_dataset["train"])
tf_train_dataset = encoded_dataset["train"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
tf_validation_dataset = encoded_dataset["val"].to_tf_dataset(
columns=tokenizer_columns,
label_cols=["labels"],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
num_epochs = 25
batches_per_epoch = math.ceil(len(encoded_dataset["train"]) / batch_size)
total_train_steps = int(batches_per_epoch * num_epochs)
optimizer, schedule = create_optimizer(
init_lr=5e-5, num_warmup_steps=total_train_steps // 5, num_train_steps=total_train_steps
)
metric = load_metric("accuracy")
def compute_metrics(eval_predictions):
logits, labels = eval_predictions
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_dataset)
callbacks = [metric_callback]
discriminator.compile(optimizer=optimizer, loss=loss, metrics=["sparse_categorical_accuracy"])
discriminator.fit(
tf_train_dataset,
epochs=num_epochs,
validation_data=tf_validation_dataset,
callbacks=callbacks
)
print("Evaluate on test data")
results = discriminator.evaluate(tf_validation_dataset)
print("test loss, test acc:", results)
text = "i"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'inventory.v.01' (-> "make or include in an itemized record or report"), but probably only with a better finetuning dataset
text = "get lamp"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'take.v.04' (-> "get into one's hands, take physically"), but probably only with a better finetuning dataset
text = "w"
encoded_input = tokenizer(text, return_tensors='tf')
output = discriminator(encoded_input)
prediction = tf.nn.softmax(output["logits"][0], -1)
label = id2label[tf.math.argmax(prediction).numpy()]
print("\n", text, ":", label,
"\n") # ideally 'travel.v.01' (-> "change location; move, travel, or proceed, also metaphorically"), but probably only with a better finetuning dataset
```
|
loulou/distilbert-base-uncased-finetuned-emotion
|
loulou
| 2022-03-30T04:57:58Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-22T04:55:48Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9221931901873676
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2285
- Accuracy: 0.922
- F1: 0.9222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8366 | 1.0 | 250 | 0.3212 | 0.9025 | 0.8990 |
| 0.2588 | 2.0 | 500 | 0.2285 | 0.922 | 0.9222 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
lazyturtl/roomidentifier
|
lazyturtl
| 2022-03-30T04:10:41Z
| 89
| 3
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-30T04:10:32Z
|
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: roomidentifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9375
---
# roomidentifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
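A minimal inference sketch (the image path is a placeholder for any local photo of a room):
```python
from transformers import pipeline

room_classifier = pipeline("image-classification", model="lazyturtl/roomidentifier")

# "my_room.jpg" is a hypothetical local file
for pred in room_classifier("my_room.jpg"):
    print(pred["label"], round(pred["score"], 3))
```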
## Example Images
#### Bathroom

#### Bedroom

#### DinningRoom

#### Kitchen

#### LivingRoom

|
samayash/finetuning-financial-news-sentiment
|
samayash
| 2022-03-30T03:36:40Z
| 4
| 3
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-30T03:27:02Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-financial-news-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-financial-news-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3345
- Accuracy: 0.8751
- F1: 0.8751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio
|
scasutt
| 2022-03-30T03:35:01Z
| 3
| 0
|
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-29T11:30:40Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53_toy_train_data_masked_audio
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_masked_audio
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6445
- Wer: 0.4938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3761 | 1.05 | 250 | 3.4022 | 0.9954 |
| 3.0858 | 2.1 | 500 | 3.4684 | 0.9954 |
| 2.6302 | 3.15 | 750 | 1.7989 | 0.9865 |
| 1.1292 | 4.2 | 1000 | 0.8558 | 0.7355 |
| 0.8371 | 5.25 | 1250 | 0.7319 | 0.6621 |
| 0.5992 | 6.3 | 1500 | 0.6848 | 0.6147 |
| 0.5189 | 7.35 | 1750 | 0.6522 | 0.5742 |
| 0.454 | 8.4 | 2000 | 0.6601 | 0.5531 |
| 0.3896 | 9.45 | 2250 | 0.6138 | 0.5439 |
| 0.3678 | 10.5 | 2500 | 0.6436 | 0.5320 |
| 0.3232 | 11.55 | 2750 | 0.5920 | 0.5174 |
| 0.2926 | 12.6 | 3000 | 0.6615 | 0.5107 |
| 0.3041 | 13.65 | 3250 | 0.6311 | 0.5015 |
| 0.2882 | 14.7 | 3500 | 0.6182 | 0.5004 |
| 0.2868 | 15.75 | 3750 | 0.6266 | 0.4943 |
| 0.2508 | 16.81 | 4000 | 0.6587 | 0.4965 |
| 0.2563 | 17.86 | 4250 | 0.6634 | 0.4939 |
| 0.2213 | 18.91 | 4500 | 0.6441 | 0.4925 |
| 0.2255 | 19.96 | 4750 | 0.6445 | 0.4938 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
efederici/sentence-it5-base
|
efederici
| 2022-03-29T23:09:01Z
| 35
| 4
|
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"t5",
"feature-extraction",
"sentence-similarity",
"transformers",
"it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-29T19:57:59Z
|
---
pipeline_tag: sentence-similarity
language:
- it
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# sentence-IT5-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a T5 ([IT5](https://huggingface.co/gsarti/it5-base)) base model. It is trained on a dataset made from question/context pairs ([squad-it](https://github.com/crux82/squad-it)), tags/news-article pairs, headline/text pairs ([change-it](https://huggingface.co/datasets/gsarti/change_it)) and on [stsb](https://huggingface.co/datasets/stsb_multi_mt/viewer/it/train).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
model = SentenceTransformer('efederici/sentence-IT5-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Questo è un esempio di frase", "Questo è un ulteriore esempio"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('efederici/sentence-IT5-base')
model = AutoModel.from_pretrained('efederici/sentence-IT5-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': None, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
|
espnet/bur_openslr80_hubert
|
espnet
| 2022-03-29T22:19:50Z
| 0
| 0
| null |
[
"region:us"
] | null | 2022-03-28T22:04:54Z
|
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 21 22:59:35 UTC 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `7ae4efd81778436a98b822483e8123adba6aa430`
- Commit date: `Tue Mar 15 20:11:18 2022 -0400`
## asr_train_asr_hubert_transformer_adam_specaug_raw_bpe150
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|4227|39.1|50.4|10.5|6.1|67.0|99.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|33345|82.2|7.6|10.1|3.6|21.4|99.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_batch_size1_lm_lm_train_lm_bpe150_valid.loss.ave_asr_model_valid.acc.best/bur_test|480|18237|70.7|17.7|11.6|2.5|31.8|99.8|
|
krinal214/augmented
|
krinal214
| 2022-03-29T16:58:16Z
| 6
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-29T15:02:50Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: augmented
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# augmented
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5104
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0609 | 1.0 | 9787 | 0.5104 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
gabitoo1234/autotrain-mut_all_text-680820343
|
gabitoo1234
| 2022-03-29T16:09:31Z
| 5
| 0
|
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"es",
"dataset:gabitoo1234/autotrain-data-mut_all_text",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-29T14:22:14Z
|
---
tags: autotrain
language: es
widget:
- text: "I love AutoTrain 🤗"
datasets:
- gabitoo1234/autotrain-data-mut_all_text
co2_eq_emissions: 115.48848403681228
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 680820343
- CO2 Emissions (in grams): 115.48848403681228
## Validation Metrics
- Loss: 0.3041240870952606
- Accuracy: 0.9462770369425126
- Macro F1: 0.7836898686625933
- Micro F1: 0.9462770369425126
- Weighted F1: 0.9449148298990091
- Macro Precision: 0.8344505891491089
- Micro Precision: 0.9462770369425126
- Weighted Precision: 0.9451247372908952
- Macro Recall: 0.7568785255994025
- Micro Recall: 0.9462770369425126
- Weighted Recall: 0.9462770369425126
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/gabitoo1234/autotrain-mut_all_text-680820343
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("gabitoo1234/autotrain-mut_all_text-680820343", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|