modelId (string, 4-112 chars) | sha (string, 40 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38 chars) | config (null) | id (string, 4-112 chars) | downloads (float64, 0-36.8M) | likes (float64, 0-712) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
armgabrielyan/video-summarization | e487ff551cb932519b3608acbed9c9c1beb00b9e | 2022-05-22T07:18:33.000Z | [
"pytorch",
"vision-encoder-decoder",
"transformers"
]
| null | false | armgabrielyan | null | armgabrielyan/video-summarization | 11 | 1 | transformers | 11,300 | Entry not found |
Nanatan/distilbert-base-uncased-finetuned-emotion | 5de7035e1f0a2013e92e91df1871e51c0d29d24e | 2022-05-22T21:34:43.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Nanatan | null | Nanatan/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,301 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215313247415522
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.9215
- F1: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them to `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
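These settings correspond roughly to the following Hugging Face `TrainingArguments` (a minimal sketch, not the card author's original script; the `output_dir` is a hypothetical placeholder):
```python
from transformers import TrainingArguments

# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```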
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.798 | 1.0 | 250 | 0.3098 | 0.899 | 0.8956 |
| 0.2422 | 2.0 | 500 | 0.2169 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
wonscha/my-awesome-model | 7ab8a634d6a41e49e00437f6cf4fcf789e2baa9c | 2022-05-23T04:44:48.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:yelp_review_full",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | wonscha | null | wonscha/my-awesome-model | 11 | null | transformers | 11,302 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: my-awesome-model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.559
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5680
- Accuracy: 0.559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.1345 | 0.523 |
| No log | 2.0 | 250 | 1.5381 | 0.539 |
| No log | 3.0 | 375 | 1.5680 | 0.559 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.10.3
|
jimypbr/t5-base-test | 5656783dfb89fcaf48162bb9158f0b5ad436fbb9 | 2022-05-25T12:02:55.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | jimypbr | null | jimypbr/t5-base-test | 11 | null | transformers | 11,303 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: t5-base-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-summarization
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the cnn_dailymail 3.0.0 dataset.
## Model description
More information needed
## Intended uses & limitations
This is a work in progress. Please don't use these weights. :)
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 256
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 5.0
- training precision: Mixed Precision
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/t5-small-finetuned-qgsquad-qgen | 756ac4db6a4cf80974046f6080c6e6f0ee47be5a | 2022-05-24T17:20:27.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:qg_squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | mrm8488 | null | mrm8488/t5-small-finetuned-qgsquad-qgen | 11 | null | transformers | 11,304 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- qg_squad
model-index:
- name: t5-small-finetuned-qgsquad-qgen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-qgsquad-qgen
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the qg_squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4039
- Rouge4 Precision: 0.0931
- Rouge4 Recall: 0.0834
- Rouge4 Fmeasure: 0.0843
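For illustration, a minimal usage sketch (not part of the original card; the exact input format expected for answer-aware question generation, e.g. highlight tokens, is not documented here, so treat the plain-text prompt as an assumption):
```python
from transformers import pipeline

# Hedged sketch: generate a question from a passage with the text2text pipeline.
qg = pipeline("text2text-generation", model="mrm8488/t5-small-finetuned-qgsquad-qgen")
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
print(qg(context, max_length=64))
```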
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge4 Precision | Rouge4 Recall | Rouge4 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.4325 | 1.0 | 4733 | 0.3960 | 0.0984 | 0.0867 | 0.0889 |
| 0.4137 | 2.0 | 9466 | 0.3863 | 0.1061 | 0.0946 | 0.0963 |
| 0.3914 | 3.0 | 14199 | 0.3806 | 0.1051 | 0.0938 | 0.0955 |
| 0.3946 | 4.0 | 18932 | 0.3786 | 0.1084 | 0.097 | 0.0986 |
| 0.3857 | 5.0 | 23665 | 0.3784 | 0.1101 | 0.0991 | 0.1007 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Yah216/Arabic_poem_meter_classification | 01776dfc0299695a5041b480212b0ce2f388b9e2 | 2022-05-26T17:45:51.000Z | [
"pytorch",
"bert",
"text-classification",
"ar",
"transformers"
]
| text-classification | false | Yah216 | null | Yah216/Arabic_poem_meter_classification | 11 | null | transformers | 11,305 | ---
language: ar
widget:
- text: "ููุง ูุจู ู
ู ุฐููุฑู ุญุจูุจ ูู
ูุฒูู ุจุณููุทู ุงูููููู ุจููู ุงูุฏููุฎูู ูุญูููู
ูู"
- text: "ุณููู ูููุจู ุบูุฏุงุฉู ุณููุง ููุซุงุจุง ููุนูููู ุนููู ุงูุฌูู
ุงูู ูููู ุนูุชุงุจุง"
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 913229914
- CO2 Emissions (in grams): 1.8892280988467902
## Validation Metrics
- Loss: 1.0592747926712036
- Accuracy: 0.6535535147098981
- Macro F1: 0.46508274468173677
- Micro F1: 0.6535535147098981
- Weighted F1: 0.6452975497424681
- Macro Precision: 0.6288501119526966
- Micro Precision: 0.6535535147098981
- Weighted Precision: 0.6818087199275457
- Macro Recall: 0.3910156950920188
- Micro Recall: 0.6535535147098981
- Weighted Recall: 0.6535535147098981
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Yah216/autotrain-poem_meter_classification-913229914
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Yah216/autotrain-poem_meter_classification-913229914", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yah216/autotrain-poem_meter_classification-913229914", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
chrisvinsen/xlsr-wav2vec2-final-1-lm-2 | 66f307d2a59a857523d9db682a61e537346869a0 | 2022-06-01T22:29:23.000Z | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | false | chrisvinsen | null | chrisvinsen/xlsr-wav2vec2-final-1-lm-2 | 11 | null | transformers | 11,306 | Indonli dataset --> Train + Validation + Test
WER : 0.216
WER with LM: 0.151 |
Jrico1981/sentiment-classification | f2648625e3255ef2f972bd646cb889effe030396 | 2022-05-28T14:32:09.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jrico1981 | null | Jrico1981/sentiment-classification | 11 | null | transformers | 11,307 | welcome to my sentiment classification model
model trained with the bert-base-uncased base to classify the sentiment of customers who respond to the satisfaction survey. The sentiments that it classifies are positive (1) and negative (0). |
GioReg/bertMULTINEGsentiment | 2734d1d2d3da1af872cd858f788455da3cb01586 | 2022-05-29T13:05:08.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | GioReg | null | GioReg/bertMULTINEGsentiment | 11 | null | transformers | 11,308 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bertMULTINEGsentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertMULTINEGsentiment
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
wanghao2023/uganda-labor-market-interview-text-classification | ccd60f95571696614d565998bab5a3be0f7e71b5 | 2022-05-29T23:26:31.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"transformers",
"license:mit"
]
| text-classification | false | wanghao2023 | null | wanghao2023/uganda-labor-market-interview-text-classification | 11 | null | transformers | 11,309 | ---
language: en
license: mit
---
# Uganda Labor Market Interview Text Classification
This model is a fine-tuned [Roberta base model](https://huggingface.co/roberta-base) using text transcripts of interviews between Vocational Training Institutes (VTI) students and their successful alumni in Uganda on the subject of the labor market.
## Model description
There are 6 categories in total. In the training data, a sentence can get classified as more than one topic. I classify a sentence using the following criteria:
info: information about the job market, working conditions, salaries, and what to expect at work, as well as the alumnus's and student's current situation in the job market, career plans, and past experience. Note that if the alumnus mentions using strategies in her/his experience, I also classify the sentence as a strategy.
tip: tips on how to behave and improve oneself while at work. The majority of tips involve being disciplined, humble, treating colleagues and clients well so that you can learn, and not getting involved in illegal activities. If the alumnus mentions that doing so increases the chance of getting jobs, I also classify the sentence as a strategy.
strategy: tips that give students a better chance of getting hired or getting a better job, including how to search for companies, what kinds of companies to apply to, how to write and submit applications, when and how many companies to apply to, how to behave during interviews, how to get jobs through different channels, and how to make and maintain connections, as well as general advice on improving job-related abilities. Also tips for starting your own business, including saving for capital, finding locations, business models, purchasing equipment, and attracting and treating clients.
motivation: general advice on being confident, patient, persistent, engaged, optimistic, etc. in the job market. Note that if the alumnus gives this advice in a particular context, for example "during an interview you need to show that you are a patient person," or "when doing your work you need to be patient," I also classify these sentences as strategy and tip respectively.
referral: referring students to companies or individuals, or affirmative answers to the student's request for a connection.
neutral: introductions, exchanging contacts, purely technical content, conversations about school or exams that are not related to getting jobs, miscellaneous conversations that do not belong to the five topics above, and sentences whose meaning is unclear due to limited language proficiency or translation issues.
### How to use
You can use this model directly with a pipeline for text classification:
```python
>>> from transformers import pipeline
>>> pipe = pipeline("text-classification", model= "wanghao2023/uganda-labor-market-interview-text-classification", tokenizer = "wanghao2023/uganda-labor-market-interview-text-classification", return_all_scores = True)
>>> pipe("if they think you know too much, they won't teach you.")
[[{'label': 'is_info', 'score': 0.18128268420696259},
{'label': 'is_tip', 'score': 0.5684323310852051},
{'label': 'is_strategy', 'score': 0.22818608582019806},
{'label': 'is_motivation', 'score': 0.03250108286738396},
{'label': 'is_neutral', 'score': 0.05972086638212204},
{'label': 'is_referral', 'score': 0.013502764515578747}]]
```
### Limitations and bias
The classification of a sentence depends heavily on its context. For example, "be patient" can be classified as tip and/or strategy and/or motivation depending on the occasion on which the alumna asks the student to be patient. If the alumna asks the student to be patient during an interview, it's strategy; if the alumna asks the student to be patient while at work, it's tip; if no specific context is given, it's motivation.
## Evaluation results
This model achieves the following results when tested on the validation dataset (multilabel, threshold = 0.3). There is huge room for improvement, but it performs much better than a dice roll at least:
| F1 | Roc Auc | Accuracy |
|:----:|:----:|:----:|
| 0.655779 | 0.799979 | 0.552670 | |
sahn/distilbert-base-uncased-finetuned-imdb | 5fcff2b437b90a0501b9adbddf451d0b26f03bf0 | 2022-05-30T04:41:23.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | sahn | null | sahn/distilbert-base-uncased-finetuned-imdb | 11 | null | transformers | 11,310 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2214
- Accuracy: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2435 | 1.0 | 1250 | 0.2186 | 0.917 |
| 0.1495 | 2.0 | 2500 | 0.2214 | 0.9294 |
| 0.0829 | 3.0 | 3750 | 0.4892 | 0.8918 |
| 0.0472 | 4.0 | 5000 | 0.5189 | 0.8976 |
| 0.0268 | 5.0 | 6250 | 0.5478 | 0.8996 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
apple/mobilevit-x-small | 1f463474d8900c5cddd0129044b8b31cc2a7e511 | 2022-06-02T10:55:15.000Z | [
"pytorch",
"coreml",
"mobilevit",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.02178",
"transformers",
"vision",
"license:other"
]
| image-classification | false | apple | null | apple/mobilevit-x-small | 11 | null | transformers | 11,311 | ---
license: other
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# MobileViT (extra small-sized model)
MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE).
Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-x-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-x-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes.
## Training procedure
### Preprocessing
Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping.
To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320).
At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256.
Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
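For reference, a minimal sketch of the described inference-time preprocessing done by hand with torchvision (an assumption for illustration; in practice the `MobileViTFeatureExtractor` shown above already applies these steps):
```python
from PIL import Image
from torchvision import transforms

# Resize to 288x288, center-crop to 256x256, scale pixels to [0, 1].
preprocess = transforms.Compose([
    transforms.Resize((288, 288)),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical local image
pixel_values = preprocess(image)
pixel_values = pixel_values.flip(0).unsqueeze(0)  # RGB -> BGR channel order, add batch dim
```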
### Pretraining
The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
## Evaluation results
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL |
|------------------|-------------------------|-------------------------|-----------|-------------------------------------------------|
| MobileViT-XXS | 69.0 | 88.9 | 1.3 M | https://huggingface.co/apple/mobilevit-xx-small |
| **MobileViT-XS** | **74.8** | **92.3** | **2.3 M** | https://huggingface.co/apple/mobilevit-x-small |
| MobileViT-S | 78.4 | 94.1 | 5.6 M | https://huggingface.co/apple/mobilevit-small |
### BibTeX entry and citation info
```bibtex
@inproceedings{vision-transformer,
title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
author = {Sachin Mehta and Mohammad Rastegari},
year = {2022},
URL = {https://arxiv.org/abs/2110.02178}
}
```
|
Yuliya-HV/distilbert-base-uncased-finetuned-emotion-tweets | ea5d8e84192fd6f99b9917bc9d0c56de71f6f77f | 2022-05-30T18:39:27.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Yuliya-HV | null | Yuliya-HV/distilbert-base-uncased-finetuned-emotion-tweets | 11 | null | transformers | 11,312 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-tweets
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9355
- name: F1
type: f1
value: 0.9358599960917737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-tweets
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1572
- Accuracy: 0.9355
- F1: 0.9359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.1672 | 0.932 | 0.9320 |
| No log | 2.0 | 500 | 0.1572 | 0.9355 | 0.9359 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
RANG012/SENATOR-Scaled | e4fa97a447462dacff3f3a5ebc2a9d05d374bb2c | 2022-06-01T08:10:34.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | RANG012 | null | RANG012/SENATOR-Scaled | 11 | null | transformers | 11,313 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: SENATOR-Scaled
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.89
- name: F1
type: f1
value: 0.8897795591182365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SENATOR-Scaled
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2670
- Accuracy: 0.89
- F1: 0.8898
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Abderrahim2/bert-finetuned-Age | 205ca2b61761091787d472a1021a385e45f43924 | 2022-06-02T16:37:58.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | Abderrahim2 | null | Abderrahim2/bert-finetuned-Age | 11 | 1 | transformers | 11,314 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-Age
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-Age
This model is a fine-tuned version of [dbmdz/bert-base-french-europeana-cased](https://huggingface.co/dbmdz/bert-base-french-europeana-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4642
- F1: 0.7254
- Roc Auc: 0.7940
- Accuracy: 0.7249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.4564 | 1.0 | 965 | 0.4642 | 0.7254 | 0.7940 | 0.7254 |
| 0.4443 | 2.0 | 1930 | 0.4662 | 0.7254 | 0.7940 | 0.7254 |
| 0.4388 | 3.0 | 2895 | 0.4628 | 0.7254 | 0.7940 | 0.7254 |
| 0.4486 | 4.0 | 3860 | 0.4642 | 0.7254 | 0.7940 | 0.7249 |
| 0.4287 | 5.0 | 4825 | 0.4958 | 0.7214 | 0.7907 | 0.7150 |
| 0.4055 | 6.0 | 5790 | 0.5325 | 0.6961 | 0.7715 | 0.6782 |
| 0.3514 | 7.0 | 6755 | 0.5588 | 0.6586 | 0.7443 | 0.6223 |
| 0.3227 | 8.0 | 7720 | 0.5944 | 0.6625 | 0.7470 | 0.6295 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jeevesh8/init_bert_ft_qqp-58 | adb129bcb03fe426a47c15db5171aac20fa634f8 | 2022-06-02T12:39:36.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-58 | 11 | null | transformers | 11,315 | Entry not found |
Jeevesh8/init_bert_ft_qqp-56 | 5d6a7eb9d10e20374bfc95d431e63ef6d99965c1 | 2022-06-02T12:39:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-56 | 11 | null | transformers | 11,316 | Entry not found |
Jeevesh8/init_bert_ft_qqp-51 | 5336d616552fb1360f6e1ef4421fad176c787a95 | 2022-06-02T12:39:27.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/init_bert_ft_qqp-51 | 11 | null | transformers | 11,317 | Entry not found |
yanekyuk/berturk-128k-keyword-discriminator | 37f253be49edbe087d827ffa0b58eced6fe8cf13 | 2022-06-05T12:54:08.000Z | [
"pytorch",
"bert",
"token-classification",
"tr",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | yanekyuk | null | yanekyuk/berturk-128k-keyword-discriminator | 11 | null | transformers | 11,318 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
language:
- tr
widget:
- text: "ฤฐngiltere'de dรผzenlenen Avrupa Tekvando ve Para Tekvando ลampiyonasฤฑโnda millรฎ tekvandocular 5 altฤฑn, 2 gรผmรผล ve 4 bronz olmak รผzere 11, millรฎ para tekvandocular ise 4 altฤฑn, 3 gรผmรผล ve 1 bronz olmak รผzere 8 madalya kazanarak takฤฑm halinde Avrupa ลampiyonu oldu."
- text: "Fรผme somon dedik ama aslฤฑnda lox salamuralanmฤฑล somon anlamฤฑna geliyor, fรผme etme opsiyonel. Lox bagel, 1930'larda Eggs Benedict furyasฤฑnda New Yorklu Yahudi cemaati tarafฤฑndan koลer bir alternatif olarak รงฤฑkan bir lezzet. Gรผnรผmรผzde benim hangover yรผreฤim dรขhil dรผnyanฤฑn birรงok yerinde enfes bir kahvaltฤฑ sandviรงi."
- text: "Tรผrkiye'de son aylarda sฤฑklฤฑkla tartฤฑลฤฑlan konut satฤฑลฤฑ karลฤฑlฤฑฤฤฑnda yabancฤฑlara vatandaลlฤฑk verilmesi konusunu beyin gรถรงรผ kapsamฤฑnda ele almak mรผmkรผn. Daha รถnce 250 bin dolar olan vatandaลlฤฑk bedeli yรผkselen tepkiler รผzerine 400 bin dolara รงฤฑkarฤฑlmฤฑลtฤฑ. Tรผrkiye'den gรถรง eden iyi eฤitimli kiลilerin , gittikleri รผlkelerde 250 bin dolar tutarฤฑnda yabancฤฑ yatฤฑrฤฑma denk olduฤu gรถz รถnรผne alฤฑndฤฑฤฤฑnda nitelikli insan gรผcรผnรผn yabancฤฑlara konut karลฤฑlฤฑฤฤฑnda satฤฑlan vatandaลlฤฑk bedelin eล olduฤunu gรถrรผyoruz. Yurt dฤฑลฤฑna giden her bir vatandaลฤฑn yรผksek teknolojili katma deฤer รผreten sektรถrlere yapacaฤฤฑ katkฤฑlar gรถz รถnรผnde bulundurulduฤunda bu aรงฤฑฤฤฑn inลaat sektรถrรผyle kapatฤฑldฤฑฤฤฑnฤฑ da gรถrรผyoruz. Beyin gรถรงรผ konusunda sadece ekonomik perspektiften bakฤฑldฤฑฤฤฑnda bile kฤฑsa vadeli dรถviz kaynaฤฤฑ yaratmak iรงin kullanฤฑlan vatandaลlฤฑk satฤฑลฤฑ yerine beyin gรถรงรผnรผ รถnleyecek รถnlemler alฤฑnmasฤฑnฤฑn รผlkemize รงok daha faydalฤฑ olacaฤฤฑ sonucunu รงฤฑkarฤฑyoruz."
- text: "Tรผrkiyeโde resmรฎ verilere gรถre, 15 ve daha yukarฤฑ yaลtaki kiลilerde mevsim etkisinden arฤฑndฤฑrฤฑlmฤฑล iลsiz sayฤฑsฤฑ, bu yฤฑlฤฑn ilk รงeyreฤinde bir รถnceki รงeyreฤe gรถre 50 bin kiลi artarak 3 milyon 845 bin kiลi oldu. Mevsim etkisinden arฤฑndฤฑrฤฑlmฤฑล iลsizlik oranฤฑ ise 0,1 puanlฤฑk artฤฑลla %11,4 seviyesinde gerรงekleลti. ฤฐลsizlik oranฤฑ, ilk รงeyrekte geรงen yฤฑlฤฑn aynฤฑ รงeyreฤine gรถre 1,7 puan azaldฤฑ."
- text: "Boeingโin insansฤฑz uzay aracฤฑ Starliner, birtakฤฑm sorunlara raฤmen Uluslararasฤฑ Uzay ฤฐstasyonuna (ISS) ulaลarak ilk kez baลarฤฑlฤฑ bir ลekilde kenetlendi. Aracฤฑn ISSโte beล gรผn kalmasฤฑnฤฑ takiben sorunsuz bir ลekilde New Mexicoโya inmesi halinde Boeing, sonbaharda astronotlarฤฑ yรถrรผngeye gรถndermek iรงin Starlinerโฤฑ kullanabilir.\n\nNeden รถnemli? NASAโnฤฑn personal aracฤฑ รผretmeyi durdurmasฤฑndan kaynaklฤฑ olarak gรถrevli astronotlar ve kozmonotlar, ISSโte Rusyaโnฤฑn รผrettiฤi uzay araรงlarฤฑ ile taลฤฑnฤฑyordu. Starlinerโฤฑn kendini kanฤฑtlamasฤฑ ise bu konuda Rusyaโya olan baฤฤฑmlฤฑlฤฑฤฤฑn potansiyel olarak ortadan kalkabileceฤi anlamฤฑna geliyor."
model-index:
- name: berturk-128k-keyword-discriminator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# berturk-128k-keyword-discriminator
This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-cased](https://huggingface.co/dbmdz/bert-base-turkish-128k-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3828
- Precision: 0.6791
- Recall: 0.7234
- Accuracy: 0.9294
- F1: 0.7006
- Ent/precision: 0.6931
- Ent/accuracy: 0.7715
- Ent/f1: 0.7302
- Con/precision: 0.6473
- Con/accuracy: 0.6282
- Con/f1: 0.6376
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Ent/precision | Ent/accuracy | Ent/f1 | Con/precision | Con/accuracy | Con/f1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|:-------------:|:------------:|:------:|:-------------:|:------------:|:------:|
| 0.1632 | 1.0 | 1875 | 0.1637 | 0.6661 | 0.6900 | 0.9320 | 0.6778 | 0.6649 | 0.7401 | 0.7005 | 0.6692 | 0.5907 | 0.6275 |
| 0.1151 | 2.0 | 3750 | 0.1709 | 0.6538 | 0.7446 | 0.9292 | 0.6963 | 0.6682 | 0.7864 | 0.7225 | 0.6223 | 0.6619 | 0.6415 |
| 0.0817 | 3.0 | 5625 | 0.1931 | 0.6667 | 0.7292 | 0.9294 | 0.6965 | 0.6843 | 0.7677 | 0.7236 | 0.6290 | 0.6529 | 0.6407 |
| 0.057 | 4.0 | 7500 | 0.2375 | 0.6578 | 0.7486 | 0.9277 | 0.7002 | 0.6708 | 0.7950 | 0.7277 | 0.6284 | 0.6567 | 0.6422 |
| 0.041 | 5.0 | 9375 | 0.2765 | 0.6683 | 0.7390 | 0.9284 | 0.7019 | 0.6834 | 0.7821 | 0.7294 | 0.6351 | 0.6538 | 0.6444 |
| 0.0297 | 6.0 | 11250 | 0.3128 | 0.6811 | 0.7249 | 0.9295 | 0.7023 | 0.6979 | 0.7710 | 0.7327 | 0.6438 | 0.6334 | 0.6386 |
| 0.0211 | 7.0 | 13125 | 0.3633 | 0.6780 | 0.7236 | 0.9290 | 0.7001 | 0.6919 | 0.7722 | 0.7299 | 0.6463 | 0.6273 | 0.6366 |
| 0.0165 | 8.0 | 15000 | 0.3828 | 0.6791 | 0.7234 | 0.9294 | 0.7006 | 0.6931 | 0.7715 | 0.7302 | 0.6473 | 0.6282 | 0.6376 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
murdockthedude/distilbert-base-uncased-finetuned-ner | 591a4f78d95c6529e74b37744fac7838b10817e3 | 2022-06-05T00:57:57.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | murdockthedude | null | murdockthedude/distilbert-base-uncased-finetuned-ner | 11 | null | transformers | 11,319 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.8664567296477723
- name: Recall
type: recall
value: 0.8816757654877759
- name: F1
type: f1
value: 0.8740000000000001
- name: Accuracy
type: accuracy
value: 0.9716525101857606
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1078
- Precision: 0.8665
- Recall: 0.8817
- F1: 0.8740
- Accuracy: 0.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 220 | 0.0993 | 0.8511 | 0.8780 | 0.8643 | 0.9721 |
| No log | 2.0 | 440 | 0.0732 | 0.8913 | 0.9122 | 0.9016 | 0.9783 |
| 0.1878 | 3.0 | 660 | 0.0681 | 0.8984 | 0.9186 | 0.9083 | 0.9797 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Erland/distilbert-base-uncased-finetuned-emotion | 969c9b7e367723d5ef31299e4f93277579273eca | 2022-06-06T02:45:50.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:emotion",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Erland | null | Erland/distilbert-base-uncased-finetuned-emotion | 11 | null | transformers | 11,320 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.927
- name: F1
type: f1
value: 0.9268682520975888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2128
- Accuracy: 0.927
- F1: 0.9269
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8246 | 1.0 | 250 | 0.3061 | 0.913 | 0.9118 |
| 0.2398 | 2.0 | 500 | 0.2128 | 0.927 | 0.9269 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Jherb/finetuning-sentiment-model-3000-samples | 112a0307addd51f4aff137db503f5de958106243 | 2022-06-05T21:21:18.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | Jherb | null | Jherb/finetuning-sentiment-model-3000-samples | 11 | null | transformers | 11,321 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3063
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
binaya-s/xls-r-300m-en | 84eaf67e5490f9e8bccd2687ce2d5680082119fa | 2022-06-07T07:58:50.000Z | [
"pytorch",
"tf",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"transformers",
"audio",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | false | binaya-s | null | binaya-s/xls-r-300m-en | 11 | null | transformers | 11,322 | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: wav2vec2-base-960h
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 3.4
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 8.6
---
# Wav2Vec2-Base-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The base model was pretrained and fine-tuned on 960 hours of Librispeech 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
import torch
# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_pred(batch):
input_values = processor(batch["audio"]["array"], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
logits = model(input_values.to("cuda")).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 3.4 | 8.6 | |
QuentinKemperino/ECHR_test_2 | 2457ef10cc1511a76b0264bf9048bf27ad7971be | 2022-06-21T20:44:10.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"dataset:lex_glue",
"transformers",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"model-index"
]
| text-classification | false | QuentinKemperino | null | QuentinKemperino/ECHR_test_2 | 11 | null | transformers | 11,323 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
datasets:
- lex_glue
model-index:
- name: ECHR_test_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ECHR_test_2 Task A
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1998
- Macro-f1: 0.5295
- Micro-f1: 0.6157
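For illustration, a minimal multi-label inference sketch (not part of the original card; the 0.5 decision threshold is an assumption):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "QuentinKemperino/ECHR_test_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

facts = "The applicant alleged that his pre-trial detention had been excessively long."
inputs = tokenizer(facts, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# ECHR Task A is multi-label, so apply a per-label sigmoid rather than a softmax.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```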
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2142 | 0.44 | 500 | 0.2887 | 0.2391 | 0.4263 |
| 0.172 | 0.89 | 1000 | 0.2672 | 0.2908 | 0.4628 |
| 0.1737 | 1.33 | 1500 | 0.2612 | 0.3657 | 0.5102 |
| 0.1581 | 1.78 | 2000 | 0.2412 | 0.3958 | 0.5468 |
| 0.1509 | 2.22 | 2500 | 0.2264 | 0.3950 | 0.5552 |
| 0.1606 | 2.67 | 3000 | 0.2342 | 0.4006 | 0.5511 |
| 0.1491 | 3.11 | 3500 | 0.2176 | 0.4558 | 0.5622 |
| 0.1392 | 3.56 | 4000 | 0.2454 | 0.4128 | 0.5596 |
| 0.15 | 4.0 | 4500 | 0.2113 | 0.4684 | 0.5874 |
| 0.1461 | 4.44 | 5000 | 0.2179 | 0.4631 | 0.5815 |
| 0.1457 | 4.89 | 5500 | 0.2151 | 0.4805 | 0.5949 |
| 0.1443 | 5.33 | 6000 | 0.2155 | 0.5123 | 0.5917 |
| 0.1279 | 5.78 | 6500 | 0.2131 | 0.4915 | 0.5998 |
| 0.1377 | 6.22 | 7000 | 0.2244 | 0.4705 | 0.5944 |
| 0.1242 | 6.67 | 7500 | 0.2150 | 0.5089 | 0.5918 |
| 0.1222 | 7.11 | 8000 | 0.2045 | 0.4801 | 0.5981 |
| 0.1372 | 7.56 | 8500 | 0.2074 | 0.5317 | 0.5962 |
| 0.1289 | 8.0 | 9000 | 0.2035 | 0.5323 | 0.6126 |
| 0.1295 | 8.44 | 9500 | 0.2058 | 0.5213 | 0.6073 |
| 0.123 | 8.89 | 10000 | 0.2027 | 0.5486 | 0.6135 |
| 0.1335 | 9.33 | 10500 | 0.1984 | 0.5442 | 0.6249 |
| 0.1258 | 9.78 | 11000 | 0.1998 | 0.5295 | 0.6157 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
chanifrusydi/t5-dialogue-summarization | bb486e7e0c4adfbec13b4a109adb79258f09780c | 2022-06-09T13:43:18.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | chanifrusydi | null | chanifrusydi/t5-dialogue-summarization | 11 | null | transformers | 11,324 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: t5-dialogue-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-dialogue-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the samsum dataset.
- Dataset: samsum
- Task type: summarization
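A minimal usage sketch (not part of the original card; the sample dialogue is illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="chanifrusydi/t5-dialogue-summarization")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=48)[0]["summary_text"])
```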
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
victorlee071200/bert-base-cased-finetuned-squad_v2 | fecd1da01c4ef50ed9d74d8781b0e8769833eb01 | 2022-06-09T13:16:06.000Z | [
"pytorch",
"bert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | victorlee071200 | null | victorlee071200/bert-base-cased-finetuned-squad_v2 | 11 | null | transformers | 11,325 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: bert-base-cased-finetuned-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad_v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3226
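For illustration, a minimal extractive QA sketch (not part of the original card; SQuAD v2 models may also return an empty answer for unanswerable questions):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="victorlee071200/bert-base-cased-finetuned-squad_v2")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the squad_v2 dataset.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```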
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.03 | 1.0 | 8255 | 1.1334 |
| 0.7511 | 2.0 | 16510 | 1.1299 |
| 0.5376 | 3.0 | 24765 | 1.3226 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
russellc/bert-finetuned-ner | 45bcf27d2f103cfc770031f36fa7ac041feb6cdb | 2022-06-09T11:11:34.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | russellc | null | russellc/bert-finetuned-ner | 11 | 1 | transformers | 11,326 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9344479390829333
- name: Recall
type: recall
value: 0.9500168293503871
- name: F1
type: f1
value: 0.9421680714345323
- name: Accuracy
type: accuracy
value: 0.9859745687878966
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0644
- Precision: 0.9344
- Recall: 0.9500
- F1: 0.9422
- Accuracy: 0.9860
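For illustration, a minimal usage sketch (not part of the original card):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entities.
ner = pipeline(
    "token-classification",
    model="russellc/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is a company based in New York City."))
```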
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0854 | 1.0 | 1756 | 0.0632 | 0.9080 | 0.9352 | 0.9214 | 0.9822 |
| 0.0401 | 2.0 | 3512 | 0.0605 | 0.9302 | 0.9485 | 0.9393 | 0.9856 |
| 0.0204 | 3.0 | 5268 | 0.0644 | 0.9344 | 0.9500 | 0.9422 | 0.9860 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
enoriega/rule_softmatching | edba9f8ba72d2bf6f823f0e8095f47e008641ca5 | 2022-06-10T03:59:51.000Z | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | false | enoriega | null | enoriega/rule_softmatching | 11 | null | transformers | 11,327 | Entry not found |
ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar | f50f753cde0a17cb315bc640691c141a692a846a | 2022-06-10T23:54:52.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:wiki_lingua",
"transformers",
"summarization",
"mt5",
"ar",
"abstractive summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | ahmeddbahaa | null | ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar | 11 | null | transformers | 11,328 | ---
license: apache-2.0
tags:
- summarization
- mt5
- ar
- abstractive summarization
- generated_from_trainer
datasets:
- wiki_lingua
model-index:
- name: t5-arabic-base-finetuned-wikilingua-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arabic-base-finetuned-wikilingua-ar
This model is a fine-tuned version of [bakrianoo/t5-arabic-base](https://huggingface.co/bakrianoo/t5-arabic-base) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2735
- Rouge-1: 20.72
- Rouge-2: 7.63
- Rouge-l: 18.75
- Gen Len: 18.74
- Bertscore: 70.79
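As a usage sketch, the checkpoint can be loaded with the Transformers `pipeline` API. The Arabic input string and the generation lengths below are illustrative assumptions; whether a task prefix is needed depends on how the fine-tuning data was formatted.
```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint as a summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="ahmeddbahaa/t5-arabic-base-finetuned-wikilingua-ar",
)

arabic_text = "ضع هنا النص العربي المراد تلخيصه ..."  # placeholder article text
print(summarizer(arabic_text, max_length=64, min_length=8, do_sample=False))
```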
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 8
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ghadeermobasher/BC5CDR-Chem-Modified-SciBERT-384 | 666a5ddc7599170530a1034abb8c6dc50d4d045a | 2022-06-14T00:24:57.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/BC5CDR-Chem-Modified-SciBERT-384 | 11 | null | transformers | 11,329 | Entry not found |
dexay/Ner2HgF | e00ed50f4d40beaad19f51f7358d03abe5de9f7e | 2022-06-14T12:12:33.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | dexay | null | dexay/Ner2HgF | 11 | null | transformers | 11,330 | Entry not found |
cindy203cc/finetuning-sentiment-model-3000-samples | 940c3b83c3457ac3b39f3c8da343b7901f991191 | 2022-06-14T19:16:33.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:imdb",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | cindy203cc | null | cindy203cc/finetuning-sentiment-model-3000-samples | 11 | null | transformers | 11,331 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8628762541806019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3187
- Accuracy: 0.8633
- F1: 0.8629
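A minimal usage sketch with the Transformers `pipeline` API; the review text is illustrative, and the returned label names may be generic (`LABEL_0`/`LABEL_1`) if `id2label` was not set during fine-tuning.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a binary sentiment classifier.
classifier = pipeline(
    "sentiment-analysis",
    model="cindy203cc/finetuning-sentiment-model-3000-samples",
)

print(classifier("This movie was an absolute delight from start to finish."))
```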
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
AnyaSchen/rugpt3_esenin | c9b714edb93aeb9b19407846176a8b7623b54cbc | 2022-06-15T11:26:44.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | AnyaSchen | null | AnyaSchen/rugpt3_esenin | 11 | null | transformers | 11,332 | This model is a fine-tuned ruGPT-3 medium model, tuned to the style of Yesenin's poetry in Russian. You can give it a word, a phrase, or just an empty line as input, and it will generate a poem in Yesenin's style.
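A minimal generation sketch (the prompt and sampling settings below are illustrative assumptions):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AnyaSchen/rugpt3_esenin")
model = AutoModelForCausalLM.from_pretrained("AnyaSchen/rugpt3_esenin")

# Seed the poem with a word or short phrase (in Russian).
prompt = "Отговорила роща золотая"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,            # sampling gives more varied, poem-like output
    top_k=50,
    top_p=0.95,
    no_repeat_ngram_size=2,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```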
 |
AnyaSchen/rugpt3_blok | bbf7576baee480407c7a306076c478bf00b762ab | 2022-06-15T11:24:59.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | AnyaSchen | null | AnyaSchen/rugpt3_blok | 11 | null | transformers | 11,333 | This model is a fine-tuned ruGPT-3 medium model, tuned to the style of Blok's poetry in Russian. You can give it a word, a phrase, or just an empty line as input, and it will generate a poem in Blok's style.
 |
eunbeee/hyunwoongko-kobart-eb-finetuned-papers-meetings | 86a3974a03aba6372ca3aaa62d00f99901e25493 | 2022-06-16T17:43:44.000Z | [
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | eunbeee | null | eunbeee/hyunwoongko-kobart-eb-finetuned-papers-meetings | 11 | null | transformers | 11,334 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: hyunwoongko-kobart-eb-finetuned-papers-meetings
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hyunwoongko-kobart-eb-finetuned-papers-meetings
This model is a fine-tuned version of [hyunwoongko/kobart](https://huggingface.co/hyunwoongko/kobart) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3136
- Rouge1: 18.3166
- Rouge2: 8.0509
- Rougel: 18.3332
- Rougelsum: 18.3146
- Gen Len: 19.9143
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 0.2118 | 1.0 | 7739 | 0.2951 | 18.0837 | 7.9585 | 18.0787 | 18.0784 | 19.896 |
| 0.1598 | 2.0 | 15478 | 0.2812 | 18.529 | 7.9891 | 18.5421 | 18.5271 | 19.8977 |
| 0.1289 | 3.0 | 23217 | 0.2807 | 18.0638 | 7.8086 | 18.0787 | 18.0583 | 19.9129 |
| 0.0873 | 4.0 | 30956 | 0.2923 | 18.3483 | 8.0233 | 18.3716 | 18.3696 | 19.914 |
| 0.0844 | 5.0 | 38695 | 0.3136 | 18.3166 | 8.0509 | 18.3332 | 18.3146 | 19.9143 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
chlab/efficientnet_61_planet_detection | a29a99abd85bcc28a3d9525632648803b980c4e1 | 2022-06-17T17:12:07.000Z | [
"pytorch",
"efficientnet_61_planet_detection",
"Python 3.7+",
"dataset:imagenet",
"dataset:imagenet-21k",
"transformers",
"vision",
"image-classification",
"license:apache-2.0"
]
| image-classification | false | chlab | null | chlab/efficientnet_61_planet_detection | 11 | null | transformers | 11,335 | ---
language:
- Python 3.7+
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# EfficientNetV2 (61 channels)
|
dennis-fast/DialoGPT-ElonMusk | e7353c25d80f21c8e1d23ce281a7dfef306b7c55 | 2022-06-18T15:13:23.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational",
"license:mit"
]
| conversational | false | dennis-fast | null | dennis-fast/DialoGPT-ElonMusk | 11 | null | transformers | 11,336 | ---
tags:
- conversational
license: mit
---
# DialoGPT-ElonMusk: Chat with Elon Musk
This is a conversational language model of Elon Musk. The bot's conversation abilities come from Microsoft's [DialoGPT-small conversational model](https://huggingface.co/microsoft/DialoGPT-small) fine-tuned on conversation transcripts of 22 interviews with Elon Musk from [here](https://elon-musk-interviews.com/category/english/).
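A minimal interactive sketch, following the standard DialoGPT chat loop (the number of turns and generation settings are illustrative assumptions):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dennis-fast/DialoGPT-ElonMusk")
model = AutoModelForCausalLM.from_pretrained("dennis-fast/DialoGPT-ElonMusk")

chat_history_ids = None
for step in range(3):  # chat for three turns
    user_input = input(">> You: ")
    new_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
    # Append the new user turn to the running conversation history.
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```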
|
eslamxm/AraT5-base-title-generation-finetune-ar-xlsum | 949723db7b5c82d9ccb4825779b16b27628fe0e8 | 2022-06-19T05:23:32.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:xlsum",
"transformers",
"summarization",
"Arat5-base",
"abstractive summarization",
"ar",
"xlsum",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| summarization | false | eslamxm | null | eslamxm/AraT5-base-title-generation-finetune-ar-xlsum | 11 | null | transformers | 11,337 | ---
tags:
- summarization
- Arat5-base
- abstractive summarization
- ar
- xlsum
- generated_from_trainer
datasets:
- xlsum
model-index:
- name: AraT5-base-title-generation-finetune-ar-xlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AraT5-base-title-generation-finetune-ar-xlsum
This model is a fine-tuned version of [UBC-NLP/AraT5-base-title-generation](https://huggingface.co/UBC-NLP/AraT5-base-title-generation) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2837
- Rouge-1: 32.46
- Rouge-2: 15.15
- Rouge-l: 28.38
- Gen Len: 18.48
- Bertscore: 74.24
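A usage sketch with an explicit tokenizer/model pair rather than a pipeline; the Arabic input and the beam-search settings are illustrative assumptions.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "eslamxm/AraT5-base-title-generation-finetune-ar-xlsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "ضع هنا نص الخبر العربي المراد توليد عنوان قصير له ..."  # placeholder article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=20, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```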
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 10
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 5.815 | 1.0 | 293 | 4.7437 | 27.05 | 10.49 | 23.56 | 18.03 | 72.56 |
| 5.0818 | 2.0 | 586 | 4.5004 | 28.92 | 11.97 | 25.09 | 18.61 | 73.08 |
| 4.7855 | 3.0 | 879 | 4.3910 | 29.66 | 12.57 | 25.79 | 18.58 | 73.3 |
| 4.588 | 4.0 | 1172 | 4.3469 | 30.22 | 13.05 | 26.36 | 18.59 | 73.61 |
| 4.4388 | 5.0 | 1465 | 4.3226 | 30.88 | 13.81 | 27.01 | 18.65 | 73.78 |
| 4.3162 | 6.0 | 1758 | 4.2990 | 30.9 | 13.6 | 26.92 | 18.68 | 73.78 |
| 4.2178 | 7.0 | 2051 | 4.2869 | 31.35 | 14.01 | 27.41 | 18.57 | 73.96 |
| 4.1387 | 8.0 | 2344 | 4.2794 | 31.28 | 13.98 | 27.34 | 18.6 | 73.87 |
| 4.0787 | 9.0 | 2637 | 4.2806 | 31.45 | 14.17 | 27.46 | 18.66 | 73.97 |
| 4.0371 | 10.0 | 2930 | 4.2837 | 31.55 | 14.19 | 27.52 | 18.65 | 74.0 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Danastos/dpr_query_el_1 | f11e6622a2b4f4e07b8b1b74f0964fab0c75f8ff | 2022-06-19T18:41:51.000Z | [
"pytorch",
"bert",
"pretraining",
"transformers"
]
| null | false | Danastos | null | Danastos/dpr_query_el_1 | 11 | null | transformers | 11,338 | Entry not found |
chradden/generation_xyz | d4e8cc9fced8bb2f2c1d88ef0af3cb93c4e5b3a2 | 2022-06-19T21:33:52.000Z | [
"pytorch",
"tensorboard",
"vit",
"image-classification",
"transformers",
"huggingpics",
"model-index"
]
| image-classification | false | chradden | null | chradden/generation_xyz | 11 | null | transformers | 11,339 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: generation_xyz
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5504587292671204
---
# generation_xyz
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
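A minimal inference sketch with the Transformers `pipeline` API (the image path is a placeholder; any local file or URL works):
```python
from transformers import pipeline

# Load the ViT checkpoint as an image-classification pipeline.
classifier = pipeline("image-classification", model="chradden/generation_xyz")

# Replace with a path or URL to a portrait-style photo.
print(classifier("path/to/photo.jpg"))
```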
## Example Images
#### Baby Boomers

#### Generation Alpha

#### Generation X

#### Generation Z

#### Millennials
 |
gauravnuti/agro-ner | e163978ab308e29e4be2a2a797b5dd08e17d048b | 2022-06-20T12:12:39.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | gauravnuti | null | gauravnuti/agro-ner | 11 | null | transformers | 11,340 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-17 | e8b4ca4de589a81a8eeb36127700749b92cadb0c | 2022-06-21T13:29:18.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-17 | 11 | null | transformers | 11,341 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-6 | 55b6d3ea7539213338b67d3e37ca1079ecfaa47b | 2022-06-21T13:28:04.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-6 | 11 | null | transformers | 11,342 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-70 | 78f2bd0bf0c25b8466a2d9c26f4c03a05501c3d7 | 2022-06-21T13:28:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-70 | 11 | null | transformers | 11,343 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-43 | c957272e710a3e00081ef0bf118aed80868c3420 | 2022-06-21T13:28:22.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-43 | 11 | null | transformers | 11,344 | Entry not found |
Jeevesh8/std_0pnt2_bert_ft_cola-2 | 48153c5108fc78804f29cb410909d8454b24a152 | 2022-06-21T13:27:58.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/std_0pnt2_bert_ft_cola-2 | 11 | null | transformers | 11,345 | Entry not found |
kktoto/tiny_no_focal_v2 | 105c02ab71aa56d268939033a8b566bb9c9cfd15 | 2022-06-22T08:50:37.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/tiny_no_focal_v2 | 11 | null | transformers | 11,346 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_no_focal_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_no_focal_v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1314
- Precision: 0.7013
- Recall: 0.6837
- F1: 0.6924
- Accuracy: 0.9522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1574 | 1.0 | 5561 | 0.1471 | 0.6907 | 0.6186 | 0.6527 | 0.9462 |
| 0.1456 | 2.0 | 11122 | 0.1396 | 0.6923 | 0.6473 | 0.6690 | 0.9485 |
| 0.1412 | 3.0 | 16683 | 0.1373 | 0.6845 | 0.6705 | 0.6774 | 0.9490 |
| 0.1338 | 4.0 | 22244 | 0.1343 | 0.6988 | 0.6640 | 0.6810 | 0.9505 |
| 0.1311 | 5.0 | 27805 | 0.1342 | 0.6971 | 0.6751 | 0.6859 | 0.9510 |
| 0.1289 | 6.0 | 33366 | 0.1324 | 0.7081 | 0.6653 | 0.6860 | 0.9517 |
| 0.1258 | 7.0 | 38927 | 0.1309 | 0.7053 | 0.6731 | 0.6888 | 0.9521 |
| 0.1223 | 8.0 | 44488 | 0.1325 | 0.7001 | 0.6818 | 0.6908 | 0.9519 |
| 0.1213 | 9.0 | 50049 | 0.1316 | 0.7020 | 0.6813 | 0.6915 | 0.9522 |
| 0.1197 | 10.0 | 55610 | 0.1314 | 0.7013 | 0.6837 | 0.6924 | 0.9522 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kunalr63/simple_transformer | fa3f934a64c3034210bb88ceb67f42af60818c63 | 2022-06-22T10:01:23.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kunalr63 | null | kunalr63/simple_transformer | 11 | null | transformers | 11,347 | Entry not found |
JamesStratford/Pidrow-bot-DialoGPT-Medium | 0458aca4e6cd429db57b7fd9ade39796fe952e4c | 2022-06-24T01:11:28.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | JamesStratford | null | JamesStratford/Pidrow-bot-DialoGPT-Medium | 11 | null | transformers | 11,348 | ---
tags:
- conversational
---
# Pidrow bot - medium
Pidrow is a person in a discord server that talks a lot and has a very unique personality. So I made this API for a discord bot to talk to in the server... It's like talking to Pidrow 24/7 |
Aktsvigun/bert-base-wikihow | 76ccf8b2a709dc8f1c365699ec2488f953c10e00 | 2022-07-05T17:22:48.000Z | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Aktsvigun | null | Aktsvigun/bert-base-wikihow | 11 | null | transformers | 11,349 | Entry not found |
BigSalmon/TextbookInformalFormalEnglish | 11d3b8af3fa1f4ae41ea79b1620cbce061d27925 | 2022-06-25T02:25:15.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | false | BigSalmon | null | BigSalmon/TextbookInformalFormalEnglish | 11 | null | transformers | 11,350 | ```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/TextbookInformalFormalEnglish")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/TextbookInformalFormalEnglish")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
- voluntary citizens' group that is organized on a local, national or international level
- encourage political participation
- often serve humanitarian functions
- work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
- noise
- parking
- traffic
- security
- strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
``` |
danielmantisnlp/autotrain-oms-ner-bi-1044135953 | cb49c258f7b7e6f62764e55476c04395f0601af4 | 2022-06-27T09:39:42.000Z | [
"pytorch",
"bert",
"token-classification",
"en",
"dataset:danielmantisnlp/autotrain-data-oms-ner-bi",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| token-classification | false | danielmantisnlp | null | danielmantisnlp/autotrain-oms-ner-bi-1044135953 | 11 | null | transformers | 11,351 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain ๐ค"
datasets:
- danielmantisnlp/autotrain-data-oms-ner-bi
co2_eq_emissions: 1.425282392185522
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1044135953
- CO2 Emissions (in grams): 1.425282392185522
## Validation Metrics
- Loss: 0.4587894678115845
- Accuracy: 0.8957797220792589
- Precision: 0.553921568627451
- Recall: 0.6793587174348698
- F1: 0.6102610261026103
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/danielmantisnlp/autotrain-oms-ner-bi-1044135953
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("danielmantisnlp/autotrain-oms-ner-bi-1044135953", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("danielmantisnlp/autotrain-oms-ner-bi-1044135953", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
kktoto/tiny_focal_alpah75 | 3c4b6424f05c3a2acb9c38b40f76e303aef23c8a | 2022-06-28T05:34:25.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | kktoto | null | kktoto/tiny_focal_alpah75 | 11 | null | transformers | 11,352 | Entry not found |
Hartmann/DialoGPT-small-koishikomeiji | 794664e06deb5d68e2c0ba0685980a5448676548 | 2022-06-28T04:08:31.000Z | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | false | Hartmann | null | Hartmann/DialoGPT-small-koishikomeiji | 11 | 1 | transformers | 11,353 | ---
tags:
- conversational
---
# Koishi Komeiji DialoGPT Model |
21iridescent/MRC-RE | 509f1250ab8c85f226cadbe8ab3456e74b223d1e | 2022-06-28T09:46:14.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"license:afl-3.0",
"autotrain_compatible"
]
| question-answering | false | 21iridescent | null | 21iridescent/MRC-RE | 11 | null | transformers | 11,354 | ---
license: afl-3.0
---
|
abhiBatu/MeetingSumm | 132e9286c90085450647333f983affd20d0f5e9e | 2022-06-28T14:27:35.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | abhiBatu | null | abhiBatu/MeetingSumm | 11 | null | transformers | 11,355 | Entry not found |
robingeibel/bigbird-large-finetuned-big_patent | dd09c8e04a6584689597ca8f71bb51507cb44f28 | 2022-06-29T22:17:27.000Z | [
"pytorch",
"tensorboard",
"big_bird",
"fill-mask",
"dataset:big_patent",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | robingeibel | null | robingeibel/bigbird-large-finetuned-big_patent | 11 | null | transformers | 11,356 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- big_patent
model-index:
- name: bigbird-large-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-large-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/bigbird-large-finetuned-big_patent](https://huggingface.co/robingeibel/bigbird-large-finetuned-big_patent) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0460
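A minimal masked-language-modelling sketch (the example sentence is an illustrative assumption):
```python
from transformers import pipeline

# Load the checkpoint as a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="robingeibel/bigbird-large-finetuned-big_patent")

# Use the tokenizer's own mask token to stay robust to the vocabulary.
masked = f"The present invention relates to a {fill_mask.tokenizer.mask_token} for treating waste water."
print(fill_mask(masked))
```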
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0301 | 1.0 | 80099 | 1.0460 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Lamine/bert-finetuned-ner_SourceRecognition | 01ce3faaf64aeffbdb7845ab40d3a03f6485094d | 2022-06-28T14:08:26.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | Lamine | null | Lamine/bert-finetuned-ner_SourceRecognition | 11 | null | transformers | 11,357 | Entry not found |
AlekseyKorshuk/books-v2-3500 | 5a16ba78c992f4c31f0204794afb583e4d53e93e | 2022-06-28T15:11:32.000Z | [
"pytorch",
"gptj",
"text-generation",
"transformers"
]
| text-generation | false | AlekseyKorshuk | null | AlekseyKorshuk/books-v2-3500 | 11 | 1 | transformers | 11,358 | Entry not found |
ubikpt/t5-small-finetuned-cnn | 7fe47a7eec5dc33b39bee16aebec3beabf4be20d | 2022-06-30T10:07:16.000Z | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| summarization | false | ubikpt | null | ubikpt/t5-small-finetuned-cnn | 11 | null | transformers | 11,359 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- cnn_dailymail
metrics:
- rouge
model-index:
- name: t5-small-finetuned-cnn
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: cnn_dailymail
type: cnn_dailymail
args: 3.0.0
metrics:
- name: Rouge1
type: rouge
value: 33.2082
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-cnn
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8436
- Rouge1: 33.2082
- Rouge2: 16.798
- Rougel: 28.9573
- Rougelsum: 31.1044
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.3793 | 1.0 | 359 | 1.8885 | 33.0321 | 16.7798 | 28.9367 | 30.9509 |
| 2.1432 | 2.0 | 718 | 1.8481 | 33.1559 | 16.8557 | 29.015 | 31.1122 |
| 2.0571 | 3.0 | 1077 | 1.8391 | 32.99 | 16.716 | 28.8118 | 30.9178 |
| 2.0001 | 4.0 | 1436 | 1.8357 | 33.0543 | 16.6731 | 28.8375 | 30.9604 |
| 1.9609 | 5.0 | 1795 | 1.8437 | 33.1019 | 16.7576 | 28.8669 | 31.001 |
| 1.925 | 6.0 | 2154 | 1.8402 | 33.1388 | 16.7539 | 28.8887 | 31.0262 |
| 1.9036 | 7.0 | 2513 | 1.8423 | 33.1825 | 16.759 | 28.9154 | 31.0656 |
| 1.8821 | 8.0 | 2872 | 1.8436 | 33.2082 | 16.798 | 28.9573 | 31.1044 |
### Framework versions
- Transformers 4.14.0
- Pytorch 1.5.0
- Datasets 2.3.2
- Tokenizers 0.10.3
|
Jeevesh8/goog_bert_ft_cola-22 | 3ca27d096e98dc874284f27b1776315f4bbbe91b | 2022-06-29T17:33:13.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-22 | 11 | null | transformers | 11,360 | Entry not found |
Jeevesh8/goog_bert_ft_cola-20 | 1b16f5378cda9f16efaa33f25d601c1e9fea564d | 2022-06-29T17:33:32.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-20 | 11 | null | transformers | 11,361 | Entry not found |
Jeevesh8/goog_bert_ft_cola-21 | fb2602ae6e19e5b9459313d21f6975df3edd0eda | 2022-06-29T17:33:01.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-21 | 11 | null | transformers | 11,362 | Entry not found |
Jeevesh8/goog_bert_ft_cola-92 | c8f71ca9413a89c706de55ae5b3b8331dec0cc61 | 2022-06-29T17:35:50.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-92 | 11 | null | transformers | 11,363 | Entry not found |
Jeevesh8/goog_bert_ft_cola-96 | d480b3c57502cc627b98b93ef47f36797e53d7d5 | 2022-06-29T17:36:06.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-96 | 11 | null | transformers | 11,364 | Entry not found |
Jeevesh8/goog_bert_ft_cola-97 | b01f2a49db5b4da1ca37cda69cabe9e1ea9d058b | 2022-06-29T17:38:49.000Z | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers"
]
| text-classification | false | Jeevesh8 | null | Jeevesh8/goog_bert_ft_cola-97 | 11 | null | transformers | 11,365 | Entry not found |
nvidia/stt_zh_conformer_transducer_large | 232d284afc458714611424210124d1f3b714c284 | 2022-07-12T16:23:40.000Z | [
"nemo",
"zh",
"dataset:AISHELL-2",
"arxiv:2005.08100",
"arxiv:1808.10583",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"Conformer",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"license:cc-by-4.0",
"model-index"
]
| automatic-speech-recognition | false | nvidia | null | nvidia/stt_zh_conformer_transducer_large | 11 | 2 | nemo | 11,366 | ---
language:
- zh
library_name: nemo
datasets:
- AISHELL-2
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Transducer
- Conformer
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
model-index:
- name: stt_zh_conformer_transducer_large
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: AISHELL-2 IOS
type: aishell2_ios
split: test
args:
language: zh
metrics:
- name: Test CER
type: cer
value: 5.3
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: AISHELL-2 Android
type: aishell2_android
split: test
args:
language: zh
metrics:
- name: Test CER
type: cer
value: 5.7
- task:
type: Automatic Speech Recognition
name: automatic-speech-recognition
dataset:
name: AISHELL-2 Mic
type: aishell2_mic
split: test
args:
language: zh
metrics:
- name: Test CER
type: cer
value: 5.6
---
# NVIDIA Conformer-Transducer Large (zh-ZH)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model transcribes Mandarin speech into Chinese characters.
It is a large version of Conformer-Transducer (around 120M parameters) model.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("nvidia/stt_zh_conformer_transducer_large")
```
### Transcribing using Python
You may transcribe an audio file like this:
```
asr_model.transcribe([PATH_TO_THE_AUDIO])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="nvidia/stt_zh_conformer_transducer_large" \
  audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16 kHz mono-channel audio (WAV files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-Transducer model is an autoregressive variant of Conformer model [1] for Automatic Speech Recognition which uses Transducer loss/decoding instead of CTC Loss. You may find more info on the detail of this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The NeMo toolkit [3] was used for training the models for over several hundred epochs. These model are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/conformer/conformer_transducer_bpe.yaml).
### Datasets
All the models in this collection are trained on AISHELL-2 [4], which comprises Mandarin speech.
## Performance
The list of the available models in this collection is shown in the following table. Performance of the ASR models is reported in terms of Character Error Rate (CER%) with greedy decoding.
| Version | Tokenizer | Vocabulary Size | AISHELL2 Test IOS | AISHELL2 Test Android | AISHELL2 Test Mic | Train Dataset |
|---------|-----------|-----------------|-------------------|-----------------------|-------------------|---------------|
| 1.10.0 | Characters| 5026 | 5.3 | 5.7 | 5.6 | AISHELL-2 |
## Limitations
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva), is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn't supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [AISHELL-2: Transforming Mandarin ASR Research Into Industrial Scale](https://arxiv.org/abs/1808.10583)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. |
austinmw/distilbert-base-uncased-finetuned-tweets-sentiment | bfc594502f76647e9eea90a38b796915960235f3 | 2022-06-29T22:18:47.000Z | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:tweet_eval",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | austinmw | null | austinmw/distilbert-base-uncased-finetuned-tweets-sentiment | 11 | null | transformers | 11,367 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-tweets-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
args: sentiment
metrics:
- name: Accuracy
type: accuracy
value: 0.7295
- name: F1
type: f1
value: 0.7303196028048928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-tweets-sentiment
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8192
- Accuracy: 0.7295
- F1: 0.7303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7126 | 1.0 | 713 | 0.6578 | 0.7185 | 0.7181 |
| 0.5514 | 2.0 | 1426 | 0.6249 | 0.7005 | 0.7046 |
| 0.4406 | 3.0 | 2139 | 0.7053 | 0.731 | 0.7296 |
| 0.3511 | 4.0 | 2852 | 0.7580 | 0.718 | 0.7180 |
| 0.2809 | 5.0 | 3565 | 0.8192 | 0.7295 | 0.7303 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
RuiqianLi/Malaya-speech_fine-tune_realcase_30_Jun_lm | 04d74bada071eda783f65224cd999f231e977437 | 2022-06-30T05:43:57.000Z | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:uob_singlish",
"transformers",
"generated_from_trainer",
"model-index"
]
| automatic-speech-recognition | false | RuiqianLi | null | RuiqianLi/Malaya-speech_fine-tune_realcase_30_Jun_lm | 11 | null | transformers | 11,368 | ---
tags:
- generated_from_trainer
datasets:
- uob_singlish
model-index:
- name: Malaya-speech_fine-tune_realcase_30_Jun_lm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Malaya-speech_fine-tune_realcase_30_Jun_lm
This model is a fine-tuned version of [malay-huggingface/wav2vec2-xls-r-300m-mixed](https://huggingface.co/malay-huggingface/wav2vec2-xls-r-300m-mixed) on the uob_singlish dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7669
- Wer: 0.3194
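A minimal transcription sketch with the Transformers `pipeline` API; the audio path is a placeholder, the input is expected to be 16 kHz mono, and if the repository ships an n-gram language model the `pyctcdecode` package may also be required.
```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint as a speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="RuiqianLi/Malaya-speech_fine-tune_realcase_30_Jun_lm",
)

print(asr("sample.wav"))  # replace with a real 16 kHz WAV file
```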
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2487 | 1.82 | 20 | 0.7188 | 0.3403 |
| 0.6386 | 3.64 | 40 | 0.7061 | 0.3264 |
| 0.3525 | 5.45 | 60 | 0.7403 | 0.3542 |
| 0.3088 | 7.27 | 80 | 0.7483 | 0.2986 |
| 0.2609 | 9.09 | 100 | 0.7669 | 0.3194 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
mesolitica/finetuned-bert-base-multilingual-cased-noisy-en-ms | 43eac456719ff48b47db05bb70443e64404d8d4c | 2022-06-30T12:32:59.000Z | [
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"transformers",
"generated_from_keras_callback",
"model-index",
"autotrain_compatible"
]
| fill-mask | false | mesolitica | null | mesolitica/finetuned-bert-base-multilingual-cased-noisy-en-ms | 11 | null | transformers | 11,369 | ---
tags:
- generated_from_keras_callback
model-index:
- name: finetuned-bert-base-multilingual-cased-noisy-en-ms
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# finetuned-bert-base-multilingual-cased-noisy-en-ms
This model was trained from scratch on an unknown dataset.
No evaluation results were recorded for this checkpoint.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
qfrodicio/bert-base-cased-finetuned-gesture_prediction | 3baca517608dca6e5b600edb1a1a0cc7e207d901 | 2022-06-30T20:21:09.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| token-classification | false | qfrodicio | null | qfrodicio/bert-base-cased-finetuned-gesture_prediction | 11 | null | transformers | 11,370 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-finetuned-gesture_prediction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-gesture_prediction
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3875
- Precision: 0.6404
- Recall: 0.7109
- F1: 0.6738
- Accuracy: 0.8135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 104 | 1.3672 | 0.3313 | 0.4468 | 0.3805 | 0.6480 |
| No log | 2.0 | 208 | 0.9858 | 0.4122 | 0.5520 | 0.472 | 0.7532 |
| No log | 3.0 | 312 | 0.9603 | 0.4783 | 0.6070 | 0.5351 | 0.7648 |
| No log | 4.0 | 416 | 0.8777 | 0.5602 | 0.6643 | 0.6078 | 0.7952 |
| 0.9471 | 5.0 | 520 | 0.8859 | 0.5827 | 0.6795 | 0.6274 | 0.8057 |
| 0.9471 | 6.0 | 624 | 0.9515 | 0.5604 | 0.6620 | 0.6070 | 0.8000 |
| 0.9471 | 7.0 | 728 | 1.0203 | 0.6142 | 0.6982 | 0.6535 | 0.8037 |
| 0.9471 | 8.0 | 832 | 1.0422 | 0.6058 | 0.7029 | 0.6508 | 0.8085 |
| 0.9471 | 9.0 | 936 | 1.0426 | 0.6227 | 0.7006 | 0.6593 | 0.8045 |
| 0.1111 | 10.0 | 1040 | 1.1450 | 0.6229 | 0.7263 | 0.6706 | 0.8080 |
| 0.1111 | 11.0 | 1144 | 1.1765 | 0.6580 | 0.7111 | 0.6835 | 0.8134 |
| 0.1111 | 12.0 | 1248 | 1.1905 | 0.6396 | 0.7099 | 0.6729 | 0.8124 |
| 0.1111 | 13.0 | 1352 | 1.1967 | 0.6148 | 0.7111 | 0.6594 | 0.8055 |
| 0.1111 | 14.0 | 1456 | 1.2124 | 0.6415 | 0.7158 | 0.6766 | 0.8085 |
| 0.0225 | 15.0 | 1560 | 1.2407 | 0.6351 | 0.7146 | 0.6725 | 0.8114 |
| 0.0225 | 16.0 | 1664 | 1.2745 | 0.6391 | 0.7041 | 0.6700 | 0.8073 |
| 0.0225 | 17.0 | 1768 | 1.2878 | 0.6466 | 0.7146 | 0.6789 | 0.8169 |
| 0.0225 | 18.0 | 1872 | 1.3091 | 0.6412 | 0.7170 | 0.6770 | 0.8101 |
| 0.0225 | 19.0 | 1976 | 1.3373 | 0.6490 | 0.7181 | 0.6818 | 0.8101 |
| 0.0075 | 20.0 | 2080 | 1.3352 | 0.6448 | 0.7135 | 0.6774 | 0.8101 |
| 0.0075 | 21.0 | 2184 | 1.3328 | 0.6477 | 0.7205 | 0.6822 | 0.8114 |
| 0.0075 | 22.0 | 2288 | 1.3498 | 0.6610 | 0.7251 | 0.6916 | 0.8129 |
| 0.0075 | 23.0 | 2392 | 1.3464 | 0.6606 | 0.7216 | 0.6898 | 0.8090 |
| 0.0075 | 24.0 | 2496 | 1.3580 | 0.6551 | 0.7263 | 0.6889 | 0.8144 |
| 0.0036 | 25.0 | 2600 | 1.3687 | 0.6547 | 0.7228 | 0.6870 | 0.8114 |
| 0.0036 | 26.0 | 2704 | 1.3730 | 0.6471 | 0.7228 | 0.6829 | 0.8149 |
| 0.0036 | 27.0 | 2808 | 1.3808 | 0.6505 | 0.7228 | 0.6848 | 0.8124 |
| 0.0036 | 28.0 | 2912 | 1.3869 | 0.6603 | 0.7228 | 0.6901 | 0.8111 |
| 0.0024 | 29.0 | 3016 | 1.3907 | 0.6624 | 0.7228 | 0.6913 | 0.8113 |
| 0.0024 | 30.0 | 3120 | 1.3913 | 0.6667 | 0.7251 | 0.6947 | 0.8121 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zluvolyote/s288cExpressionPrediction_k4 | 25aa83bfd2a8a8f2065dedcf2e50f29f10cd8705 | 2022-06-30T16:48:34.000Z | [
"pytorch",
"distilbert",
"text-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | zluvolyote | null | zluvolyote/s288cExpressionPrediction_k4 | 11 | null | transformers | 11,371 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: s288cExpressionPrediction_k4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s288cExpressionPrediction_k4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-large-dapt-tapt-scientific-papers-pubmed-finetuned-DAGPap22 | 856d6739857cdc234a1fbd7ad5b0a125804cd1dc | 2022-06-30T23:08:57.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-large-dapt-tapt-scientific-papers-pubmed-finetuned-DAGPap22 | 11 | null | transformers | 11,372 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-large-dapt-tapt-scientific-papers-pubmed-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large-dapt-tapt-scientific-papers-pubmed-finetuned-DAGPap22
This model is a fine-tuned version of [domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt](https://huggingface.co/domenicrosati/deberta-v3-large-dapt-scientific-papers-pubmed-tapt) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
- Accuracy: 0.9998
- F1: 0.9999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1884 | 1.0 | 669 | 0.0248 | 0.9951 | 0.9964 |
| 0.0494 | 2.0 | 1338 | 0.0084 | 0.9987 | 0.9990 |
| 0.0199 | 3.0 | 2007 | 0.0051 | 0.9991 | 0.9993 |
| 0.0079 | 4.0 | 2676 | 0.0030 | 0.9993 | 0.9995 |
| 0.0 | 5.0 | 3345 | 0.0026 | 0.9994 | 0.9996 |
| 0.0 | 6.0 | 4014 | 0.0014 | 0.9996 | 0.9997 |
| 0.0 | 7.0 | 4683 | 0.0015 | 0.9996 | 0.9997 |
| 0.0 | 8.0 | 5352 | 0.0011 | 0.9996 | 0.9997 |
| 0.0143 | 9.0 | 6021 | 0.0000 | 1.0 | 1.0 |
| 0.0 | 10.0 | 6690 | 0.0035 | 0.9991 | 0.9993 |
| 0.0 | 11.0 | 7359 | 0.0004 | 0.9998 | 0.9999 |
| 0.0 | 12.0 | 8028 | 0.0002 | 0.9998 | 0.9999 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ardauzunoglu/mT5-en-to-tr | 1bc2b13f35489e25dc17b2d0fc97341d9fdfaa74 | 2022-06-30T20:32:33.000Z | [
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | ardauzunoglu | null | ardauzunoglu/mT5-en-to-tr | 11 | null | transformers | 11,373 | Entry not found |
FabianWillner/bert-base-uncased-finetuned-triviaqa-finetuned-squad | 09a3a079e92631cb9742bae0318d4a74f967a169 | 2022-07-01T10:42:12.000Z | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"dataset:squad",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| question-answering | false | FabianWillner | null | FabianWillner/bert-base-uncased-finetuned-triviaqa-finetuned-squad | 11 | null | transformers | 11,374 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-uncased-finetuned-triviaqa-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-triviaqa-finetuned-squad
This model is a fine-tuned version of [FabianWillner/bert-base-uncased-finetuned-triviaqa](https://huggingface.co/FabianWillner/bert-base-uncased-finetuned-triviaqa) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
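As a rough guide, the checkpoint can be queried with the standard `question-answering` pipeline; the question and context below are illustrative only.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="FabianWillner/bert-base-uncased-finetuned-triviaqa-finetuned-squad",
)
result = qa(
    question="Where were the first modern Olympic Games held?",
    context="The first modern Olympic Games were held in Athens in 1896.",
)
print(result["answer"], result["score"])
```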
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0184 | 1.0 | 5533 | 0.9733 |
| 0.7496 | 2.0 | 11066 | 0.9981 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Vlasta/L3UOT_best_K6Stride1Wide1epoch10percent_size | d9ac106e1484309806b476474dcf962a3d87af44 | 2022-07-01T17:02:37.000Z | [
"pytorch",
"deberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | false | Vlasta | null | Vlasta/L3UOT_best_K6Stride1Wide1epoch10percent_size | 11 | null | transformers | 11,375 | Entry not found |
tner/bertweet-base-tweetner-2020 | 07c93da9edf45a9b09946d611e32fc086354c382 | 2022-07-07T23:35:25.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tner | null | tner/bertweet-base-tweetner-2020 | 11 | null | transformers | 11,376 | Entry not found |
Gorilla115/t5-austen | 47afcc5d5adc78e0aada2a942560e9f8ac260eab | 2022-07-03T07:59:25.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | Gorilla115 | null | Gorilla115/t5-austen | 11 | null | transformers | 11,377 | ---
tags:
- generated_from_trainer
model-index:
- name: t5-austen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-austen
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
Hyeongdon/t5-large-dgen-SciQ | 79fa6bddaf18f2b3d2926ee3b06bc6635a7193d6 | 2022-07-03T10:26:16.000Z | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | Hyeongdon | null | Hyeongdon/t5-large-dgen-SciQ | 11 | null | transformers | 11,378 | ---
license: apache-2.0
---
A T5-large distractor generation model fine-tuned on the SciQ dataset.
Input Format
```
{correct_answer} <sep> {question} <sep> {context}
```
Output Format
```
{Option1}</s>{Option2}</s>{Option3}
```
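A minimal generation sketch following the formats above; the answer, question, and context strings are illustrative, and the decoding settings are assumptions rather than the authors' settings.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Hyeongdon/t5-large-dgen-SciQ")
model = AutoModelForSeq2SeqLM.from_pretrained("Hyeongdon/t5-large-dgen-SciQ")

# {correct_answer} <sep> {question} <sep> {context}
text = (
    "mitochondria <sep> Which organelle produces most of the cell's ATP? "
    "<sep> The mitochondrion generates most of the cell's supply of ATP."
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
# distractor options come back separated by </s>
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```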
The paper is not published yet. |
datien228/distilbart-wikilingua-autotrain | 0275c4a0d86e0307775c0fcbf98144f1c70eaef9 | 2022-07-05T00:53:41.000Z | [
"pytorch",
"bart",
"text2text-generation",
"unk",
"dataset:datien228/autotrain-data-summary-text",
"transformers",
"autotrain",
"co2_eq_emissions",
"autotrain_compatible"
]
| text2text-generation | false | datien228 | null | datien228/distilbart-wikilingua-autotrain | 11 | null | transformers | 11,379 | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain ๐ค"
datasets:
- datien228/autotrain-data-summary-text
co2_eq_emissions: 1850.790132860878
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1079039131
- CO2 Emissions (in grams): 1850.790132860878
## Validation Metrics
- Loss: 1.8720897436141968
- Rouge1: 40.3451
- Rouge2: 17.4156
- RougeL: 30.9608
- RougeLsum: 38.8329
- Gen Len: 67.0434
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/datien228/autotrain-summary-text-1079039131
``` |
sanchit-gandhi/wav2vec2-large-tedlium | b77192000300e9cbb5e22864d80d9c4a69f3a047 | 2022-07-04T11:10:28.000Z | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"en",
"dataset:LIUM/tedlium",
"transformers",
"speech",
"license:apache-2.0"
]
| automatic-speech-recognition | false | sanchit-gandhi | null | sanchit-gandhi/wav2vec2-large-tedlium | 11 | 1 | transformers | 11,380 | ---
language: en
datasets:
- LIUM/tedlium
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Large-Tedlium
The Wav2Vec2 large model fine-tuned on the TEDLIUM corpus.
The model is initialised with Facebook's [Wav2Vec2 large LV-60k](https://huggingface.co/facebook/wav2vec2-large-lv60) checkpoint pre-trained on 60,000h of audiobooks from the LibriVox project. It is fine-tuned on 452h of TED talks from the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) corpus (Release 3). When using the model, make sure that your speech input is sampled at 16 kHz.
The model achieves a word error rate (WER) of 8.4% on the dev set and 8.2% on the test set. [Training logs](https://wandb.ai/sanchit-gandhi/tedlium/runs/10c85yc4?workspace=user-sanchit-gandhi) document the training and evaluation progress over 50k steps of fine-tuning.
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how this model was fine-tuned.
# Usage
To transcribe audio files the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium")
model = Wav2Vec2ForCTC.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium")
# load dummy dataset
ds = load_dataset("sanchit-gandhi/tedlium_dummy", split="validation")
# process audio inputs
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print("Target: ", ds["text"][0])
print("Transcription: ", transcription[0])
```
## Evaluation
This code snippet shows how to evaluate **Wav2Vec2-Large-Tedlium** on the TEDLIUM test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer
tedlium_eval = load_dataset("LIUM/tedlium", "release3", split="test")
model = Wav2Vec2ForCTC.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("sanchit-gandhi/wav2vec2-large-tedlium")
def map_to_pred(batch):
    # batched map with batch_size=1: take the single audio example in the batch
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = tedlium_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
``` |
ricardo-filho/bert_base_tcm_no_objeto_0.8 | 9dc724ed47ea520e05974001c6eedda2ce2246c1 | 2022-07-04T13:21:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index",
"autotrain_compatible"
]
| token-classification | false | ricardo-filho | null | ricardo-filho/bert_base_tcm_no_objeto_0.8 | 11 | null | transformers | 11,381 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert_base_tcm_no_objeto_0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_tcm_no_objeto_0.8
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
- Criterio Julgamento Precision: 0.7444
- Criterio Julgamento Recall: 0.8684
- Criterio Julgamento F1: 0.8016
- Criterio Julgamento Number: 114
- Data Sessao Precision: 0.7297
- Data Sessao Recall: 0.9153
- Data Sessao F1: 0.8120
- Data Sessao Number: 59
- Modalidade Licitacao Precision: 0.9412
- Modalidade Licitacao Recall: 0.9697
- Modalidade Licitacao F1: 0.9552
- Modalidade Licitacao Number: 462
- Numero Exercicio Precision: 0.9018
- Numero Exercicio Recall: 0.9619
- Numero Exercicio F1: 0.9309
- Numero Exercicio Number: 210
- Valor Objeto Precision: 0.7778
- Valor Objeto Recall: 0.8537
- Valor Objeto F1: 0.8140
- Valor Objeto Number: 41
- Overall Precision: 0.8803
- Overall Recall: 0.9458
- Overall F1: 0.9119
- Overall Accuracy: 0.9983
## Model description
More information needed
## Intended uses & limitations
More information needed
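The entity types in the metrics above suggest fields extracted from Brazilian procurement notices. A hedged usage sketch with the token-classification pipeline follows; the sample sentence is invented for illustration.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ricardo-filho/bert_base_tcm_no_objeto_0.8",
    aggregation_strategy="simple",
)
print(ner("Pregão Eletrônico nº 12/2021, exercício 2021, critério de julgamento: menor preço."))
```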
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Valor Objeto Precision | Valor Objeto Recall | Valor Objeto F1 | Valor Objeto Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.012 | 1.0 | 2863 | 0.0099 | 0.7059 | 0.8421 | 0.7680 | 114 | 0.7013 | 0.9153 | 0.7941 | 59 | 0.9366 | 0.9589 | 0.9476 | 462 | 0.9136 | 0.9571 | 0.9349 | 210 | 0.5902 | 0.8780 | 0.7059 | 41 | 0.8583 | 0.9368 | 0.8958 | 0.9974 |
| 0.0095 | 2.0 | 5726 | 0.0076 | 0.8095 | 0.8947 | 0.8500 | 114 | 0.6935 | 0.7288 | 0.7107 | 59 | 0.9346 | 0.9589 | 0.9466 | 462 | 0.9054 | 0.9571 | 0.9306 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.8901 | 0.9323 | 0.9107 | 0.9981 |
| 0.005 | 3.0 | 8589 | 0.0091 | 0.7574 | 0.9035 | 0.8240 | 114 | 0.6471 | 0.9322 | 0.7639 | 59 | 0.9371 | 0.9675 | 0.9521 | 462 | 0.9091 | 0.9524 | 0.9302 | 210 | 0.7660 | 0.8780 | 0.8182 | 41 | 0.8715 | 0.9492 | 0.9087 | 0.9978 |
| 0.0042 | 4.0 | 11452 | 0.0076 | 0.7444 | 0.8684 | 0.8016 | 114 | 0.7297 | 0.9153 | 0.8120 | 59 | 0.9412 | 0.9697 | 0.9552 | 462 | 0.9018 | 0.9619 | 0.9309 | 210 | 0.7778 | 0.8537 | 0.8140 | 41 | 0.8803 | 0.9458 | 0.9119 | 0.9983 |
| 0.004 | 5.0 | 14315 | 0.0100 | 0.7373 | 0.7632 | 0.7500 | 114 | 0.7534 | 0.9322 | 0.8333 | 59 | 0.9124 | 0.9697 | 0.9402 | 462 | 0.9196 | 0.9810 | 0.9493 | 210 | 0.76 | 0.9268 | 0.8352 | 41 | 0.8724 | 0.9413 | 0.9055 | 0.9979 |
| 0.0041 | 6.0 | 17178 | 0.0103 | 0.7377 | 0.7895 | 0.7627 | 114 | 0.75 | 0.8644 | 0.8031 | 59 | 0.9492 | 0.9697 | 0.9593 | 462 | 0.92 | 0.9857 | 0.9517 | 210 | 0.7872 | 0.9024 | 0.8409 | 41 | 0.8919 | 0.9402 | 0.9154 | 0.9980 |
| 0.002 | 7.0 | 20041 | 0.0092 | 0.7984 | 0.8684 | 0.8319 | 114 | 0.68 | 0.8644 | 0.7612 | 59 | 0.9471 | 0.9697 | 0.9583 | 462 | 0.9196 | 0.9810 | 0.9493 | 210 | 0.7872 | 0.9024 | 0.8409 | 41 | 0.8918 | 0.9492 | 0.9196 | 0.9983 |
| 0.0014 | 8.0 | 22904 | 0.0100 | 0.8033 | 0.8596 | 0.8305 | 114 | 0.7612 | 0.8644 | 0.8095 | 59 | 0.9532 | 0.9697 | 0.9614 | 462 | 0.9186 | 0.9667 | 0.9420 | 210 | 0.8222 | 0.9024 | 0.8605 | 41 | 0.9049 | 0.9447 | 0.9244 | 0.9983 |
| 0.0015 | 9.0 | 25767 | 0.0108 | 0.7787 | 0.8333 | 0.8051 | 114 | 0.7067 | 0.8983 | 0.7910 | 59 | 0.9513 | 0.9719 | 0.9615 | 462 | 0.9107 | 0.9714 | 0.9401 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.8943 | 0.9458 | 0.9194 | 0.9984 |
| 0.0008 | 10.0 | 28630 | 0.0112 | 0.7934 | 0.8421 | 0.8170 | 114 | 0.7222 | 0.8814 | 0.7939 | 59 | 0.9533 | 0.9719 | 0.9625 | 462 | 0.9193 | 0.9762 | 0.9469 | 210 | 0.8409 | 0.9024 | 0.8706 | 41 | 0.9012 | 0.9470 | 0.9235 | 0.9984 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jakka/t5_small_NCC-finetuned-sv-frp-classifier | b221f9052655bdf174e526b6600f6b28c6f6fc7c | 2022-07-04T16:03:42.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:norwegian_parliament",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | jakka | null | jakka/t5_small_NCC-finetuned-sv-frp-classifier | 11 | null | transformers | 11,382 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- norwegian_parliament
model-index:
- name: t5_small_NCC-finetuned-sv-frp-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_small_NCC-finetuned-sv-frp-classifier
This model is a fine-tuned version of [north/t5_small_NCC](https://huggingface.co/north/t5_small_NCC) on the norwegian_parliament dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Sequence Accuracy: 69.7875
## Model description
More information needed
## Intended uses & limitations
More information needed
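Because this is a T5-style model, classification is performed by generating the label as text. The sketch below is illustrative only; the prompt format and label vocabulary used during fine-tuning are not documented in this card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jakka/t5_small_NCC-finetuned-sv-frp-classifier")
model = AutoModelForSeq2SeqLM.from_pretrained("jakka/t5_small_NCC-finetuned-sv-frp-classifier")

# placeholder speech excerpt in the style of the Norwegian parliament corpus
inputs = tokenizer("Stortinget bør prioritere skattelette for norske familier.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # generated class label
```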
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sequence Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| No log | 1.0 | 113 | nan | 69.7875 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.0
- Datasets 2.3.2
- Tokenizers 0.11.0
|
juridics/bertimbaulaw-base-portuguese-sts | 03a50eea489a9f7c8a18f8ed617815d7437937d9 | 2022-07-04T22:35:51.000Z | [
"pytorch",
"bert",
"feature-extraction",
"sentence-transformers",
"sentence-similarity",
"transformers"
]
| sentence-similarity | false | juridics | null | juridics/bertimbaulaw-base-portuguese-sts | 11 | null | sentence-transformers | 11,383 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# juridics/bertimbaulaw-base-portuguese-sts-scale
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('juridics/bertimbaulaw-base-portuguese-sts-scale')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('juridics/bertimbaulaw-base-portuguese-sts-scale')
model = AutoModel.from_pretrained('juridics/bertimbaulaw-base-portuguese-sts-scale')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=juridics/bertimbaulaw-base-portuguese-sts-scale)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2492 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 2492,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 748,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
enoriega/rule_learning_margin_1mm_spanpred_nospec | 5657f714254d1cdcbaf0cf825b5e6a0881a5660d | 2022-07-05T13:56:15.000Z | [
"pytorch",
"tensorboard",
"bert",
"dataset:enoriega/odinsynth_dataset",
"transformers",
"generated_from_trainer",
"model-index"
]
| null | false | enoriega | null | enoriega/rule_learning_margin_1mm_spanpred_nospec | 11 | null | transformers | 11,384 | ---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm_spanpred_nospec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm_spanpred_nospec
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3972
- Margin Accuracy: 0.8136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.5864 | 0.16 | 20 | 0.5454 | 0.7564 |
| 0.4995 | 0.32 | 40 | 0.4761 | 0.7867 |
| 0.4866 | 0.48 | 60 | 0.4353 | 0.8057 |
| 0.4568 | 0.64 | 80 | 0.4229 | 0.8098 |
| 0.4409 | 0.8 | 100 | 0.4136 | 0.8140 |
| 0.4369 | 0.96 | 120 | 0.4124 | 0.8118 |
| 0.4172 | 1.12 | 140 | 0.4043 | 0.8118 |
| 0.4208 | 1.28 | 160 | 0.4072 | 0.8119 |
| 0.4256 | 1.44 | 180 | 0.4041 | 0.8124 |
| 0.4201 | 1.6 | 200 | 0.4041 | 0.8127 |
| 0.4159 | 1.76 | 220 | 0.4006 | 0.8125 |
| 0.4103 | 1.92 | 240 | 0.4004 | 0.8131 |
| 0.4282 | 2.08 | 260 | 0.3999 | 0.8138 |
| 0.4169 | 2.24 | 280 | 0.4006 | 0.8136 |
| 0.4263 | 2.4 | 300 | 0.3962 | 0.8133 |
| 0.4252 | 2.56 | 320 | 0.3994 | 0.8137 |
| 0.4202 | 2.72 | 340 | 0.3965 | 0.8137 |
| 0.4146 | 2.88 | 360 | 0.3967 | 0.8139 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
domenicrosati/deberta-v3-xsmall-finetuned-review_classifier | c2eac86ff4103773240ef2d48b64dddbcb24c30b | 2022-07-06T01:09:25.000Z | [
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
]
| text-classification | false | domenicrosati | null | domenicrosati/deberta-v3-xsmall-finetuned-review_classifier | 11 | null | transformers | 11,385 | ---
license: mit
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-xsmall-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-xsmall-finetuned-review_classifier
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1441
- Accuracy: 0.9513
- F1: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.1518 | 1.0 | 6667 | 0.1575 | 0.9510 | 0.7155 |
| 0.1247 | 2.0 | 13334 | 0.1441 | 0.9513 | 0.7458 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ghadeermobasher/Modified-BlueBERT-BioRED-Chem | a07392c011f9771022acdc38bec2e900ea3ce936 | 2022-07-06T14:52:30.000Z | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | ghadeermobasher | null | ghadeermobasher/Modified-BlueBERT-BioRED-Chem | 11 | null | transformers | 11,386 | Entry not found |
saekomdalkom/long-t5-local-base-finetuned-xsum | 935edc43ccd1d551d7884e5403869e8d49c6fbd9 | 2022-07-06T15:59:44.000Z | [
"pytorch",
"longt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | saekomdalkom | null | saekomdalkom/long-t5-local-base-finetuned-xsum | 11 | null | transformers | 11,387 | Entry not found |
Aktsvigun/bart-base_aeslc_12345 | cd288721c3ed7edc23f16180d65c1feb6a24e01f | 2022-07-07T15:42:28.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Aktsvigun | null | Aktsvigun/bart-base_aeslc_12345 | 11 | null | transformers | 11,388 | Entry not found |
tner/twitter-roberta-base-2019-90m-tweetner-2020 | a1c1fb1ac5c3cdfdf48f12f72127f1eb1a184db1 | 2022-07-07T10:09:38.000Z | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | false | tner | null | tner/twitter-roberta-base-2019-90m-tweetner-2020 | 11 | null | transformers | 11,389 | Entry not found |
mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243 | 12d61c59030a9dd9cfb6b6efd0c7176899203b5b | 2022-07-07T20:02:39.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:mbyanfei/autotrain-data-amazon-shoe-reviews-classification",
"transformers",
"autotrain",
"co2_eq_emissions"
]
| text-classification | false | mbyanfei | null | mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243 | 11 | null | transformers | 11,390 | ---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain ๐ค"
datasets:
- mbyanfei/autotrain-data-amazon-shoe-reviews-classification
co2_eq_emissions: 27.982443349742287
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1104340243
- CO2 Emissions (in grams): 27.982443349742287
## Validation Metrics
- Loss: 0.9584922790527344
- Accuracy: 0.5843
- Macro F1: 0.5801009597024507
- Micro F1: 0.5843
- Weighted F1: 0.5792137097243996
- Macro Precision: 0.5897236028586046
- Micro Precision: 0.5843
- Weighted Precision: 0.5896188517045103
- Macro Recall: 0.5857983081566331
- Micro Recall: 0.5843
- Weighted Recall: 0.5843
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
huggingtweets/gassy_dragon | 191a24ba02a728a701edc88742766140c7ddb930 | 2022-07-07T21:05:38.000Z | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"huggingtweets"
]
| text-generation | false | huggingtweets | null | huggingtweets/gassy_dragon | 11 | null | transformers | 11,391 | ---
language: en
thumbnail: http://www.huggingtweets.com/gassy_dragon/1657227895422/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1423289998544044032/vc29B5yA_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bau be tootin on ur butt.</div>
<div style="text-align: center; font-size: 14px;">@gassy_dragon</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Bau be tootin on ur butt..
| Data | Bau be tootin on ur butt. |
| --- | --- |
| Tweets downloaded | 3188 |
| Retweets | 953 |
| Short tweets | 487 |
| Tweets kept | 1748 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3puk9479/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gassy_dragon's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3cp8z35e) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3cp8z35e/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/gassy_dragon')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nateraw/resnet50d | 662b2093d2a0d37f9e4ac6f1326d8aff30c01604 | 2022-07-08T05:34:39.000Z | [
"pytorch",
"timm",
"image-classification"
]
| image-classification | false | nateraw | null | nateraw/resnet50d | 11 | null | timm | 11,392 | ---
tags:
- image-classification
- timm
library_tag: timm
---
# Model card for resnet50d
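The card gives no usage notes; a minimal inference sketch with `timm` is shown below. It assumes the standard `resnet50d` architecture with pretrained ImageNet weights from the timm release, and the input tensor is a random placeholder rather than a real image pipeline.
```python
import timm
import torch

# build the resnet50d architecture with pretrained ImageNet weights
model = timm.create_model("resnet50d", pretrained=True)
model.eval()

# stand-in for a preprocessed 224x224 RGB image batch
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 1000]), one score per ImageNet class
```
|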
domenicrosati/SPECTER-with-biblio-context-finetuned-review_classifier | be1b9f63adf023ca28788034503b4939fd8958a4 | 2022-07-08T18:53:09.000Z | [
"pytorch",
"bert",
"transformers",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | false | domenicrosati | null | domenicrosati/SPECTER-with-biblio-context-finetuned-review_classifier | 11 | null | transformers | 11,393 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: SPECTER-with-biblio-context-finetuned-review_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SPECTER-with-biblio-context-finetuned-review_classifier
This model is a fine-tuned version of [allenai/specter](https://huggingface.co/allenai/specter) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1284
- Accuracy: 0.962
- F1: 0.7892
- Recall: 0.7593
- Precision: 0.8216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.1956 | 1.0 | 6667 | 0.1805 | 0.9514 | 0.7257 | 0.6860 | 0.7702 |
| 0.135 | 2.0 | 13334 | 0.1284 | 0.962 | 0.7892 | 0.7593 | 0.8216 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
robb17/XLNet-finetuned-sentiment-analysis | af80e39e4a66f7dd16220574fbf54ded780486b4 | 2022-07-10T11:53:27.000Z | [
"pytorch",
"xlnet",
"text-classification",
"transformers"
]
| text-classification | false | robb17 | null | robb17/XLNet-finetuned-sentiment-analysis | 11 | null | transformers | 11,394 | Entry not found |
Zamachi/RoBERTa-for-multilabel-sentence-classification | f8d542318fae3affd54b48577eab964e700d3f72 | 2022-07-14T13:19:22.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | false | Zamachi | null | Zamachi/RoBERTa-for-multilabel-sentence-classification | 11 | null | transformers | 11,395 | Entry not found |
camilag/t5-end2end-questions-generation | a331016f052175e3b388f0b5c1dc778566dd65d8 | 2022-07-11T20:52:28.000Z | [
"pytorch",
"t5",
"text2text-generation",
"dataset:squad_modified_for_t5_qg",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"autotrain_compatible"
]
| text2text-generation | false | camilag | null | camilag/t5-end2end-questions-generation | 11 | null | transformers | 11,396 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7927
## Model description
More information needed
## Intended uses & limitations
More information needed
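A hedged generation sketch is shown below. The `generate questions:` prefix is a common convention for this kind of end-to-end question-generation fine-tune, but the exact preprocessing used here is not documented, so treat the prompt format as an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("camilag/t5-end2end-questions-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("camilag/t5-end2end-questions-generation")

text = (
    "generate questions: Python is a programming language created by "
    "Guido van Rossum and first released in 1991."
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
# generated questions are typically separated by a <sep> token
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```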
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5425 | 0.34 | 100 | 1.9416 |
| 2.0221 | 0.68 | 200 | 1.7927 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
omarxadel/hubert-large-arabic-egyptian | a3dbba3a3b6f0ee6c80a3dbd18fc557b8f31dcd7 | 2022-07-12T14:10:51.000Z | [
"pytorch",
"hubert",
"automatic-speech-recognition",
"ar",
"dataset:MGB-3",
"dataset:egyptian-arabic-conversational-speech-corpus",
"arxiv:2106.07447",
"transformers",
"CTC",
"Attention",
"Transformer",
"license:cc-by-nc-4.0",
"model-index"
]
| automatic-speech-recognition | false | omarxadel | null | omarxadel/hubert-large-arabic-egyptian | 11 | 1 | transformers | 11,397 | ---
language: "ar"
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- Attention
- pytorch
- Transformer
license: "cc-by-nc-4.0"
datasets:
- MGB-3
- egyptian-arabic-conversational-speech-corpus
metrics:
- wer
model-index:
- name: omarxadel/hubert-large-arabic-egyptian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 25.9
- name: Validation WER
type: wer
value: 23.5
---
# Arabic Hubert-Large - with CTC fine-tuned on MGB-3 and Egyptian Arabic Conversational Speech Corpus (No LM)
This model is a fine-tuned version of [Arabic Hubert-Large](https://huggingface.co/asafaya/hubert-large-arabic). We fine-tuned this model on the MGB-3 and Egyptian Arabic Conversational Speech Corpus datasets, achieving state-of-the-art results for Egyptian Arabic with a WER of `25.9%`.
The original model was pre-trained on 2,000 hours of 16 kHz sampled Arabic speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz; see the original [paper](https://arxiv.org/abs/2106.07447) for more details on the model.
The performance of the model on the datasets is the following:
| Valid WER | Test WER |
|:---------:|:--------:|
| 23.55 | 25.59 |
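No usage snippet is provided; a minimal inference sketch is given below. It assumes the checkpoint exposes the standard `HubertForCTC` and `Wav2Vec2Processor` interfaces, and `waveform` stands for a 16 kHz mono signal you load yourself (for example with librosa or torchaudio).
```python
import torch
from transformers import Wav2Vec2Processor, HubertForCTC

processor = Wav2Vec2Processor.from_pretrained("omarxadel/hubert-large-arabic-egyptian")
model = HubertForCTC.from_pretrained("omarxadel/hubert-large-arabic-egyptian")

# `waveform` is a 1-D float array of 16 kHz mono audio
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```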
# Acknowledgement
Model fine-tuning and data processing for this work were performed as part of a graduation project at the Faculty of Engineering, Alexandria University (CCE Program). |
Doohae/lassl-kobart | 303823d25fc627dd513d442bb422aedbb4247b65 | 2022-07-12T18:28:33.000Z | [
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | false | Doohae | null | Doohae/lassl-kobart | 11 | null | transformers | 11,398 | Entry not found |
jimacasaet/SalamaThanksEN2FILv3 | 30983a9aae35be45f612f0199d486e97ded3bdff | 2022-07-13T09:09:11.000Z | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| text2text-generation | false | jimacasaet | null | jimacasaet/SalamaThanksEN2FILv3 | 11 | null | transformers | 11,399 | ---
license: apache-2.0
---
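The card itself only carries a license. Based on the `marian` tag and the EN2FIL name, a minimal English-to-Filipino translation sketch would look like the following; this is an assumption about the interface, not documentation from the authors.
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("jimacasaet/SalamaThanksEN2FILv3")
model = MarianMTModel.from_pretrained("jimacasaet/SalamaThanksEN2FILv3")

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```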
|