| modelId<br>string (len 5-139) | author<br>string (len 2-42) | last_modified<br>timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-07-16 06:27:54) | downloads<br>int64 (0 to 223M) | likes<br>int64 (0 to 11.7k) | library_name<br>string (522 classes) | tags<br>list (len 1 to 4.05k) | pipeline_tag<br>string (55 classes) | createdAt<br>timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-07-16 06:27:41) | card<br>string (len 11 to 1.01M) |
|---|---|---|---|---|---|---|---|---|---|
sofia425/khipu-finetuned-amazon_reviews_multi | sofia425 | 2023-03-22T17:52:41Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T17:47:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: khipu-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9085
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# khipu-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2836
- Accuracy: 0.9085
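A minimal inference sketch (not part of the original card; it assumes the standard `transformers` text-classification pipeline, with a Spanish review matching the `es` config above):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and score a Spanish product review.
classifier = pipeline(
    "text-classification",
    model="sofia425/khipu-finetuned-amazon_reviews_multi",
)
print(classifier("Este producto es excelente, lo recomiendo."))
```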
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2305 | 1.0 | 63 | 0.2953 | 0.895 |
| 0.196 | 2.0 | 126 | 0.2836 | 0.9085 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
mathichpp/khipu-finetuned-amazon_reviews_multi | mathichpp | 2023-03-22T17:52:05Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T17:48:33Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: khipu-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9025
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# khipu-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2815
- Accuracy: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.254 | 1.0 | 63 | 0.2662 | 0.9067 |
| 0.2024 | 2.0 | 126 | 0.2815 | 0.9025 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
brianlorenzo/TALLER-IA-Comentarios-De-Amazon | brianlorenzo | 2023-03-22T17:52:04Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T17:48:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: TALLER-IA-Comentarios-De-Amazon
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.90375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TALLER-IA-Comentarios-De-Amazon
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3000
- Accuracy: 0.9038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2229 | 1.0 | 63 | 0.2589 | 0.908 |
| 0.2068 | 2.0 | 126 | 0.3000 | 0.9038 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
jp9999/house | jp9999 | 2023-03-22T17:51:47Z | 0 | 0 | null | [
"region:us"
]
| null | 2023-03-22T17:50:47Z | Create a house on a 700 sqm lot with an 8-car garage, a pool on the second floor, 5 bedrooms, and a garden.
|
leinho/khipu-finetuned-amazon_reviews_multi | leinho | 2023-03-22T17:51:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T17:47:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: khipu-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.90725
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# khipu-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2864
- Accuracy: 0.9073
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2609 | 1.0 | 63 | 0.2640 | 0.905 |
| 0.1918 | 2.0 | 126 | 0.2864 | 0.9073 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
maleperezt/khipu-finetuned-amazon_reviews_multi | maleperezt | 2023-03-22T17:50:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T17:46:21Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: khipu-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9055
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# khipu-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3014
- Accuracy: 0.9055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2226 | 1.0 | 63 | 0.2641 | 0.9085 |
| 0.1862 | 2.0 | 126 | 0.3014 | 0.9055 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
sd-concepts-library/ahx-beta-41b373e | sd-concepts-library | 2023-03-22T17:48:15Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-03-22T17:48:14Z | ---
license: mit
---
### ahx-beta-41b373e on Stable Diffusion
This is the `<ahx-beta-41b373e>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







|
facebook/esmfold_v1 | facebook | 2023-03-22T17:39:28Z | 8,997,176 | 27 | transformers | [
"transformers",
"pytorch",
"esm",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2022-11-01T18:24:14Z | ---
license: mit
---
# ESMFold
ESMFold is a state-of-the-art end-to-end protein folding model based on an ESM-2 backbone. It does not require any lookup or MSA step, and therefore does not require any external databases to be present in order to make predictions. As a result, inference time is very significantly faster than AlphaFold2. For details on the model architecture and training, please refer to the [accompanying paper](https://www.science.org/doi/10.1126/science.ade2574).
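A short folding sketch (an illustration only, assuming the `EsmForProteinFolding` class in `transformers` and its `infer_pdb` helper; the sequence is a placeholder):
```python
import torch
from transformers import EsmForProteinFolding

model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
model.eval()

# Placeholder amino-acid sequence; no MSA or database lookup is needed.
sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"
with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # PDB-format structure prediction

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```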
If you're interested in using ESMFold in practice, please check out the associated [tutorial notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb). |
nguyenvulebinh/mbart-large-50-latin-only | nguyenvulebinh | 2023-03-22T17:30:12Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mbart-50",
"multilingual",
"en",
"arxiv:2008.00401",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-22T17:19:04Z | ---
language:
- multilingual
- en
license: mit
tags:
- mbart-50
---
# mBART-50
mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) paper.
## Model description
mBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning.
Instead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 was created by extending the original mBART model with an extra 25 languages, supporting multilingual machine translation across 50 languages. The pre-training objective is explained below.
**Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data:
`D = {D_1, ..., D_N}`, where each `D_i` is a collection of monolingual documents in language `i`. The source documents are noised using two schemes:
first, randomly shuffling the original sentence order; second, a novel in-filling scheme,
where spans of text are replaced with a single mask token. The model is then tasked to reconstruct the original text.
35% of each instance's words are masked by randomly sampling span lengths from a Poisson distribution (`λ = 3.5`).
The decoder input is the original text offset by one position. A language id symbol `LID` is used as the initial token to predict the sentence.
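As a toy illustration of this in-filling scheme (a sketch of the idea only, not the model's actual preprocessing code):
```python
import numpy as np

def infill(tokens, mask_token="<mask>", mask_ratio=0.35, lam=3.5, seed=0):
    """Replace Poisson-length spans with a single mask token until ~35% of tokens are masked."""
    rng = np.random.default_rng(seed)
    tokens = list(tokens)
    budget = int(len(tokens) * mask_ratio)
    while budget > 0 and len(tokens) > 1:
        span = min(max(1, int(rng.poisson(lam))), budget, len(tokens) - 1)
        start = int(rng.integers(0, len(tokens) - span + 1))
        tokens[start:start + span] = [mask_token]  # whole span becomes one mask
        budget -= span
    return tokens

print(infill("un chief says there is no military solution in syria".split()))
```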
## Checking equivalence with the original model
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Original 50-language model
model = AutoModelForSeq2SeqLM.from_pretrained('facebook/mbart-large-50')
tokenizer = AutoTokenizer.from_pretrained('facebook/mbart-large-50')

src_text = "UN Chief Says There Is <mask> Military Solution <mask> Syria"
encoded_hi = tokenizer(src_text, return_tensors="pt")
generated_output = model.generate(
    **encoded_hi,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    return_dict_in_generate=True,
    output_hidden_states=True,
)
text_output = tokenizer.batch_decode(generated_output.sequences, skip_special_tokens=True)

# Trimmed Latin-script-only model: outputs should match the original exactly
new_model = AutoModelForSeq2SeqLM.from_pretrained('nguyenvulebinh/mbart-large-50-latin-only')
new_tokenizer = AutoTokenizer.from_pretrained('nguyenvulebinh/mbart-large-50-latin-only')

new_encoded_hi = new_tokenizer(src_text, return_tensors="pt")
new_generated_output = new_model.generate(
    **new_encoded_hi,
    forced_bos_token_id=new_tokenizer.lang_code_to_id["en_XX"],
    return_dict_in_generate=True,
    output_hidden_states=True,
)
new_text_output = new_tokenizer.batch_decode(new_generated_output.sequences, skip_special_tokens=True)

assert text_output == new_text_output
assert torch.equal(generated_output.encoder_hidden_states[-1], new_generated_output.encoder_hidden_states[-1])
assert torch.equal(generated_output.decoder_hidden_states[-1][-1], new_generated_output.decoder_hidden_states[-1][-1])

print(new_text_output)
# ['UN Chief Says There Is No Military Solution to the War in Syria']
```
## Languages covered
English (en_XX)
## BibTeX entry and citation info
```
@article{tang2020multilingual,
title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
year={2020},
eprint={2008.00401},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
AnaniyaX/decision-distilbert-uncased | AnaniyaX | 2023-03-22T17:24:16Z | 9 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"dataset:textvqa",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-21T19:21:09Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AnaniyaX/decision-distilbert-uncased
results: []
datasets:
- textvqa
- squad
widget:
- text: 'What does the sign say'
example_title: 'Visual Question Example 1'
- text: 'What does string theory talk about'
example_title: 'Textual Question Example 1'
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AnaniyaX/decision-distilbert-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on textvqa and squad.
It achieves the following results on the evaluation set:
- Train Loss: 0.0097
- Train Accuracy: 0.9976
- Epoch: 9
## Model description
The Text-Visual Question Classifier is a Hugging Face model that can classify questions as either text-based or visual-based.
It uses natural language processing techniques to analyze the question and determine its type.
The model has been trained on a large dataset of questions labeled as either text-based or visual-based, and has achieved high accuracy in identifying the correct type of question.
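A hedged usage sketch (this repo ships TensorFlow weights, hence `framework="tf"`; the example questions mirror the widget above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AnaniyaX/decision-distilbert-uncased",
    framework="tf",  # TF checkpoint; requires TensorFlow installed
)
print(classifier("What does the sign say?"))              # expected: a visual-based label
print(classifier("What does string theory talk about?"))  # expected: a text-based label
```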
## Intended uses & limitations
#### Applications
This model can be used in various applications such as chatbots, virtual assistants, search engines, and recommendation systems. For example, it can help chatbots to provide more accurate responses by understanding the type of question being asked. It can also help search engines to retrieve more relevant results by filtering out irrelevant content based on the type of question.
#### Limitations:
The model may not perform well on questions that are ambiguous or have multiple interpretations. It may also be biased towards certain types of questions based on the training data.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-06, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 0.1914 | 0.9444 | 0 |
| 0.0711 | 0.9768 | 1 |
| 0.0531 | 0.9826 | 2 |
| 0.0427 | 0.9868 | 3 |
| 0.0330 | 0.9904 | 4 |
| 0.0264 | 0.9923 | 5 |
| 0.0195 | 0.9947 | 6 |
| 0.0149 | 0.9960 | 7 |
| 0.0123 | 0.9965 | 8 |
| 0.0097 | 0.9976 | 9 |
### Framework versions
- Transformers 4.27.2
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2 |
livingbody/FlyingDunhuang | livingbody | 2023-03-22T17:21:59Z | 0 | 0 | null | [
"paddlepaddle",
"stable-diffusion",
"stable-diffusion-ppdiffusers",
"text-to-image",
"ppdiffusers",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2023-03-22T14:09:49Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of FlyingDunhuang
tags:
- stable-diffusion
- stable-diffusion-ppdiffusers
- text-to-image
- ppdiffusers
- lora
inference: false
---
# LoRA DreamBooth - livingbody/FlyingDunhuang
The LoRA weights in this repository were trained from runwayml/stable-diffusion-v1-5. We used the [DreamBooth](https://dreambooth.github.io/) technique and trained with the text prompt "a photo of FlyingDunhuang".
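A loading sketch (an assumption that `ppdiffusers` mirrors the `diffusers` LoRA attention-processor API; not verified against this repo):
```python
from ppdiffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.unet.load_attn_procs("livingbody/FlyingDunhuang")

image = pipe("a photo of FlyingDunhuang", num_inference_steps=30).images[0]
image.save("flying_dunhuang.png")
```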
|
system-technologies/biogpt | system-technologies | 2023-03-22T17:10:41Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"biogpt",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-03-22T16:15:05Z | ---
language: en
license: mit
widget:
- text: COVID-19 is
duplicated_from: microsoft/biogpt
---
## BioGPT
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> from transformers import BioGptTokenizer, BioGptForCausalLM
>>> model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
>>> tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> set_seed(42)
>>> generator("COVID-19 is", max_length=20, num_return_sequences=5, do_sample=True)
[{'generated_text': 'COVID-19 is a disease that spreads worldwide and is currently found in a growing proportion of the population'},
{'generated_text': 'COVID-19 is one of the largest viral epidemics in the world.'},
{'generated_text': 'COVID-19 is a common condition affecting an estimated 1.1 million people in the United States alone.'},
{'generated_text': 'COVID-19 is a pandemic, the incidence has been increased in a manner similar to that in other'},
{'generated_text': 'COVID-19 is transmitted via droplets, air-borne, or airborne transmission.'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BioGptTokenizer, BioGptForCausalLM
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
Beam-search decoding:
```python
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM, set_seed
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
sentence = "COVID-19 is"
inputs = tokenizer(sentence, return_tensors="pt")
set_seed(42)
with torch.no_grad():
    beam_output = model.generate(
        **inputs,
        min_length=100,
        max_length=1024,
        num_beams=5,
        early_stopping=True,
    )
tokenizer.decode(beam_output[0], skip_special_tokens=True)
'COVID-19 is a global pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), which has spread to more than 200 countries and territories, including the United States (US), Canada, Australia, New Zealand, the United Kingdom (UK), and the United States of America (USA), as of March 11, 2020, with more than 800,000 confirmed cases and more than 800,000 deaths.'
```
## Citation
If you find BioGPT useful in your research, please cite the following paper:
```latex
@article{10.1093/bib/bbac409,
author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
journal = {Briefings in Bioinformatics},
volume = {23},
number = {6},
year = {2022},
month = {09},
abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
issn = {1477-4054},
doi = {10.1093/bib/bbac409},
url = {https://doi.org/10.1093/bib/bbac409},
note = {bbac409},
eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
```
|
TRiddle/ppo-SnowballTarget | TRiddle | 2023-03-22T16:56:16Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-03-22T16:56:10Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: TRiddle/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
jfforero/distilbert-base-uncased-finetuned-imdb | jfforero | 2023-03-22T16:55:37Z | 3 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-03-22T12:58:39Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jfforero/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jfforero/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8449
- Validation Loss: 2.5443
- Epoch: 0
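A minimal fill-mask sketch (not in the original card; the checkpoint is TensorFlow, hence `framework="tf"`):
```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="jfforero/distilbert-base-uncased-finetuned-imdb",
    framework="tf",  # TF weights; requires TensorFlow installed
)
print(unmasker("This movie was absolutely [MASK]."))
```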
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8449 | 2.5443 | 0 |
### Framework versions
- Transformers 4.27.2
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Geotrend/bert-base-bg-cased | Geotrend | 2023-03-22T16:53:29Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"bg",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:04Z | ---
language: bg
datasets: wikipedia
license: apache-2.0
---
# bert-base-bg-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-bg-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-bg-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
sd-concepts-library/ahx-beta-41b2a57 | sd-concepts-library | 2023-03-22T16:53:05Z | 0 | 0 | null | [
"license:mit",
"region:us"
]
| null | 2023-03-22T16:53:03Z | ---
license: mit
---
### ahx-beta-41b2a57 on Stable Diffusion
This is the `<ahx-beta-41b2a57>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:







|
socialmediaie/TRAC2020_ENG_B_bert-base-uncased | socialmediaie | 2023-03-22T16:39:22Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | # Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020
Models and predictions for submission to TRAC - 2020 Second Workshop on Trolling, Aggression and Cyberbullying.
Our trained models as well as evaluation metrics during training are available at: https://databank.illinois.edu/datasets/IDB-8882752#
We also make a few of our models available in Hugging Face's model repository at https://huggingface.co/socialmediaie/; these models can be further fine-tuned on your dataset of choice.
Our approach is described in our paper titled:
> Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
The source code for training this model and more details can be found on our code repository: https://github.com/socialmediaie/TRAC2020
NOTE: These models were retrained for uploading here after our submission, so the evaluation measures may be slightly different from the ones reported in the paper.
If you plan to use the dataset please cite the following resources:
* Mishra, Sudhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. "Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020." In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020).
* Mishra, Shubhanshu, Shivangi Prasad, and Shubhanshu Mishra. 2020. “Trained Models for Multilingual Joint Fine-Tuning of Transformer Models for Identifying Trolling, Aggression and Cyberbullying at TRAC 2020.” University of Illinois at Urbana-Champaign. https://doi.org/10.13012/B2IDB-8882752_V1.
```
@inproceedings{Mishra2020TRAC,
author = {Mishra, Sudhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
booktitle = {Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying (TRAC-2020)},
title = {{Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
year = {2020}
}
@data{illinoisdatabankIDB-8882752,
author = {Mishra, Shubhanshu and Prasad, Shivangi and Mishra, Shubhanshu},
doi = {10.13012/B2IDB-8882752_V1},
publisher = {University of Illinois at Urbana-Champaign},
title = {{Trained models for Multilingual Joint Fine-tuning of Transformer models for identifying Trolling, Aggression and Cyberbullying at TRAC 2020}},
url = {https://doi.org/10.13012/B2IDB-8882752{\_}V1},
year = {2020}
}
```
## Usage
The models can be used via the following code:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
from pathlib import Path
from scipy.special import softmax
import numpy as np

TASK_LABEL_IDS = {
    "Sub-task A": ["OAG", "NAG", "CAG"],
    "Sub-task B": ["GEN", "NGEN"],
    "Sub-task C": ["OAG-GEN", "OAG-NGEN", "NAG-GEN", "NAG-NGEN", "CAG-GEN", "CAG-NGEN"]
}

model_version = "databank"  # the other option is the Hugging Face model hub

if model_version == "databank":
    # Make sure you have downloaded the required model file from https://databank.illinois.edu/datasets/IDB-8882752
    # Unzip the file at some model_path (we are using: "databank_model")
    model_path = next(Path("databank_model").glob("./*/output/*/model"))
    # Assuming you get the following type of structure inside "databank_model"
    # 'databank_model/ALL/Sub-task C/output/bert-base-multilingual-uncased/model'
    _, lang, task, _, base_model, _ = model_path.parts
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(model_path)
else:
    lang, task, base_model = "ALL", "Sub-task C", "bert-base-multilingual-uncased"
    # Hub repos follow the pattern TRAC2020_{lang}_{task letter}_{base model}
    base_model = f"socialmediaie/TRAC2020_{lang}_{task.split()[-1]}_{base_model}"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(base_model)

# For doing inference set model in eval mode
model.eval()
# If you want to further fine-tune the model you can reset the model to model.train()

task_labels = TASK_LABEL_IDS[task]

sentence = "This is a good cat and this is a bad dog."
processed_sentence = f"{tokenizer.cls_token} {sentence}"
tokens = tokenizer.tokenize(processed_sentence)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokens)
tokens_tensor = torch.tensor([indexed_tokens])

with torch.no_grad():
    # index [0] yields the logits whether the model returns a tuple or a ModelOutput
    logits = model(tokens_tensor, labels=None)[0]

preds = logits.detach().cpu().numpy()
preds_probs = softmax(preds, axis=1)
preds = np.argmax(preds_probs, axis=1)
preds_labels = np.array(task_labels)[preds]
print(dict(zip(task_labels, preds_probs[0])), preds_labels)
"""You should get an output as follows:
({'CAG-GEN': 0.06762535,
'CAG-NGEN': 0.03244293,
'NAG-GEN': 0.6897794,
'NAG-NGEN': 0.15498641,
'OAG-GEN': 0.034373745,
'OAG-NGEN': 0.020792078},
array(['NAG-GEN'], dtype='<U8'))
"""
``` |
TheAbyssYouSee/QW5pbWVsaWsyRA | TheAbyssYouSee | 2023-03-22T16:35:14Z | 0 | 23 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-21T07:22:16Z | ---
license: creativeml-openrail-m
---
|
agucci/my-model | agucci | 2023-03-22T16:29:04Z | 0 | 0 | sklearn | [
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"region:us"
]
| tabular-classification | 2023-03-22T16:21:05Z | ---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
model_format: pickle
model_file: example.pkl
widget:
structuredData:
area error:
- 30.29
- 96.05
- 48.31
compactness error:
- 0.01911
- 0.01652
- 0.01484
concave points error:
- 0.01037
- 0.0137
- 0.01093
concavity error:
- 0.02701
- 0.02269
- 0.02813
fractal dimension error:
- 0.003586
- 0.001698
- 0.002461
mean area:
- 481.9
- 1130.0
- 748.9
mean compactness:
- 0.1058
- 0.1029
- 0.1223
mean concave points:
- 0.03821
- 0.07951
- 0.08087
mean concavity:
- 0.08005
- 0.108
- 0.1466
mean fractal dimension:
- 0.06373
- 0.05461
- 0.05796
mean perimeter:
- 81.09
- 123.6
- 101.7
mean radius:
- 12.47
- 18.94
- 15.46
mean smoothness:
- 0.09965
- 0.09009
- 0.1092
mean symmetry:
- 0.1925
- 0.1582
- 0.1931
mean texture:
- 18.6
- 21.31
- 19.48
perimeter error:
- 2.497
- 5.486
- 3.094
radius error:
- 0.3961
- 0.7888
- 0.4743
smoothness error:
- 0.006953
- 0.004444
- 0.00624
symmetry error:
- 0.01782
- 0.01386
- 0.01397
texture error:
- 1.044
- 0.7975
- 0.7859
worst area:
- 677.9
- 1866.0
- 1156.0
worst compactness:
- 0.2378
- 0.2336
- 0.2394
worst concave points:
- 0.1015
- 0.1789
- 0.1514
worst concavity:
- 0.2671
- 0.2687
- 0.3791
worst fractal dimension:
- 0.0875
- 0.06589
- 0.08019
worst perimeter:
- 96.05
- 165.9
- 124.9
worst radius:
- 14.97
- 24.86
- 19.26
worst smoothness:
- 0.1426
- 0.1193
- 0.1546
worst symmetry:
- 0.3014
- 0.2551
- 0.2837
worst texture:
- 24.64
- 26.58
- 26.0
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_impurity_split | |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| random_state | |
| splitter | best |
</details>
### Model Plot
The model plot is below.
The rendered model plot (interactive HTML omitted) shows a single `DecisionTreeClassifier()` estimator.
## Evaluation Results
Details about the evaluation process and the evaluation results are below.
| Metric | Value |
|----------|----------|
| accuracy | 0.935673 |
| f1 score | 0.935673 |
# How to Get Started with the Model
[More Information Needed]
# Model Card Authors
This model card is written by following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# citation_bibtex
```bibtex
@inproceedings{...,year={2020}}
```
# get_started_code
```python
import pickle
with open(dtc_pkl_filename, 'rb') as file:
    clf = pickle.load(file)
```
# model_card_authors
skops_user
# limitations
This model is not ready to be used in production.
# model_description
This is a DecisionTreeClassifier model trained on breast cancer dataset.
# eval_method
The model is evaluated using test split, on accuracy and F1 score with macro average.
# confusion_matrix

|
jamesportis/vit-base-patch16-224-finetuned-flower | jamesportis | 2023-03-22T16:16:44Z | 21 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-03-22T16:09:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-finetuned-flower
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-flower
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
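A hedged inference sketch (assumes the standard image-classification pipeline; `flower.jpg` is a placeholder path):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="jamesportis/vit-base-patch16-224-finetuned-flower",
)
print(classifier("flower.jpg"))  # top predicted flower classes with scores
```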
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
whyoke/tumore_test | whyoke | 2023-03-22T16:16:42Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
]
| null | 2023-03-22T15:14:24Z | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: tumore_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tumore_test
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0
- eval_mean_iou: nan
- eval_mean_accuracy: nan
- eval_overall_accuracy: nan
- eval_per_category_iou: [nan]
- eval_per_category_accuracy: [nan]
- eval_runtime: 338.6408
- eval_samples_per_second: 1.161
- eval_steps_per_second: 0.582
- epoch: 1.27
- step: 2000
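A hedged usage sketch (assumes the generic image-segmentation pipeline works with this SegFormer checkpoint; `scan.png` is a placeholder path):
```python
from transformers import pipeline

segmenter = pipeline(
    "image-segmentation",
    model="whyoke/tumore_test",
)
masks = segmenter("scan.png")  # list of {label, score, mask} dicts
print([m["label"] for m in masks])
```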
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
arrandi/ppo-LunarLander-v2 | arrandi | 2023-03-22T16:11:34Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"tensorboard",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2022-12-16T12:55:37Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.45 +/- 21.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual course convention and is an assumption, not confirmed by this card):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it into Stable-Baselines3.
checkpoint = load_from_hub(
    repo_id="arrandi/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
Handun/xlm-roberta-base-finetuned-panx-de-fr | Handun | 2023-03-22T16:11:25Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2023-03-22T11:05:54Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1634
- F1: 0.8588
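A minimal NER sketch (not part of the original card; `aggregation_strategy="simple"` merges word pieces into entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Handun/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```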
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2889 | 1.0 | 715 | 0.1818 | 0.8338 |
| 0.1435 | 2.0 | 1430 | 0.1624 | 0.8531 |
| 0.0933 | 3.0 | 2145 | 0.1634 | 0.8588 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Mahmoud22/AraClassificationModel2 | Mahmoud22 | 2023-03-22T16:07:23Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T16:06:39Z | ---
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [Mahmoud22/AraClassificationModel](https://huggingface.co/Mahmoud22/AraClassificationModel) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0295
- F1-macro: 0.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1-macro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1818 | 1.0 | 1630 | 0.0996 | 0.9661 |
| 0.0899 | 2.0 | 3260 | 0.0398 | 0.9837 |
| 0.0326 | 3.0 | 4890 | 0.0218 | 0.9893 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
alespalla/distillbert_conv_quality_score | alespalla | 2023-03-22T15:57:44Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:conv_ai_2",
"doi:10.57967/hf/0435",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-11T16:39:18Z | ---
license: apache-2.0
tags:
- transformers
- pytorch
datasets:
- conv_ai_2
model-index:
- name: distillbert_conv_quality_score
results: []
language:
- en
---
# distillbert_conv_quality_score
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conv_ai_2 dataset.
It was trained to generate a score (in the [0, 1] range) from a conversation.
It achieves the following results on the evaluation set:
- training/loss: 0.0165
- validation/loss: 0.0149
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_name = "alespalla/distillbert_conv_quality_score"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
conversation = '''
Q: Begin
A: lol ! do you think it is strange to feel like you have been through life before ?
Q: Hellow
A: I don't understand you 🙈. Also, try to guess: i like to ...
Q: How are you?
A: make time stop, funny you :)
Q: What is your name?
A: jessie. hows your day going ? 😃
'''
score = model(**tokenizer(conversation, return_tensors='pt')).logits.item()
print(f"Score: {score}")
```
## Training and evaluation data
The training data was generated from `conv_ai_2` using the following function
```python
from datasets import load_dataset
def get_dataset(regression=False):
    db = load_dataset("conv_ai_2")

    def generate_converation(elem):
        text = ""
        for idx, txt in enumerate(elem["dialog"]):
            if idx % 2:
                text += f"A: {txt['text']}\n"
            else:
                text += f"Q: {txt['text']}\n"
        if regression:
            return {'text': text, "labels": (elem['eval_score'] - 1) / 4}
        return {'text': text, "labels": elem['eval_score'] - 1}

    db = db.filter(lambda example: example["eval_score"] > 0)
    db = db.map(generate_converation, remove_columns=db['train'].column_names)
    db = db['train'].train_test_split(test_size=0.2).shuffle(42)
    return db
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- epochs: 40
- batch_size: 16
- learning_rate: 0.0002
- eval_steps: 82
- log_steps: 82
- save_steps: 41
- gradient_accumulation_steps: 1
- warmup_steps: 0
### Training results
| step | training/loss | validation/loss |
|:----:|:-------------:|:---------------:|
| 81 | 0.1020 | 0.0794 |
| 163 | 0.0800 | 0.0713 |
| 245 | 0.0553 | 0.0491 |
| 327 | 0.0362 | 0.0440 |
| 409 | 0.0282 | 0.0352 |
| 491 | 0.0282 | 0.0412 |
| 573 | 0.0256 | 0.0293 |
| 655 | 0.0238 | 0.0252 |
| 737 | 0.0175 | 0.0226 |
| 819 | 0.0154 | 0.0228 |
| 901 | 0.0116 | 0.0205 |
| 983 | 0.0160 | 0.0202 |
| 1065 | 0.0146 | 0.0240 |
| 1147 | 0.0182 | 0.0180 |
| 1229 | 0.0171 | 0.0192 |
| 1311 | 0.0091 | 0.0174 |
| 1393 | 0.0171 | 0.0158 |
| 1475 | 0.0137 | 0.0158 |
| 1557 | 0.0158 | 0.0148 |
| 1639 | 0.0165 | 0.0149 |
### Framework versions
- Transformers 4.26.1
- Datasets 2.10.1
- Tokenizers 0.13.2 |
pabloac31/rl_course_vizdoom_health_gathering_supreme | pabloac31 | 2023-03-22T15:48:41Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T15:48:33Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.34 +/- 4.87
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r pabloac31/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
LorenzoDeMattei/GePpeTto | LorenzoDeMattei | 2023-03-22T15:39:46Z | 3,547 | 13 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"it",
"arxiv:2004.14253",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-03-02T23:29:04Z | ---
language: it
---
# GePpeTto GPT2 Model 🇮🇹
Pretrained GPT2 117M model for Italian.
You can find further details in the paper:
Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: https://arxiv.org/abs/2004.14253
## Pretraining Corpus
The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019),
consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web
texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span,
with older texts than the Wikipedia dump (the latter stretches only to the late 2000s).
## Pretraining details
This model was trained using GPT-2's Hugging Face implementation on 4 NVIDIA Tesla T4 GPUs for 620k steps.
Training parameters:
- GPT-2 small configuration
- vocabulary size: 30k
- Batch size: 32
- Block size: 100
- Adam Optimizer
- Initial learning rate: 5e-5
- Warm up steps: 10k
## Perplexity scores
| Domain | Perplexity |
|---|---|
| Wikipedia | 26.1052 |
| ItWac | 30.3965 |
| Legal | 37.2197 |
| News | 45.3859 |
| Social Media | 84.6408 |
For further details, qualitative analysis and human evaluation check out: https://arxiv.org/abs/2004.14253
## Load Pretrained Model
You can use this model by installing the Hugging Face `transformers` library and initializing it like this:
```python
from transformers import GPT2Tokenizer, GPT2Model
model = GPT2Model.from_pretrained('LorenzoDeMattei/GePpeTto')
tokenizer = GPT2Tokenizer.from_pretrained(
'LorenzoDeMattei/GePpeTto',
)
```
## Example using GPT2LMHeadModel
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer
tokenizer = AutoTokenizer.from_pretrained("LorenzoDeMattei/GePpeTto")
model = AutoModelWithLMHead.from_pretrained("LorenzoDeMattei/GePpeTto")
text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
prompts = [
"Wikipedia Geppetto",
"Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso"]
samples_outputs = text_generator(
prompts,
do_sample=True,
max_length=50,
top_k=50,
top_p=0.95,
num_return_sequences=3
)
for i, sample_outputs in enumerate(samples_outputs):
print(100 * '-')
print("Prompt:", prompts[i])
for sample_output in sample_outputs:
print("Sample:", sample_output['generated_text'])
print()
```
Output:
```
----------------------------------------------------------------------------------------------------
Prompt: Wikipedia Geppetto
Sample: Wikipedia Geppetto rosso (film 1920)
Geppetto rosso ("The Smokes in the Black") è un film muto del 1920 diretto da Henry H. Leonard.
Il film fu prodotto dalla Selig Poly
Sample: Wikipedia Geppetto
Geppetto ("Geppetto" in piemontese) è un comune italiano di 978 abitanti della provincia di Cuneo in Piemonte.
L'abitato, che si trova nel versante valtellinese, si sviluppa nella
Sample: Wikipedia Geppetto di Natale (romanzo)
Geppetto di Natale è un romanzo di Mario Caiano, pubblicato nel 2012.
----------------------------------------------------------------------------------------------------
Prompt: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso. Il burattino riesce a scappare. Dopo aver trovato un prezioso sacchetto si reca
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso, e l'unico che lo possiede, ma, di fronte a tutte queste prove
Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso: - A voi gli occhi, le guance! A voi il mio pezzo!
```
## Citation
Please use the following bibtex entry:
```
@misc{mattei2020geppetto,
title={GePpeTto Carves Italian into a Language Model},
author={Lorenzo De Mattei and Michele Cafagna and Felice Dell'Orletta and Malvina Nissim and Marco Guerini},
year={2020},
eprint={2004.14253},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## References
Marco Baroni, Silvia Bernardini, Adriano Ferraresi,
and Eros Zanchetta. 2009. The WaCky wide web: a
collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
|
duyduong9htv/phobert-qa-finetuned-viet-qa | duyduong9htv | 2023-03-22T15:28:44Z | 33 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-09-19T22:23:22Z | ---
tags:
- generated_from_trainer
model-index:
- name: phobert-qa-finetuned-viet-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-qa-finetuned-viet-qa
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5288
## Model description
More information needed
## Intended uses & limitations
More information needed
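A minimal usage sketch with the `transformers` pipeline (note that PhoBERT expects word-segmented Vietnamese input, which this sketch does not perform):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="duyduong9htv/phobert-qa-finetuned-viet-qa")
result = qa(
    question="Hà Nội là thủ đô của nước nào?",
    context="Hà Nội là thủ đô của Việt Nam.",
)
print(result["answer"])
```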
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6004 | 1.0 | 2027 | 1.5128 |
| 1.3018 | 2.0 | 4054 | 1.4657 |
| 1.1052 | 3.0 | 6081 | 1.4754 |
| 0.9502 | 4.0 | 8108 | 1.5288 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
vocabtrimmer/mt5-small-trimmed-fr-120000-frquad-qa | vocabtrimmer | 2023-03-22T15:24:09Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"fr",
"dataset:lmqg/qg_frquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-21T19:02:06Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: fr
datasets:
- lmqg/qg_frquad
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: "question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu."
example_title: "Question Answering Example 1"
- text: "question: Comment appelle-t-on la Guerre de 14-18 ?, context: Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la Grande Guerre de 14-18, ou son rejet par l'électorat en juillet 1945. On sait également que dans ces deux cas, la guérison, certes lente et douloureuse et jamais complète ni définitive, se fera grâce à la peinture. D'un autre côté, étant donnés les symptômes de ce mal que Churchill éprouvait de plus en plus, il ne pouvait rien moins qu'être purement associé à de telles causes extrinsèques, ce qui correspond au profil classique de la dépression majeure unipolaire ou bipolaire."
example_title: "Question Answering Example 2"
model-index:
- name: vocabtrimmer/mt5-small-trimmed-fr-120000-frquad-qa
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_frquad
type: default
args: default
metrics:
- name: BLEU4 (Question Answering)
type: bleu4_question_answering
value: 14.81
- name: ROUGE-L (Question Answering)
type: rouge_l_question_answering
value: 26.59
- name: METEOR (Question Answering)
type: meteor_question_answering
value: 20.09
- name: BERTScore (Question Answering)
type: bertscore_question_answering
value: 88.11
- name: MoverScore (Question Answering)
type: moverscore_question_answering
value: 69.17
- name: AnswerF1Score (Question Answering)
type: answer_f1_score__question_answering
value: 40.01
- name: AnswerExactMatch (Question Answering)
type: answer_exact_match_question_answering
value: 23.71
---
# Model Card of `vocabtrimmer/mt5-small-trimmed-fr-120000-frquad-qa`
This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-fr-120000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-120000) for the question answering task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [vocabtrimmer/mt5-small-trimmed-fr-120000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-120000)
- **Language:** fr
- **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="fr", model="vocabtrimmer/mt5-small-trimmed-fr-120000-frquad-qa")
# model prediction
answers = model.answer_q(list_question="En quelle année a-t-on trouvé trace d'un haut fourneau similaire?", list_context=" Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "vocabtrimmer/mt5-small-trimmed-fr-120000-frquad-qa")
output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fourneau similaire?, context: Cette technologie ne disparaît qu'au début du XXe siècle. On retrouve vers 1900 un haut fourneau similaire dans le Bulacan, aux Philippines. Plus tard encore, le « haut fourneau dans la cour » prôné par Mao Zedong pendant le Grand Bond en avant est de ce type. L'expérience n'est un échec technique que dans les régions où le savoir-faire n'existe pas, ou a disparu.")
```
## Evaluation
- ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-120000-frquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 23.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| AnswerF1Score | 40.01 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| BERTScore | 88.11 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_1 | 23.93 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_2 | 19.95 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_3 | 17.18 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| Bleu_4 | 14.81 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| METEOR | 20.09 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| MoverScore | 69.17 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
| ROUGE_L | 26.59 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_frquad
- dataset_name: default
- input_types: ['paragraph_question']
- output_types: ['answer']
- prefix_types: None
- model: vocabtrimmer/mt5-small-trimmed-fr-120000
- max_length: 512
- max_length_output: 32
- epoch: 27
- batch: 32
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-120000-frquad-qa/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
cardiffnlp/xlm-roberta-base-tweet-sentiment-de | cardiffnlp | 2023-03-22T14:57:00Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-06T09:14:30Z | # `cardiffnlp/xlm-roberta-base-tweet-sentiment-de`
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the German subset of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (German).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 73.22 | 73.22 | 73.22 | 73.18 | 73.22 | 73.18 | 73.22 |
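A minimal usage sketch with the `transformers` pipeline (the label names come from whatever `id2label` mapping the checkpoint ships):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="cardiffnlp/xlm-roberta-base-tweet-sentiment-de")
print(classifier("Das Konzert gestern war großartig!"))
```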
Check the result file [here](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-de/raw/main/eval.json). |
JessicaHsu/rl_course_vizdoom_health_gathering_supreme | JessicaHsu | 2023-03-22T14:47:26Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T14:47:17Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.08 +/- 5.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r JessicaHsu/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it had already completed.
|
eolang/DRL-vizdoome_health_gathering_supreme | eolang | 2023-03-22T14:39:05Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T07:15:06Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 18.61 +/- 4.52
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r eolang/DRL-vizdoome_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=DRL-vizdoome_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=DRL-vizdoome_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps it had already completed.
|
helenai/wav2vec2-base-superb-ks-jpqd-ov | helenai | 2023-03-22T14:31:21Z | 4 | 0 | transformers | [
"transformers",
"openvino",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-03-09T08:14:06Z | ---
license: apache-2.0
tags:
- audio-classification
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: jpqd-wav2vec2-base-ft-keyword-spotting
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jpqd-wav2vec2-base-ft-keyword-spotting
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the
superb dataset, using [superb/wav2vec2-base-superb-ks](https://huggingface.co/superb/wav2vec2-base-superb-ks) as a teacher model.
It was compressed using [NNCF](https://github.com/openvinotoolkit/nncf) with [Optimum Intel](https://github.com/huggingface/optimum-intel#openvino) following the
JPQD image classification example.
It achieves the following results on the evaluation set:
- Loss: 0.5632
- Accuracy: 0.9756
## Model description
More information needed
## Intended uses & limitations
More information needed
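A sketch of loading the exported model with [Optimum Intel](https://github.com/huggingface/optimum-intel#openvino); the `OVModelForAudioClassification` class is an assumption about the installed optimum-intel version:
```python
from optimum.intel.openvino import OVModelForAudioClassification  # assumed class name
from transformers import AutoFeatureExtractor, pipeline

model_id = "helenai/wav2vec2-base-superb-ks-jpqd-ov"
model = OVModelForAudioClassification.from_pretrained(model_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)

classifier = pipeline("audio-classification", model=model, feature_extractor=feature_extractor)
print(classifier("keyword.wav"))  # hypothetical local 16 kHz audio file
```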
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.5
- num_epochs: 12.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2245 | 1.0 | 399 | 2.2351 | 0.6209 |
| 6.9856 | 2.0 | 798 | 7.0597 | 0.7354 |
| 10.013 | 3.0 | 1197 | 9.8779 | 0.8069 |
| 11.3484 | 4.0 | 1596 | 11.1949 | 0.8719 |
| 11.6849 | 5.0 | 1995 | 11.5479 | 0.9014 |
| 11.5921 | 6.0 | 2394 | 11.4193 | 0.9495 |
| 0.8911 | 7.0 | 2793 | 0.7334 | 0.9500 |
| 0.8965 | 8.0 | 3192 | 0.6553 | 0.9685 |
| 0.7198 | 9.0 | 3591 | 0.6213 | 0.9669 |
| 0.7372 | 10.0 | 3990 | 0.5929 | 0.9675 |
| 0.7004 | 11.0 | 4389 | 0.5720 | 0.9721 |
| 0.6195 | 12.0 | 4788 | 0.5632 | 0.9756 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
JessicaHsu/ppo-CartPole-v1 | JessicaHsu | 2023-03-22T14:11:45Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T13:52:25Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -138.56 +/- 89.66
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': '__file__'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 80000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'JessicaHsu/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
Piemag/FirstPPO-LunarLander-v2 | Piemag | 2023-03-22T14:06:45Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T14:06:25Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.69 +/- 19.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
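A minimal sketch of what that could look like; the checkpoint filename is an assumption (check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="Piemag/FirstPPO-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# run one prediction in the environment
env = gym.make("LunarLander-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```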
|
aubmindlab/araelectra-base-generator | aubmindlab | 2023-03-22T13:55:02Z | 64 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"tensorboard",
"safetensors",
"electra",
"fill-mask",
"ar",
"arxiv:1406.2661",
"arxiv:2012.15516",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: ar
datasets:
- wikipedia
- Osian
- 1.5B-Arabic-Corpus
- oscar-arabic-unshuffled
- Assafir(private)
widget:
- text: " عاصمة لبنان هي [MASK] ."
---
# AraELECTRA
<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraELECTRA.png" width="100" align="left"/>
**ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). AraELECTRA achieves state-of-the-art results on Arabic QA datasets.
For a detailed description, please refer to the AraELECTRA paper [AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding](https://arxiv.org/abs/2012.15516).
## How to use the generator in `transformers`
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="aubmindlab/araelectra-base-generator",
tokenizer="aubmindlab/araelectra-base-generator"
)
print(
    fill_mask(" عاصمة لبنان هي [MASK] .")
)
```
# Preprocessing
It is recommended to apply our preprocessing function before training/testing on any dataset.
**Install the arabert python package to segment text for AraBERT v1 & v2 or to clean your data `pip install arabert`**
```python
from arabert.preprocess import ArabertPreprocessor
model_name="aubmindlab/araelectra-base"
arabert_prep = ArabertPreprocessor(model_name=model_name)
text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
arabert_prep.preprocess(text)
>>> output: ولن نبالغ إذا قلنا : إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري
```
# Model
Model | HuggingFace Model Name | Size (MB/Params)|
---|:---:|:---:
AraELECTRA-base-generator | [araelectra-base-generator](https://huggingface.co/aubmindlab/araelectra-base-generator) | 227MB/60M |
AraELECTRA-base-discriminator | [araelectra-base-discriminator](https://huggingface.co/aubmindlab/araelectra-base-discriminator) | 516MB/135M |
# Compute
Model | Hardware | num of examples (seq len = 512) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraELECTRA-base | TPUv3-8 | - | 256 | 2M | 24
# Dataset
The pretraining data used for the new AraELECTRA model is also used for **AraGPT2 and AraBERT**.
The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation)
For the new dataset, we added the unshuffled OSCAR corpus, after thoroughly filtering it, to the previous dataset used in AraBERTv1, but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01
- [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4)
- [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619)
- Assafir news articles. Huge thank you for Assafir for giving us the data
# TensorFlow 1.x models
**You can find the PyTorch, TF2 and TF1 models in HuggingFace's Transformer Library under the ```aubmindlab``` username**
- `wget https://huggingface.co/aubmindlab/MODEL_NAME/resolve/main/tf1_model.tar.gz` where `MODEL_NAME` is any model under the `aubmindlab` name
# If you used this model please cite us as:
```
@inproceedings{antoun-etal-2021-araelectra,
title = "{A}ra{ELECTRA}: Pre-Training Text Discriminators for {A}rabic Language Understanding",
author = "Antoun, Wissam and
Baly, Fady and
Hajj, Hazem",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Virtual)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.wanlp-1.20",
pages = "191--195",
}
```
# Acknowledgments
Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs (we couldn't have done it without this program), and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks to Habib Rahal (https://www.behance.net/rahalhabib) for putting a face to AraBERT.
# Contacts
**Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]>
**Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
|
pszemraj/pegasus-large-summary-explain | pszemraj | 2023-03-22T13:53:07Z | 18 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"en",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| summarization | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- summarization
- pegasus
datasets:
- kmfoda/booksum
metrics:
- rouge
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a ''toolbox'' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur- face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).'
example_title: scientific paper
- text: ' the big variety of data coming from diverse sources is one of the key properties
of the big data phenomenon. It is, therefore, beneficial to understand how data
is generated in various environments and scenarios, before looking at what should
be done with this data and how to design the best possible architecture to accomplish
this The evolution of IT architectures, described in Chapter 2, means that the
data is no longer processed by a few big monolith systems, but rather by a group
of services In parallel to the processing layer, the underlying data storage has
also changed and became more distributed This, in turn, required a significant
paradigm shift as the traditional approach to transactions (ACID) could no longer
be supported. On top of this, cloud computing is becoming a major approach with
the benefits of reducing costs and providing on-demand scalability but at the
same time introducing concerns about privacy, data ownership, etc In the meantime
the Internet continues its exponential growth: Every day both structured and unstructured
data is published and available for processing: To achieve competitive advantage
companies have to relate their corporate resources to external services, e.g.
financial markets, weather forecasts, social media, etc While several of the sites
provide some sort of API to access the data in a more orderly fashion; countless
sources require advanced web mining and Natural Language Processing (NLP) processing
techniques: Advances in science push researchers to construct new instruments
for observing the universe O conducting experiments to understand even better
the laws of physics and other domains. Every year humans have at their disposal
new telescopes, space probes, particle accelerators, etc These instruments generate
huge streams of data, which need to be stored and analyzed. The constant drive
for efficiency in the industry motivates the introduction of new automation techniques
and process optimization: This could not be done without analyzing the precise
data that describe these processes. As more and more human tasks are automated,
machines provide rich data sets, which can be analyzed in real-time to drive efficiency
to new levels. Finally, it is now evident that the growth of the Internet of Things
is becoming a major source of data. More and more of the devices are equipped
with significant computational power and can generate a continuous data stream
from their sensors. In the subsequent sections of this chapter, we will look at
the domains described above to see what they generate in terms of data sets. We
will compare the volumes but will also look at what is characteristic and important
from their respective points of view. 3.1 The Internet is undoubtedly the largest
database ever created by humans. While several well described; cleaned, and structured
data sets have been made available through this medium, most of the resources
are of an ambiguous, unstructured, incomplete or even erroneous nature. Still,
several examples in the areas such as opinion mining, social media analysis, e-governance,
etc, clearly show the potential lying in these resources. Those who can successfully
mine and interpret the Internet data can gain unique insight and competitive advantage
in their business An important area of data analytics on the edge of corporate
IT and the Internet is Web Analytics.'
example_title: data science textbook
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
& memory complexity (where nn is sequence length). Hence, it''s computationally
very expensive to apply transformer-based models on long sequences n > 512n>512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗''s recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one''s life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird''s attention is an approximation
of BERT''s full attention and therefore does not strive to be better than BERT''s
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT''s quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence ''BigBird is now available in HuggingFace for
extractive question answering''. In BERT-like attention, every word would simply
attend to all other tokens.
Let''s think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let''s consider following sentence as an example >>> example = [''BigBird'',
''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
''question'', ''answering'']
>>> # further let''s assume, we''re trying to understand the representation of
''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.'
example_title: bigbird blog intro
inference:
parameters:
max_length: 64
no_repeat_ngram_size: 2
encoder_no_repeat_ngram_size: 3
repetition_penalty: 2.4
length_penalty: 0.5
num_beams: 4
early_stopping: true
model-index:
- name: pszemraj/pegasus-large-summary-explain
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 29.1023
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTFhNjg4YTFlODU5MmVjNGVmNDRmMjQ4M2YyZGNmMWRlYjBhZmVhMTY3ZTUxNDkzNjY0OGVmNWJlNmY1OTkzNCIsInZlcnNpb24iOjF9.E_rVKqB7WEerLeRq6JIVTLZ1TgmsThFQJVKh11WH1qWa-cL3766psPWDKe8mK3lNkjmwbiDW0DZlDt4dm2ATCA
- type: rouge
value: 6.2441
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDVmZmFlOTgwN2Q3ZWRkZGVkMzU1ZDRkYzU1MWMzMTk1NDM5YTU0MzFjNDljNmZlY2I2NjZmZjcyYjBkZGExZCIsInZlcnNpb24iOjF9.QnuGoMWX8cq5_ukRtiaLRLau_F9XiCjg313GC7Iu1VGK8Kj_9lzU43377VsH0fBWooA1zJjtIK0UA-YpGQQOAA
- type: rouge
value: 14.7503
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzJhNzE0YjZiZWQ4NDE1Yjg3ZGJjY2ZmYWEwYzU5MTRhYWNiNTcyODU1NzM5NTZhNjNlNmYwNDVlYmZmYjkxOCIsInZlcnNpb24iOjF9.m5BLUMefXa1KivIIE9-gYKYq5aRRbfpQWazqzXxfCsqqp38Lt0ymk6OwXSlQyB_5oksNHIDFKpJX4wjYx2i7Bw
- type: rouge
value: 27.2375
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTY1OTIxMzBkMGJiZmNiNjZjYmQ2MjUwMjBkYTg5Zjc1NjVlZjllNTg0MDM1NTdhZDJlZmIwOTczOGNkZDc5YyIsInZlcnNpb24iOjF9.bThI16mvqhEuGBhdao0w8j03vv9G9Quy-ITRZzalr41zOour9it4oxEPFCvmPf-nLCQkqgWKUDEzgr6Ww8qgBg
- type: loss
value: 2.979011058807373
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGM0NzM3YTI4Njg4NDY0ZjQzNTZmYTIxYzcxNDBlNzAwNTAxNDE4MTZjYmZmNzYwODU0OWQ1ZjM5YjRmMmFkZiIsInZlcnNpb24iOjF9.EPEP53AoqHz0rjVGStJI2dM7ivxFmOj572I3llWdAoejm3zO1Iq5WDArYsqOse_oLxYCgcqPmNVc5IcLW9x7Dg
- type: gen_len
value: 467.269
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjgzYzU2ZjkwN2RhNzJlZmQyZTBlYmUxMTZhNzg0ODMwMjA3OTUzNTIwOWFkZWVmNjVmMTJiZmZhNWFmY2UzZCIsInZlcnNpb24iOjF9.RW5tzk2fcc_m4bgaSopRDFhSR9R8hRaYKrstXH4X5iGP_Xwvhy5Q7-igd2ACnlxIfmtdTmMxLMsvHr5oAZEwDg
---
# pszemraj/pegasus-large-summary-explain
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the [booksum](https://github.com/salesforce/booksum) dataset for four total epochs.
It achieves the following results on the evaluation set:
- eval_loss: 1.1193
- eval_runtime: 6.6754
- eval_samples_per_second: 27.714
- eval_steps_per_second: 1.798
- epoch: 3.0
- step: 900
A 1-epoch checkpoint can be found at [pszemraj/pegasus-large-book-summary](https://huggingface.co/pszemraj/pegasus-large-book-summary), which is where the second training session started from.
## Model description
- After some initial tests, it was found that models trained on the [booksum](https://github.com/salesforce/booksum) dataset seem to inherit the summaries' SparkNotes-style explanations, so the user gets a shorter and easier-to-understand version of the text rather than one that is **just** more compact.
- This quality (anecdotally) is favourable for learning/comprehension because summarization datasets that simply make the information more compact (* cough * arXiv) can be so dense that the overall time spent trying to _comprehend_ what it is saying can be the same as just reading the original material.
## Intended uses & limitations
- Standard PEGASUS has a max input length of 1024 tokens; therefore, the model only saw the first 1024 tokens of each chapter during training and learned to produce the chapter's summary from that. Keep this in mind when using this model: information past the first 1024 tokens of a longer input may be excluded from the final summary, and the model will be biased towards information presented first.
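A minimal usage sketch with `transformers`; the generation settings below mirror the inference parameters declared in this card's metadata:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="pszemraj/pegasus-large-summary-explain")

text = "..."  # your document; only the first ~1024 tokens are used
result = summarizer(
    text,
    max_length=64,
    num_beams=4,
    no_repeat_ngram_size=2,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=2.4,
    length_penalty=0.5,
    early_stopping=True,
)
print(result[0]["summary_text"])
```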
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
RafaelEiji/bert_character | RafaelEiji | 2023-03-22T13:43:23Z | 3 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-01-31T14:02:02Z | ---
tags:
- generated_from_trainer
model-index:
- name: from_scratch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# from_scratch
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4744
## Model description
More information needed
## Intended uses & limitations
More information needed
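A minimal fill-mask sketch, assuming the tokenizer uses the standard `[MASK]` token (the training data and language are not documented):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="RafaelEiji/bert_character")
print(fill_mask("The capital of France is [MASK]."))  # example input is an assumption
```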
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 360
- eval_batch_size: 360
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 1.0952 | 0.05 | 20000 | 1.0383 |
| 0.936 | 0.1 | 40000 | 0.8852 |
| 0.8679 | 0.14 | 60000 | 0.8207 |
| 0.8276 | 0.19 | 80000 | 0.7796 |
| 0.796 | 0.24 | 100000 | 0.7519 |
| 0.7756 | 0.29 | 120000 | 0.7299 |
| 0.7545 | 0.33 | 140000 | 0.7103 |
| 0.7395 | 0.38 | 160000 | 0.6947 |
| 0.7236 | 0.43 | 180000 | 0.6809 |
| 0.7143 | 0.48 | 200000 | 0.6705 |
| 0.705 | 0.52 | 220000 | 0.6585 |
| 0.6904 | 0.57 | 240000 | 0.6479 |
| 0.6835 | 0.62 | 260000 | 0.6388 |
| 0.672 | 0.67 | 280000 | 0.6290 |
| 0.665 | 0.72 | 300000 | 0.6217 |
| 0.6581 | 0.76 | 320000 | 0.6136 |
| 0.6466 | 0.81 | 340000 | 0.6071 |
| 0.6396 | 0.86 | 360000 | 0.6000 |
| 0.6343 | 0.91 | 380000 | 0.5940 |
| 0.6286 | 0.95 | 400000 | 0.5880 |
| 0.6183 | 1.0 | 420000 | 0.5809 |
| 0.6134 | 1.05 | 440000 | 0.5757 |
| 0.6094 | 1.1 | 460000 | 0.5693 |
| 0.6032 | 1.15 | 480000 | 0.5641 |
| 0.5954 | 1.19 | 500000 | 0.5596 |
| 0.5915 | 1.24 | 520000 | 0.5532 |
| 0.5845 | 1.29 | 540000 | 0.5489 |
| 0.5823 | 1.34 | 560000 | 0.5437 |
| 0.5754 | 1.38 | 580000 | 0.5393 |
| 0.573 | 1.43 | 600000 | 0.5345 |
| 0.5643 | 1.48 | 620000 | 0.5309 |
| 0.5627 | 1.53 | 640000 | 0.5262 |
| 0.56 | 1.57 | 660000 | 0.5220 |
| 0.5554 | 1.62 | 680000 | 0.5186 |
| 0.5507 | 1.67 | 700000 | 0.5152 |
| 0.5494 | 1.72 | 720000 | 0.5117 |
| 0.5445 | 1.77 | 740000 | 0.5076 |
| 0.5396 | 1.81 | 760000 | 0.5051 |
| 0.5363 | 1.86 | 780000 | 0.5026 |
| 0.5356 | 1.91 | 800000 | 0.4998 |
| 0.5303 | 1.96 | 820000 | 0.4982 |
| 0.5583 | 2.0 | 840000 | 0.5195 |
| 0.5565 | 2.05 | 860000 | 0.5180 |
| 0.5535 | 2.1 | 880000 | 0.5158 |
| 0.5497 | 2.15 | 900000 | 0.5133 |
| 0.5511 | 2.19 | 920000 | 0.5110 |
| 0.5439 | 2.24 | 940000 | 0.5085 |
| 0.5413 | 2.29 | 960000 | 0.5060 |
| 0.5376 | 2.34 | 980000 | 0.5023 |
| 0.5333 | 2.39 | 1000000 | 0.5004 |
| 0.5322 | 2.43 | 1020000 | 0.4973 |
| 0.5312 | 2.48 | 1040000 | 0.4941 |
| 0.5281 | 2.53 | 1060000 | 0.4921 |
| 0.5267 | 2.58 | 1080000 | 0.4902 |
| 0.5257 | 2.62 | 1100000 | 0.4871 |
| 0.5174 | 2.67 | 1120000 | 0.4849 |
| 0.5183 | 2.72 | 1140000 | 0.4825 |
| 0.5181 | 2.77 | 1160000 | 0.4807 |
| 0.5116 | 2.81 | 1180000 | 0.4784 |
| 0.5092 | 2.86 | 1200000 | 0.4769 |
| 0.5109 | 2.91 | 1220000 | 0.4757 |
| 0.5102 | 2.96 | 1240000 | 0.4739 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
XaneWayner/ppo-LunarLander-v2 | XaneWayner | 2023-03-22T13:18:24Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T13:17:52Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.38 +/- 17.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
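A minimal sketch of what that could look like; the checkpoint filename is an assumption (check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# download the checkpoint from the Hub (filename is assumed)
checkpoint = load_from_hub(
    repo_id="XaneWayner/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# run one prediction in the environment
env = gym.make("LunarLander-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```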
|
Yuvarraj/Streaming_ASR_PSG | Yuvarraj | 2023-03-22T13:14:10Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"en",
"dataset:librispeech_asr",
"arxiv:2006.11477",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2023-03-22T13:10:02Z | ---
language: en
datasets:
- librispeech_asr
tags:
- speech
license: apache-2.0
---
# Wav2Vec2-Large-960h
[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
The large model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16kHz.
[Paper](https://arxiv.org/abs/2006.11477)
Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
**Abstract**
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
# Usage
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch
# load model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h")
# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values  # Batch size 1
# retrieve logits
logits = model(input_values).logits
# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-large-960h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h")
def map_to_pred(batch):
    input_values = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    # decode to text; [0] because map processes one example at a time here
    batch["transcription"] = processor.batch_decode(predicted_ids)[0]
    return batch

result = librispeech_eval.map(map_to_pred, remove_columns=["audio"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean" | "other" |
|---|---|
| 2.8 | 6.3 | |
McCheng/a2c-PandaReachDense-v2 | McCheng | 2023-03-22T13:13:48Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T07:05:52Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.96 +/- 0.34
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
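A minimal sketch of what that could look like; the checkpoint filename is an assumption, and `panda_gym` must be installed to register the environment:
```python
import gym
import panda_gym  # noqa: F401 -- registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="McCheng/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```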
|
Yureeh/Reinforce-PixelCopter | Yureeh | 2023-03-22T13:08:25Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-21T22:43:22Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.30 +/- 24.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ellipsoul/Reinforce-Pixelcopter-PLE-v0 | Ellipsoul | 2023-03-22T13:05:14Z | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T13:04:52Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 22.30 +/- 16.59
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
gcapde/gcapde-finetuned-amazon_reviews_multi | gcapde | 2023-03-22T12:53:49Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T12:48:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: gcapde-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.908
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gcapde-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2896
- Accuracy: 0.908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2333 | 1.0 | 63 | 0.2837 | 0.9002 |
| 0.2117 | 2.0 | 126 | 0.2896 | 0.908 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
lucadiliello/opt-30b-deepspeed-inference-fp16-shard-4 | lucadiliello | 2023-03-22T12:49:04Z | 4 | 0 | transformers | [
"transformers",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-03-21T11:57:42Z | This is a copy of the original [OPT weights](https://huggingface.co/facebook/opt-30b) that is more efficient to use with the [DeepSpeed-MII](https://github.com/microsoft/deepspeed-mii) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). In this repo the original tensors are split into 4 shards to target 4 GPUs, this allows the user to run the model with DeepSpeed-inference Tensor Parallelism.
For specific details about the OPT model itself, please see the [original OPT model card](https://huggingface.co/facebook/opt-30b).
For examples on using this repo please see the following:
* https://github.com/huggingface/transformers-bloom-inference
* https://github.com/microsoft/DeepSpeed-MII |
lucadiliello/opt-30b-deepspeed-inference-fp16-shard-2 | lucadiliello | 2023-03-22T12:48:50Z | 5 | 0 | transformers | [
"transformers",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-03-21T11:57:48Z | This is a copy of the original [OPT weights](https://huggingface.co/facebook/opt-30b) that is more efficient to use with the [DeepSpeed-MII](https://github.com/microsoft/deepspeed-mii) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). In this repo the original tensors are split into 2 shards to target 2 GPUs, this allows the user to run the model with DeepSpeed-inference Tensor Parallelism.
For specific details about the OPT model itself, please see the [original OPT model card](https://huggingface.co/facebook/opt-30b).
For examples on using this repo please see the following:
* https://github.com/huggingface/transformers-bloom-inference
* https://github.com/microsoft/DeepSpeed-MII
|
stanlochten/t5-KGQgen | stanlochten | 2023-03-22T12:41:28Z | 7 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"knowledge_graphs",
"question_generation",
"en",
"dataset:web_questions",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-02T23:29:05Z | ---
license: openrail
datasets:
- web_questions
language:
- en
metrics:
- bleu
- bertscore
library_name: transformers
tags:
- knowledge_graphs
- question_generation
---
T5-base model fine-tuned for question generation from knowledge graphs. It can be used to generate questions from linearized knowledge graphs, i.e. a graph given as the list of all its triples in the following format:
`<A> answer node(s) <H> head <R> relation <T> tail <H> head <R> relation <T> tail ... etc ...`,
where `answer node(s)` refers to the node(s) which should contain the answer to the generated question.
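For illustration, such an input can be built from a triple list like this (the graph below is made up; only the tag format comes from this card):
```
triples = [("Barack Obama", "place of birth", "Honolulu"),
           ("Honolulu", "located in", "Hawaii")]
answer_nodes = ["Honolulu"]

graph = "<A> " + " ".join(answer_nodes) + " " + " ".join(
    f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples
)
# '<A> Honolulu <H> Barack Obama <R> place of birth <T> Honolulu <H> Honolulu <R> located in <T> Hawaii'
```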
To load the model:
```
from transformers import T5ForConditionalGeneration, T5TokenizerFast
model = T5ForConditionalGeneration.from_pretrained('stanlochten/t5-KGQgen')
tokenizer = T5TokenizerFast.from_pretrained('t5-base', extra_ids=0,
additional_special_tokens = ['<A>', '<H>', '<R>', '<T>'])
```
To generate questions from your graphs, where `graphs` is a list of strings for each graph:
```
print('Tokenizing...')
inputs = tokenizer(graphs, return_tensors="pt", padding=True, truncation=True)
print('Predicting...')
y_hats = model.generate(inputs.input_ids)
print('Decoding...')
preds = tokenizer.batch_decode(y_hats, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
Good luck!
[Associated research report](https://dspace.uba.uva.nl/server/api/core/bitstreams/fee95174-b7d4-4cd8-8545-f7ec8ab29e2d/content) |
arrandi/PyramidTraining | arrandi | 2023-03-22T12:40:59Z | 8 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
]
| reinforcement-learning | 2023-03-22T12:38:59Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: arrandi/PyramidTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed929 | pfunk | 2023-03-22T12:35:57Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T12:35:48Z | ---
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 20.32 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **PongNoFrameskip-v4**
This is a trained model of a DQPN_freq agent playing PongNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/P_DQPN_x3.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[P_DQPN_x3]"
python -m cleanrl_utils.enjoy --exp-name P_DQPN_x3 --env-id PongNoFrameskip-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed929/raw/main/dqpn_freq_atari.py
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed929/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed929/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name P_DQPN_x3 --policy-network-frequency 3000 --seed 929
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq_atari.py',
'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'double_learning': False,
'end_e': 0.01,
'env_id': 'PongNoFrameskip-v4',
'exp_name': 'P_DQPN_x3',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 10000,
'max_gradient_norm': inf,
'policy_network_frequency': 3000,
'policy_tau': 1.0,
'save_model': True,
'seed': 929,
'start_e': 1.0,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 5000000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
qanastek/biomedical-specialities-classifier-french | qanastek | 2023-03-22T12:33:55Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"camembert",
"text-classification",
"medical",
"chemistry",
"biology",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T11:13:24Z | ---
license: apache-2.0
language:
- fr
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- medical
- chemistry
- biology
--- |
stelladk/PPO-CleanRL-LunarLander | stelladk | 2023-03-22T12:32:11Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T12:06:46Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -207.10 +/- 95.51
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'default_ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'stelladk/PPO-CleanRL-LunarLander',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
dvesely/dqn-SpaceInvadersNoFrameskip-v4 | dvesely | 2023-03-22T12:19:37Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T12:18:53Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 486.00 +/- 133.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dvesely -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga dvesely -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga dvesely
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.2),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0003),
('learning_starts', 50000),
('n_timesteps', 1500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 500),
('train_freq', 4),
('normalize', False)])
```
|
marcatanante1/my_mind_model | marcatanante1 | 2023-03-22T12:15:53Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2023-03-22T09:45:53Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6450
- Accuracy: 0.0929
## Model description
More information needed
## Intended uses & limitations
More information needed
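A rough usage sketch (not part of the original card; note the low reported accuracy, so predictions will be unreliable, and the file path is a placeholder):
```python
from transformers import pipeline

# Classify the intent of a 16 kHz call recording with this checkpoint
classifier = pipeline("audio-classification", model="marcatanante1/my_mind_model")
print(classifier("sample_call.wav"))
```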
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.73 | 2 | 2.6408 | 0.0752 |
| No log | 1.82 | 5 | 2.6441 | 0.0619 |
| No log | 2.91 | 8 | 2.6435 | 0.0973 |
| 2.6293 | 4.0 | 11 | 2.6446 | 0.0885 |
| 2.6293 | 4.73 | 13 | 2.6449 | 0.0841 |
| 2.6293 | 5.82 | 16 | 2.6463 | 0.0841 |
| 2.6293 | 6.91 | 19 | 2.6452 | 0.0885 |
| 2.6177 | 7.27 | 20 | 2.6450 | 0.0929 |
### Framework versions
- Transformers 4.27.2
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed888 | pfunk | 2023-03-22T12:15:06Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T12:14:57Z | ---
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 20.11 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **PongNoFrameskip-v4**
This is a trained model of a DQPN_freq agent playing PongNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/P_DQPN_x3.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[P_DQPN_x3]"
python -m cleanrl_utils.enjoy --exp-name P_DQPN_x3 --env-id PongNoFrameskip-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed888/raw/main/dqpn_freq_atari.py
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed888/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x3-seed888/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name P_DQPN_x3 --policy-network-frequency 3000 --seed 888
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq_atari.py',
'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'double_learning': False,
'end_e': 0.01,
'env_id': 'PongNoFrameskip-v4',
'exp_name': 'P_DQPN_x3',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 10000,
'max_gradient_norm': inf,
'policy_network_frequency': 3000,
'policy_tau': 1.0,
'save_model': True,
'seed': 888,
'start_e': 1.0,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 5000000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
kgmann/ai-image-det-resnet18 | kgmann | 2023-03-22T11:45:38Z | 5 | 2 | timm | [
"timm",
"pytorch",
"image-classification",
"dataset:competitions/aiornot",
"region:us"
]
| image-classification | 2023-03-22T11:34:13Z | ---
tags:
- image-classification
- timm
library_tag: timm
datasets:
- competitions/aiornot
metrics:
- accuracy
---
# Model card for kgmann/ai-image-det-resnet18
This is a small **resnet18** pretrained model, fine-tuned for 5 epochs on 80% of the [AI or Not dataset](https://huggingface.co/datasets/competitions/aiornot) and evaluated on the remaining 20% of the training dataset.
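A loading sketch (assuming the checkpoint was pushed with `timm`'s Hub integration):
```python
import timm

# Load the fine-tuned classifier straight from the Hub
model = timm.create_model("hf_hub:kgmann/ai-image-det-resnet18", pretrained=True).eval()
```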
It has an **accuracy of 99%** on the validation dataset. |
arrandi/ppo-SnowballTarget | arrandi | 2023-03-22T11:30:23Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
]
| reinforcement-learning | 2023-03-22T11:30:17Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
library_name: ml-agents
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Write your model_id: arrandi/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
alramalho/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7-extended-labels | alramalho | 2023-03-22T11:25:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"zero-shot-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| zero-shot-classification | 2023-03-21T19:10:41Z | ---
pipeline_tag: zero-shot-classification
--- |
blinoff/roberta-base-russian-v0 | blinoff | 2023-03-22T11:23:40Z | 339 | 8 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: ru
widget:
- text: "Мозг — это машина вывода, которая пытается <mask> ошибку в прогнозе."
example_title: "brain_example"
- text: "Никогда не спорьте с идиотами, <mask> опуститесь до их уровня, где они вас задавят своим опытом."
example_title: "idiot_example"
---
# RoBERTa-like language model trained on part of the TAIGA corpus
## Training Details
- about 60k steps
## Example pipeline
```python
from transformers import pipeline
from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained('blinoff/roberta-base-russian-v0', max_len=512)
fill_mask = pipeline(
"fill-mask",
model="blinoff/roberta-base-russian-v0",
tokenizer=tokenizer
)
fill_mask("Мозг — это машина <mask>, которая пытается снизить ошибку в прогнозе.")
# {
# 'sequence': '<s>Мозг — это машина города, которая пытается снизить ошибку в прогнозе.</s>',
# 'score': 0.012859329581260681,
# 'token': 2144,
# 'token_str': 'ĠгоÑĢода'
# },
# {
# 'sequence': '<s>Мозг — это машина человека, которая пытается снизить ошибку в прогнозе.</s>',
# 'score': 0.01185101643204689,
# 'token': 1470,
# 'token_str': 'ĠÑĩеловека'
# },
# {
# 'sequence': '<s>Мозг — это машина дома, которая пытается снизить ошибку в прогнозе.</s>',
# 'score': 0.009940559044480324,
# 'token': 1411,
# 'token_str': 'Ġдома'
# },
# {
# 'sequence': '<s>Мозг — это машина женщина, которая пытается снизить ошибку в прогнозе.</s>',
# 'score': 0.007794599514454603,
# 'token': 2707,
# 'token_str': 'ĠженÑīина'
# },
# {
# 'sequence': '<s>Мозг — это машина женщины, которая пытается снизить ошибку в прогнозе.</s>',
# 'score': 0.007725382689386606,
# 'token': 3546,
# 'token_str': 'ĠженÑīинÑĭ'
# }
```
|
arrandi/poca-SoccerTwos | arrandi | 2023-03-22T11:08:56Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2023-03-22T11:08:47Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: arrandi/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
antonioricciardi/ppo-Huggy | antonioricciardi | 2023-03-22T10:56:56Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-03-22T10:56:50Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: antonioricciardi/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
faviasono/bert-base-banking77-pt2 | faviasono | 2023-03-22T10:52:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:banking77",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T09:57:01Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: banking77
type: banking77
config: default
split: test
args: default
metrics:
- name: F1
type: f1
value: 0.9264340389849328
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3012
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
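A minimal inference sketch (not part of the original card; the example query is made up):
```python
from transformers import pipeline

# Banking77 intent classification with this checkpoint
classifier = pipeline("text-classification", model="faviasono/bert-base-banking77-pt2")
print(classifier("I still have not received my new card, what should I do?"))
```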
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0535 | 1.0 | 626 | 0.7552 | 0.8650 |
| 0.3807 | 2.0 | 1252 | 0.3574 | 0.9224 |
| 0.1794 | 3.0 | 1878 | 0.3012 | 0.9264 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Chattiori/PetalMix | Chattiori | 2023-03-22T10:50:42Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T10:48:56Z | ---
license: creativeml-openrail-m
---
|
vevlins/autotrain-classify-42751109216 | vevlins | 2023-03-22T10:43:30Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"beit",
"image-classification",
"autotrain",
"vision",
"dataset:vevlins/autotrain-data-classify",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2023-03-22T10:41:19Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- vevlins/autotrain-data-classify
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.852147336270292
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 42751109216
- CO2 Emissions (in grams): 0.8521
## Validation Metrics
- Loss: 0.010
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000 |
satyaalmasian/temporal_tagger_BERT_tokenclassifier | satyaalmasian | 2023-03-22T10:24:27Z | 23 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-03-02T23:29:05Z | # BERT based temporal tagged
Token classifier for temporal tagging of plain text using BERT language model. The model is introduced in the paper BERT got a Date: Introducing Transformers to Temporal Tagging and release in this [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. We use BERT for token classification to tag the tokens in text with classes:
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
# Intended uses & limitations
This model is best used accompanied by code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output might be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output.
# How to use
You can load the model as follows:
```
from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_BERT_tokenclassifier")
```
For inference, use:
```
processed_text = tokenizer(input_text, return_tensors="pt")  # input_text is the plain-text string to tag
result = model(**processed_text)
classification = result[0]  # token-level logits
```
for an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a function `merge_tokens` to decipher the output.
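For orientation, here is a minimal decoding sketch (this is not the repository's `merge_tokens`; it only assumes the BIO label names listed above and the model's `id2label` mapping):
```
tokens = tokenizer.convert_ids_to_tokens(processed_text["input_ids"][0].tolist())
labels = [model.config.id2label[i] for i in classification.argmax(-1)[0].tolist()]

# Group B-/I- tagged wordpieces of the same class into rough spans
spans, current = [], None
for token, label in zip(tokens, labels):
    if label.startswith("B-"):
        current = (label[2:], [token])
        spans.append(current)
    elif label.startswith("I-") and current is not None and current[0] == label[2:]:
        current[1].append(token)
    else:
        current = None
print([(tag, " ".join(pieces).replace(" ##", "")) for tag, pieces in spans])
```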
To further fine-tune, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
We use 3 data sources: the [Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), Wikiwars, and Tweets datasets. For the correct data versions, please refer to our [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
# Training procedure
The model is trained from publicly available checkpoints on Hugging Face (`bert-base-uncased`), with a batch size of 34. We use a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 5 different random seeds; this version of the model uses seed=4.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
|
agcagc/ppo-LunarLander-v2-clear | agcagc | 2023-03-22T10:04:14Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T10:04:03Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -145.06 +/- 94.66
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'agcagc/ppo-LunarLander-v2-clear',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
whit/mthwmdl | whit | 2023-03-22T10:03:51Z | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2023-03-22T09:34:04Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mthwmdl Dreambooth model trained by whit with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
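A `diffusers` sketch (not part of the original card; the instance token in the prompt is a guess based on the session name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("whit/mthwmdl", torch_dtype=torch.float16).to("cuda")
image = pipe("photo of mthwmdl").images[0]  # "mthwmdl" as instance token is an assumption
image.save("sample.png")
```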
Sample pictures of this concept:
|
juliusco/GPT-2-finetuned-papers | juliusco | 2023-03-22T10:01:37Z | 4 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-03-22T08:20:52Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: juliusco/GPT-2-finetuned-papers
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juliusco/GPT-2-finetuned-papers
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4240
- Validation Loss: 2.2215
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
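A generation sketch (not part of the original card; the repo ships TensorFlow weights, hence `framework="tf"`, and the prompt is made up):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="juliusco/GPT-2-finetuned-papers", framework="tf")
print(generator("In this paper, we propose", max_new_tokens=30)[0]["generated_text"])
```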
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4240 | 2.2215 | 0 |
### Framework versions
- Transformers 4.27.2
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
mrm8488/bloom-560m-finetuned-samsum | mrm8488 | 2023-03-22T10:00:28Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bloom",
"text-generation",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2022-09-27T15:31:39Z | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloom-560m-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-560m-finetuned-samsum
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the [SAMSum](https://huggingface.co/datasets/samsum) dialogue-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9178
## Model description
More information needed
## Intended uses & limitations
More information needed
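A usage sketch (not part of the original card; the "dialogue then `Summary:`" prompt format is an assumption, since the card does not document how inputs were formatted during fine-tuning):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/bloom-560m-finetuned-samsum")
model = AutoModelForCausalLM.from_pretrained("mrm8488/bloom-560m-finetuned-samsum")

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!"
inputs = tokenizer(dialogue + "\nSummary:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```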
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7663 | 0.63 | 200 | 2.6934 |
| 2.3769 | 1.26 | 400 | 2.6274 |
| 2.2776 | 1.89 | 600 | 2.5818 |
| 1.873 | 2.52 | 800 | 2.7177 |
| 1.6715 | 3.15 | 1000 | 2.9178 |
| 1.4515 | 3.78 | 1200 | 2.8924 |
| 1.0522 | 4.42 | 1400 | 3.3753 |
| 1.0237 | 5.05 | 1600 | 3.8098 |
| 0.7416 | 5.68 | 1800 | 3.9139 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
marcovarrone/scvi-dlpfc-full-visium | marcovarrone | 2023-03-22T09:58:46Z | 0 | 1 | scvi-tools | [
"scvi-tools",
"biology",
"genomics",
"single-cell",
"model_cls_name:SCVI",
"scvi_version:0.20.0",
"anndata_version:0.8.0",
"modality:rna",
"annotated:False",
"license:cc-by-4.0",
"region:us"
]
| null | 2023-03-22T09:51:57Z | ---
license: cc-by-4.0
library_name: scvi-tools
tags:
- biology
- genomics
- single-cell
- model_cls_name:SCVI
- scvi_version:0.20.0
- anndata_version:0.8.0
- modality:rna
- annotated:False
---
# Description
scVI model trained on the full DLPFC Visium data (including the pilot samples).
# Model properties
Many model properties are in the model tags. Some more are listed below.
**model_init_params**:
```json
{
"n_hidden": 128,
"n_latent": 5,
"n_layers": 1,
"dropout_rate": 0.1,
"dispersion": "gene",
"gene_likelihood": "zinb",
"latent_distribution": "normal"
}
```
**model_setup_anndata_args**:
```json
{
"layer": "counts",
"batch_key": "patient",
"labels_key": null,
"size_factor_key": null,
"categorical_covariate_keys": [
"sample",
"study"
],
"continuous_covariate_keys": null
}
```
**model_summary_stats**:
| Summary Stat Key | Value |
|--------------------------|--------|
| n_batch | 13 |
| n_cells | 166443 |
| n_extra_categorical_covs | 2 |
| n_extra_continuous_covs | 0 |
| n_labels | 1 |
| n_vars | 5000 |
**model_data_registry**:
| Registry Key | scvi-tools Location |
|------------------------|--------------------------------------------|
| X | adata.layers['counts'] |
| batch | adata.obs['_scvi_batch'] |
| extra_categorical_covs | adata.obsm['_scvi_extra_categorical_covs'] |
| labels | adata.obs['_scvi_labels'] |
**model_parent_module**: scvi.model
**data_is_minified**: False
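A loading sketch via scvi-tools' Hub integration (illustrative only; the call assumes the `HubModel` API of scvi-tools >= 0.20):
```python
from scvi.hub import HubModel

hub_model = HubModel.pull_from_huggingface_hub(repo_name="marcovarrone/scvi-dlpfc-full-visium")
model = hub_model.model  # the trained SCVI model (with bundled AnnData, if any)
latent = model.get_latent_representation()  # 5-dimensional latent space, per n_latent above
```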
# Training data
This is an optional link to where the training data is stored if it is too large
to host on the huggingface Model hub.
<!-- If your model is not uploaded with any data (e.g., minified data) on the Model Hub, then make
sure to provide this field if you want users to be able to access your training data. See the scvi-tools
documentation for details. -->
Training data url: N/A
# Training code
This is an optional link to the code used to train the model.
Training code url: N/A
# References
1. Maynard, Kristen R., et al. "Transcriptome-scale spatial gene expression in the human dorsolateral prefrontal cortex." Nature neuroscience 24.3 (2021): 425-436.
2. Huuki-Myers, Louise A., et al. "Integrated single cell and unsupervised spatial transcriptomic analysis defines molecular anatomy of the human dorsolateral prefrontal cortex." BioRxiv (2023): 2023-02. |
nahiavl/huggy_20_03 | nahiavl | 2023-03-22T09:55:00Z | 10 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2023-03-20T14:17:46Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: nahiavl/huggy_20_03
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gogo432754/finetuning-sentiment-model-3000-samples | gogo432754 | 2023-03-22T09:52:33Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-21T10:25:24Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8712871287128714
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2955
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
ybelkada/clap-model-card | ybelkada | 2023-03-22T09:50:52Z | 0 | 1 | null | [
"arxiv:2211.06687",
"license:apache-2.0",
"region:us"
]
| null | 2023-03-14T16:13:06Z | ---
license: apache-2.0
---
# Model card for CLAP
Model card for CLAP: Contrastive Language-Audio Pretraining

# Table of Contents
0. [TL;DR](#TL;DR)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Citation](#citation)
# TL;DR
The abstract of the paper states that:
> Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models' results in the non-zero-shot setting. LAION-Audio-630K and the proposed model are both available to the public.
# Usage
You can use this model for zero shot audio classification or extracting audio and/or textual features.
# Uses
## Perform zero-shot audio classification
### Using `pipeline`
```python
from datasets import load_dataset
from transformers import pipeline
dataset = load_dataset("ashraq/esc50")
audio = dataset["train"]["audio"][-1]["array"]
audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of a vacuum cleaner"])
print(output)
>>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of a vacuum cleaner"}]
```
## Run the model:
You can also get the audio and text embeddings using `ClapModel`
### Run the model on CPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
audio_embed = model.get_audio_features(**inputs)
```
### Run the model on GPU:
```python
from datasets import load_dataset
from transformers import ClapModel, ClapProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = librispeech_dummy[0]
model = ClapModel.from_pretrained("laion/clap-htsat-unfused").to(0)
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")
inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
audio_embed = model.get_audio_features(**inputs)
```
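### Get text embeddings:
Text embeddings can be obtained the same way (a sketch continuing from the GPU example above; the candidate sentences are made up):
```python
inputs = processor(text=["Sound of a dog", "Sound of a vacuum cleaner"], return_tensors="pt", padding=True).to(0)
text_embed = model.get_text_features(**inputs)
```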
# Citation
If you are using this model for your work, please consider citing the original paper:
```
@misc{https://doi.org/10.48550/arxiv.2211.06687,
doi = {10.48550/ARXIV.2211.06687},
url = {https://arxiv.org/abs/2211.06687},
author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
gonced8/godel-multiwoz | gonced8 | 2023-03-22T09:43:29Z | 12 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"en",
"dataset:multi_woz_v22",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-01-24T12:30:06Z | ---
license: gpl-3.0
datasets:
- multi_woz_v22
language:
- en
metrics:
- bleu
- rouge
---
Pretrained model: [GODEL-v1_1-base-seq2seq](https://huggingface.co/microsoft/GODEL-v1_1-base-seq2seq/)
Fine-tuning dataset: [MultiWOZ 2.2](https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2)
# How to use:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("gonced8/godel-multiwoz")
model = AutoModelForSeq2SeqLM.from_pretrained("gonced8/godel-multiwoz")
# Encoder input
context = [
"USER: I need train reservations from norwich to cambridge",
"SYSTEM: I have 133 trains matching your request. Is there a specific day and time you would like to travel?",
"USER: I'd like to leave on Monday and arrive by 18:00.",
]
input_text = " EOS ".join(context[-5:]) + " => "
model_inputs = tokenizer(
input_text, max_length=512, truncation=True, return_tensors="pt"
)["input_ids"]
# Decoder input
answer_start = "SYSTEM: "
decoder_input_ids = tokenizer(
"<pad>" + answer_start,
max_length=256,
truncation=True,
add_special_tokens=False,
return_tensors="pt",
)["input_ids"]
# Generate
output = model.generate(
model_inputs, decoder_input_ids=decoder_input_ids, max_length=256
)
output = tokenizer.decode(
output[0], clean_up_tokenization_spaces=True, skip_special_tokens=True
)
print(output)
# SYSTEM: TR4634 arrives at 17:35. Would you like me to book that for you?
``` |
micromind/MNIST | micromind | 2023-03-22T09:39:31Z | 0 | 0 | null | [
"image-classification",
"en",
"dataset:mnist",
"license:mit",
"region:us"
]
| image-classification | 2023-02-22T09:47:46Z | ---
license: mit
datasets:
- mnist
language:
- en
pipeline_tag: image-classification
---
# micromind checkpoints for MNIST
This repository contains checkpoints for the MNIST dataset for the following networks:
| Model | Top 1 Accuracy | Top 5 Accuracy |
| ------------------ |---------------- | -------------- |
| `PhiNet(alpha=0.5, beta=1, t_zero=6, num_layers=4, resolution=28)` | 98.96% | 100% |
| `PhiNet(alpha=0.75, beta=1, t_zero=6, num_layers=5, resolution=28)` | 99.03% | 99.98% |
| `PhiNet(alpha=0.35, beta=1, t_zero=6, num_layers=7, resolution=28)` | 98.72% | 99.99% |
| `PhiNet(alpha=0.25, beta=1, t_zero=6, num_layers=7, resolution=28)` | 98.84% | 99.99% |
| `PhiNet(alpha=0.25, beta=1, t_zero=5, num_layers=7, resolution=28)` | 98.76% | 99.97% |
To download and use this repo:
```
from micromind import PhiNet
model = PhiNet.from_pretrained("MNIST", alpha=0.5, beta=1.0, t_zero=6, num_layers=4, num_classes=10, resolution=28)
```
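A quick forward-pass sketch (not part of the original card; it assumes the checkpoint expects single-channel 28x28 inputs, as in MNIST):
```
import torch

x = torch.rand(1, 1, 28, 28)  # dummy batch: one 28x28 grayscale image (assumed input shape)
logits = model(x)
print(logits.argmax(dim=1))   # predicted digit class
```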
## Authors
- [@fpaissan](https://www.github.com/fpaissan)
- [@matteobeltrami](https://www.github.com/matteobeltrami)
|
bhadresh-savani/electra-base-squad2 | bhadresh-savani | 2023-03-22T09:36:46Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"electra",
"question-answering",
"dataset:squad_v2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-04-13T14:25:23Z | ---
datasets:
- squad_v2
license: cc-by-4.0
---
# electra-base for QA
## Overview
**Language model:** electra-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [example](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py) in [FARM](https://github.com/deepset-ai/FARM/blob/master/examples/question_answering.py)
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
seed=42
batch_size = 32
n_epochs = 5
base_LM_model = "google/electra-base-discriminator"
max_seq_len = 384
learning_rate = 1e-4
lr_schedule = LinearWarmup
warmup_proportion = 0.1
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 77.30144024256717,
"f1": 81.35438272008543,
"total": 11873,
"HasAns_exact": 74.34210526315789,
"HasAns_f1": 82.45961302894314,
"HasAns_total": 5928,
"NoAns_exact": 80.25231286795626,
"NoAns_f1": 80.25231286795626,
"NoAns_total": 5945
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
### In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import Inferencer
model_name = "deepset/electra-base-squad2"
# a) Get predictions
nlp = Inferencer.load(model_name, task_type="question_answering")
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
### In haystack
For doing QA at scale (i.e. many docs instead of a single paragraph), you can also load the model in [haystack](https://github.com/deepset-ai/haystack/):
```python
from haystack.nodes import FARMReader, TransformersReader  # import path for Haystack 1.x

reader = FARMReader(model_name_or_path="deepset/electra-base-squad2")
# or
reader = TransformersReader(model="deepset/electra-base-squad2", tokenizer="deepset/electra-base-squad2")
```
## Authors
Vaishali Pal `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`
Note:
This model was borrowed from the Haystack model repo in order to add a TensorFlow version.
Mor1998/distilbert-base-uncased-distilled-dtkd-clinc | Mor1998 | 2023-03-22T08:54:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T08:05:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-dtkd-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9319354838709677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-dtkd-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0655
- Accuracy: 0.9319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6996 | 1.0 | 318 | 0.4150 | 0.5790 |
| 0.3134 | 2.0 | 636 | 0.2040 | 0.8381 |
| 0.1843 | 3.0 | 954 | 0.1330 | 0.8952 |
| 0.1322 | 4.0 | 1272 | 0.1032 | 0.9119 |
| 0.1053 | 5.0 | 1590 | 0.0858 | 0.9213 |
| 0.0908 | 6.0 | 1908 | 0.0771 | 0.9258 |
| 0.0813 | 7.0 | 2226 | 0.0710 | 0.9287 |
| 0.0754 | 8.0 | 2544 | 0.0681 | 0.9310 |
| 0.0717 | 9.0 | 2862 | 0.0660 | 0.9310 |
| 0.0701 | 10.0 | 3180 | 0.0655 | 0.9319 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
loveplay1983/distilbert-base-uncased-finetuned-emotion | loveplay1983 | 2023-03-22T08:51:15Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T02:20:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.941
- name: F1
type: f1
value: 0.9410654356868428
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1570
- Accuracy: 0.941
- F1: 0.9411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4447 | 1.0 | 1600 | 0.1987 | 0.9295 | 0.9290 |
| 0.155 | 2.0 | 3200 | 0.1570 | 0.941 | 0.9411 |
### Framework versions
- Transformers 4.27.1
- Pytorch 1.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
asuzuki/PPO-LunarLander-v2 | asuzuki | 2023-03-22T08:49:52Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
]
| reinforcement-learning | 2023-01-06T08:49:20Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -117.50 +/- 52.66
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'asuzuki/ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
bhadresh-savani/roberta-base-emotion | bhadresh-savani | 2023-03-22T08:48:07Z | 779 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"text-classification",
"emotion",
"en",
"dataset:emotion",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-03-02T23:29:05Z | ---
language:
- en
license: apache-2.0
tags:
- text-classification
- emotion
- pytorch
datasets:
- emotion
metrics:
- Accuracy, F1 Score
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
model-index:
- name: bhadresh-savani/roberta-base-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.931
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjg5OTI4ZTlkY2VmZjYzNGEzZGQ3ZjczYzY5YjJmMGVmZDQ4ZWNiYTAyZTJiZjlmMTU2MjE1NTllMWFhYzU0MiIsInZlcnNpb24iOjF9.dc44cEsbu900M2s64GyVIWKPagBzwI-dPlfvh0NGyJFMGKOcypke9P2ary9fBZITrH3UF6lza3sCh7vWYZFHBQ
- type: precision
value: 0.9168321948556312
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2EzYTcxNTExNGU1MmFiZjE3NGE5MDIyMDU2M2U3OGExOTdjZDE5YWU2NDhmOTJlYWMzY2NkN2U5MmRmZTE0MiIsInZlcnNpb24iOjF9.4U7vJ3ALdUUxySMhVeb4Qa1tSp3wphSIZkRYNMujz-KrOZW8kkcmCde3ioStBg3Qqyf1powYd88uk1R7DuWRBA
- type: precision
value: 0.931
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjhmZGRlYWE5ZTAzMmJiMzlmMWZiM2VlYjdiNzI0NjVmN2M2YzcxM2EzYTg0OTFiZTE1MjVmNzE5NGEzYTg2ZCIsInZlcnNpb24iOjF9.8eCHAK0rlZWnhBNQdh9kcuAeItmDUAgK3KkZ7eC-GyYhi4HT5dZiS6btcC5EjkYVOS4czcjzqxfVz4PuZgtLDQ
- type: precision
value: 0.9357445689014415
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDhhZTdkNzYzMjhjZjc4MTAxNWZiYjgzMjhhNjRiZWRmYjc5YTA0NTQ1MzllMTYxMTVkMDk4OTE0ZGEyMTNhMiIsInZlcnNpb24iOjF9.YIZfj2Eo1nMX2GVSfqJy-Cp7VBubfUh2LuOnU60sG5Lci8FdlNbAanS1IzAyxU3U29lqiTasxfS_yrwAj5cmBQ
- type: recall
value: 0.8743657671177089
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2Y2YTcyNzMwYzZiMmM1Yzc4YWZhNDM3ZDQyMjI1NWZhMjQyNmU5NTA0YmE2ZDBiZmY1MmUyZWRlMjRhMjFmYSIsInZlcnNpb24iOjF9.XKlFy_Cx4T4l7Otd8aAwWcI-fJ_dJ6V1Kp3uZm6OWjwCb1Do6mSdPFfwiMeBZZyfEIsNBnguegssZvHsOfTSAQ
- type: recall
value: 0.931
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzgzN2JkNzAzZDRjNjJmZjNkY2RmYzVkMWEzYTMzZDU4NzJlYzBmOWE4MTU0MGU0MTJhM2JjZDdjODhlZDExOCIsInZlcnNpb24iOjF9.9tSVB4yNBdFXpH3equwo1ZaEnVUktO6lm93UEJ-luKhxo6wgS54OLjgDq7IpJYwa3lvYyjy-sxzQEe9ri31WAg
- type: recall
value: 0.931
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGVhZTIyMmVmOTU1YWNjMmZiZjNmOTNlNzlhZTk3NjhlZmMwZGFkZWQxZTlhZWUwZGQyN2JhOWQyNWQ3MTVhOCIsInZlcnNpb24iOjF9.2odv2fK7zH0_S_7wC3obONzjxOipDdjWvddhnGdMnrIN6CiZwLp7XgizpqcWbwAQ_9YJwjC-6wXpbq2jTvN0Bw
- type: f1
value: 0.8821236522209227
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDI0YTUxOTA2M2ZjNGM1OTJlZDAzZTAxNTg4YjY3OWNmMjNmMTk0YWRjZTE2Y2ZmYWI1ZmU3ZmJmNzNjMjBlOCIsInZlcnNpb24iOjF9.P5-TbuEUrCtX9H7F-tKn8LI1RBPhoJwjJm_l853WTSzdLioThAtIK5HBG0xgXT2uB0Q8v94qH2b8cz1j_WonDg
- type: f1
value: 0.931
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjNmNDgyMmFjODYwNjcwOTJiOGM2N2YwYjUyMDk5Yjk2Y2I3NmFmZGFhYjU0NGM2OGUwZmRjNjcxYTU3YzgzNSIsInZlcnNpb24iOjF9.2ZoRJwQWVIcl_Ykxce1MnZ3mSxBGxGeNYFPxt9mivo9yTi3gUE7ua6JRpVEOnOUbevlWxVkUUNnmOPFqBN1sCQ
- type: f1
value: 0.9300782840205046
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGE1OTcxNmNmMjQ3ZDAzYzk0N2Q1MGFjM2VhNWMyYmRjY2E3ZThjODExOTNlNWMxYzdlMWM2MDBiMTZhY2M2OSIsInZlcnNpb24iOjF9.r63SEArCiFB5m0ccV2q_t5uSOtjVnWdz4PfvCYUchm0JlrRC9YAm5oWKeO419wdyFY4rZFe014yv7sRcV-CgBQ
- type: loss
value: 0.15155883133411407
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiN2M4MmVlNjAzZjhiMWJlNWQxMDg5ZTRiYjFlZGYyMGMyYzU4M2IwY2E1M2E2MzA5NmU5ZjgwZTZmMDI5YjgzMyIsInZlcnNpb24iOjF9.kjgFJohkTxLKtzHJDlBvd6qolGQDSZLbrDE7C07xNGmarhTLc_A3MmLeC4MmQGOl1DxfnHflImIkdqPylyylDA
---
# roberta-base-emotion
## Model description:
[roberta](https://arxiv.org/abs/1907.11692) is BERT trained with better hyperparameter choices, which is why its authors called it a Robustly Optimized BERT pretraining approach.
[roberta-base](https://huggingface.co/roberta-base) was fine-tuned on the emotion dataset using the HuggingFace Trainer with the hyperparameters below:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
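A minimal sketch of how these settings map onto `TrainingArguments` (the `output_dir` and evaluation strategy are assumptions, not stated in the card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-emotion",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    num_train_epochs=8,
    evaluation_strategy="epoch",  # assumed; the card does not specify
)
```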
## Model Performance Comparison on the Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97 | 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model='bhadresh-savani/roberta-base-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.002281982684507966},
{'label': 'joy', 'score': 0.9726489186286926},
{'label': 'love', 'score': 0.021365027874708176},
{'label': 'anger', 'score': 0.0026395076420158148},
{'label': 'fear', 'score': 0.0007162453257478774},
{'label': 'surprise', 'score': 0.0003483477921690792}
]]
"""
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
Follow the notebook above, changing the model name to roberta.
## Eval results
```json
{
'test_accuracy': 0.9395,
'test_f1': 0.9397328860104454,
'test_loss': 0.14367154240608215,
'test_runtime': 10.2229,
'test_samples_per_second': 195.639,
'test_steps_per_second': 3.13
}
```
## Reference:
* [Natural Language Processing with Transformer By Lewis Tunstall, Leandro von Werra, Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/) |
bhadresh-savani/bertweet-base-finetuned-emotion | bhadresh-savani | 2023-03-22T08:42:15Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-07-11T15:57:26Z | ---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bertweet-base-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- type: accuracy
value: 0.929
name: Accuracy
- type: f1
value: 0.9295613935787139
name: F1
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: test
metrics:
- type: accuracy
value: 0.925
name: Accuracy
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZThkYWEwYTdjY2IwMmE4NmM2Mzc3ZTVkNTNmNWYwNGUxYTM5ZDA5ODEwMGQ1ZGU0ZmJmY2U1ZDhjYWRlZjU2NSIsInZlcnNpb24iOjF9.QJYOUR_EPrYzbZGBb1N27BSlTQIdvd1hmUfnfPJdTGGrNoQwXBUA4amVsWh1txV_YtO8hcCx-b3pTqzpdy1FAw
- type: precision
value: 0.8722017563353339
name: Precision Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2Y3ZGM2NDk5ZTQyNTNjZDdmNjk5Y2IwNzkxNmU3MDM0YTljMTJjMzFmMTlkN2ZjN2NhZjNhYTVlMWY5NWFjNCIsInZlcnNpb24iOjF9.cBYScC_c6g1ECi3rj6HiRI3AMuoxg8wp7JKha0UKh1Q2qjzTr5ml8JAByPL0iu-Ix5BO2Bsx0fZNFhUS82LiCg
- type: precision
value: 0.925
name: Precision Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjVjYjgwYjE2ZWUyM2Y5MzE0ODk4NDA5MGM2ODIxYTgxZDYyMTUxNzcwZWQ2MjZjZGYwODkyNzFkMjAxOTUzYyIsInZlcnNpb24iOjF9.phgA4BJcqp4ZUhecNeuGU8OAf6f_asN9Mf6JfFGd0cPORYltd_N4Wf6EXqu6z1ADqWeeibteEyIUwmmMEbjYBQ
- type: precision
value: 0.9283646705517916
name: Precision Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjE4MDUyYjU4YTc4Mzk4YmIwMGIzZWEyYmU5ZDQ0MjRhZDQ4OGMwZjVmZmEyNDM5NzYyZTMzMTJiMmRkZTU4NiIsInZlcnNpb24iOjF9.LbYjoga-JSCzHZAF1fhm1CfuaSSI-ok0yXj3gtd4QTWY1TjzOHoMG3Q6zEGz84l6ASoHsvi9wjS7_EaSQLB4Dw
- type: recall
value: 0.8982480793145559
name: Recall Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzFiY2JmYWRhYmE1YzA1MmRhODI0MGE1NTRiMWE3YmNjZWQ4OGExMDg3NmUzYmUzYTYyNTdkNjM1Y2M0ODJmMCIsInZlcnNpb24iOjF9.dAq2gloG0O-4z5Ng7RZkFO7e0og3wBQBmIDzic6onwjw83yaHPVfRd1e0j6mNhMUifOwPLEavnYkBYa9DVFqCw
- type: recall
value: 0.925
name: Recall Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjg5Y2M2NWIwZmI4YjEyMmIzYjNjZmQyMzhlYjg2ZDRmY2U1M2I1NzQzYjRmMzYxYzJkNTI5MDJjMmY5ZDVmNyIsInZlcnNpb24iOjF9.Z5hmQBUsoKAgqTXk47aUDNKf5jJ0mXzY9TAgM9vG8I3pgCT465PEfM-TOKfG_YcPMLd3tkB8AdwDpmVnNj5QCw
- type: recall
value: 0.925
name: Recall Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODIzMjY1YjgzYzAwNjk2NGI1ZjFhZTg4MGY1Mzg5NDhkM2EzY2JlMWM4MjZmNjg4ZmEyZDJmZTUwNDFkZmNiOCIsInZlcnNpb24iOjF9.S-9p04Lru9WTzm50mM5qGM4oA-TPgNw6uwxKr5AejU1iPKjyTDQvoumBs41T5OKL5zN_NyYXsFsCermSbirLAw
- type: f1
value: 0.883488774573809
name: F1 Macro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGMyMjc3NTkxNDNjNTJiNmRmMTA4NzY0MTgyMDc3ZDE4N2RlMTY1YzU4OGQ1YmM1NzY2OGQ4Y2I0MzVhOGU3OSIsInZlcnNpb24iOjF9.D65sLHNZGjp15ra4i5ccYyOX705Xq-hftZjDb6kqE5X-jhzA5VLev6FirhnhyYLBQmA6Q9T1eDYHKkVZG4CcBg
- type: f1
value: 0.925
name: F1 Micro
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTNkMmUwNjYxNGE4M2E2MDA0NTI4ZjA1OTNkMWEwN2MzY2JmMWYxYThiYTZmM2MwZjM5YTIzMGIzMGI4ODJlZSIsInZlcnNpb24iOjF9.cB4WUQN_weyKdMZehH0ECaTcD9Jl1xzmrOzJZz27OJeCPjY0uW8O63HnJZ_LmBF2xqd7HDypT4s8hZBMT-6eDw
- type: f1
value: 0.9259820821054494
name: F1 Weighted
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTY3NDhjNmMzYzM1MjIyY2FkYjI5YTQ1NTdmODhmMGVlNjc2ZjQ3MWZmZWEyMDQ1OGI1NDllZTBhM2VjYzg2MSIsInZlcnNpb24iOjF9.Akd8PVgc2tyin_TaOZV1bio_b00g3QmlHA-GWV3rMX13B1imDLuPAuP-HWIwgqg-umQUkJzcUQlTqbcQ06v0DQ
- type: loss
value: 0.18158096075057983
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDZhODkzODQ2ZjYyMWQxYjEzNzRmMmQ0NjM3M2RiNDdlMTcwOGRhYjA0NWEwYTVjMmY0ZWY3NGQ3MzFhMTQ3ZSIsInZlcnNpb24iOjF9.jzv7qMmQuFmrsR3WoRAsCbrRJhNk0sfEcN07lCqhxUwYcO4rblVbBiePQtr0IDN067PbQmV6ES6W2cjHqvuHAA
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-finetuned-emotion
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1737
- Accuracy: 0.929
- F1: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.9469 | 1.0 | 250 | 0.3643 | 0.895 | 0.8921 |
| 0.2807 | 2.0 | 500 | 0.2173 | 0.9245 | 0.9252 |
| 0.1749 | 3.0 | 750 | 0.1859 | 0.926 | 0.9266 |
| 0.1355 | 4.0 | 1000 | 0.1737 | 0.929 | 0.9296 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bhadresh-savani/electra-base-discriminator-finetuned-conll03-english | bhadresh-savani | 2023-03-22T08:41:28Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"electra",
"token-classification",
"en",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2022-04-02T11:22:08Z | ---
language:
- en
tags:
- token-classification
- pytorch
license: apache-2.0
datasets:
- conll2003
metrics:
- Accuracy, F1 Score, Precision, Recall
model-index:
- name: bhadresh-savani/electra-base-discriminator-finetuned-conll03-english
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9397659001450176
verified: true
- name: Precision
type: precision
value: 0.9492206245667668
verified: true
- name: Recall
type: recall
value: 0.9468813162653806
verified: true
- name: F1
type: f1
value: 0.9480495273598721
verified: true
- name: loss
type: loss
value: 0.3468747138977051
verified: true
---
# Electra Base Discriminator conll03 English
# Results:
```
***** predict metrics *****
predict_accuracy = 0.9813
predict_f1 = 0.9137
predict_loss = 0.1251
predict_precision = 0.9098
predict_recall = 0.9177
predict_runtime = 0:00:10.11
predict_samples_per_second = 341.368
predict_steps_per_second = 42.696
```
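For quick experimentation, the checkpoint can also be loaded with the token-classification (NER) pipeline. A minimal sketch; the sentence below is illustrative:
```python
from transformers import pipeline

# Minimal usage sketch; the sentence below is illustrative.
ner = pipeline(
    "ner",
    model="bhadresh-savani/electra-base-discriminator-finetuned-conll03-english",
    aggregation_strategy="simple",  # group sub-word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
``` |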
GanjinZero/biobart-v2-base | GanjinZero | 2023-03-22T08:22:33Z | 779 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"biobart",
"biomedical",
"en",
"arxiv:2204.03905",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-10-21T15:57:43Z | ---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
- type: "text-generation"
---
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
V2 adopts a new biomedical vocabulary.
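Since BioBART is a BART-style sequence-to-sequence model, mask filling can be sketched with `BartForConditionalGeneration` (the decoding settings below are illustrative, not from the paper):
```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("GanjinZero/biobart-v2-base")
model = BartForConditionalGeneration.from_pretrained("GanjinZero/biobart-v2-base")

# Encode a sentence containing BART's <mask> token and let the decoder fill it in.
inputs = tokenizer("Influenza is a <mask> disease.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```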
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
``` |
GanjinZero/biobart-base | GanjinZero | 2023-03-22T08:22:29Z | 463 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"biobart",
"biomedical",
"en",
"arxiv:2204.03905",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2022-03-12T07:00:32Z | ---
language:
- en
license: apache-2.0
tags:
- bart
- biobart
- biomedical
inference: true
widget:
- text: "Influenza is a <mask> disease."
- type: "text-generation"
---
Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf)
```
@misc{BioBART,
title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model},
author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu},
year={2022},
eprint={2204.03905},
archivePrefix={arXiv}
}
``` |
LarryAIDraw/SaekiSayakaBloomInto_v10 | LarryAIDraw | 2023-03-22T08:12:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T08:10:34Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/22624/saeki-sayaka-bloom-into-you |
LarryAIDraw/dunkerqueAzurLane_v10 | LarryAIDraw | 2023-03-22T08:12:32Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T08:09:25Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/19040/dunkerque-or-azur-lane |
LarryAIDraw/liselotteCretiaSeirei_liselottecretia4 | LarryAIDraw | 2023-03-22T08:12:10Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T08:08:33Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/22379/liselotte-cretia-seirei-gensouki |
patrickramos/bert-base-japanese-v2-wrime-fine-tune | patrickramos | 2023-03-22T08:11:34Z | 5,123 | 6 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"text-classification",
"ja",
"dataset:wrime",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2022-05-22T09:42:14Z | ---
license: cc-by-sa-3.0
language:
- ja
tags:
- emotion-analysis
datasets:
- wrime
widget:
- text: "車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。"
---
# WRIME-fine-tuned BERT base Japanese
This model is a [Japanese BERT<sub>BASE</sub>](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) fine-tuned on the [WRIME](https://github.com/ids-cv/wrime) dataset. It was trained as part of the paper ["Emotion Analysis of Writers and Readers of Japanese Tweets on Vaccinations"](https://aclanthology.org/2022.wassa-1.10/). Fine-tuning code is available at this [repo](https://github.com/PatrickJohnRamos/BERT-Japan-vaccination).
# Intended uses and limitations
This model can be used to predict intensity scores for eight emotions, for both writers and readers. Please refer to the `Fine-tuning data` section for the list of emotions.
Because of the regression fine-tuning task, it is possible for the model to infer scores outside of the range of the fine-tuning data (`score < 0` or `score > 3`).
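A minimal inference sketch. The writer-first/reader-second output order and the emotion ordering below are assumptions based on this card, and the Japanese tokenizer requires `fugashi` and `unidic-lite`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "patrickramos/bert-base-japanese-v2-wrime-fine-tune"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

emotions = ["joy", "sadness", "anticipation", "surprise", "anger", "fear", "disgust", "trust"]

inputs = tokenizer("車のタイヤがパンクしてた。。いたずらの可能性が高いんだって。。", return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze()  # 16 regression outputs
# Assumption: the first 8 scores are writer intensities, the last 8 reader intensities.
for emotion, writer, reader in zip(emotions, scores[:8], scores[8:]):
    print(f"{emotion}: writer={writer:.2f}, reader={reader:.2f}")
```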
# Model Architecture, Tokenization, and Pretraining
The Japanese BERT<sub>BASE</sub> fine-tuned was `cl-tohoku/bert-base-japanese-v2`. Please refer to their [model card](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) for details regarding the model architecture, tokenization, pretraining data, and pretraining procedure.
# Fine-tuning data
The model is fine-tuned on [WRIME](https://github.com/ids-cv/wrime), a dataset of Japanese Tweets annotated with writer and reader emotion intensities. We use version 1 of the dataset. Each Tweet is accompanied by one set of writer emotion intensities (from the author of the Tweet) and three sets of reader emotion intensities (from three annotators). The emotions follow Plutchik's eight basic emotions, namely:
* joy
* sadness
* anticipation
* surprise
* anger
* fear
* disgust
* trust
These emotion intensities follow a four-point scale:
| emotion intensity | emotion presence|
|---|---|
| 0 | no |
| 1 | weak |
| 2 | medium |
| 3 | strong |
# Fine-tuning
The BERT is fine-tuned to directly regress, for each Tweet, the emotion intensities of the writer and the averaged emotion intensities of the readers, meaning there are 16 outputs (8 emotions each for the writer and the readers).
The fine-tuning was inspired by common BERT fine-tuning procedures. The BERT was fine-tuned on WRIME for 3 epochs using the AdamW optimizer with a learning rate of 2e-5, β<sub>1</sub>=0.9, β<sub>2</sub>=0.999, weight decay of 0.01, linear decay, a warmup ratio of 0.01, and a batch size of 32. Training was conducted with an NVIDIA Tesla K80 and finished in 3 hours.
# Evaluation results
Below are the MSEs of the BERT on the test split of WRIME.
| Annotator | Joy | Sadness | Anticipation | Surprise | Anger | Fear | Disgust | Trust | Overall |
|---|---|---|---|---|---|---|---|---|---|
| Writer | 0.658 | 0.688 | 0.746 | 0.542 | 0.486 | 0.462 | 0.664 | 0.400 | 0.581 |
| Reader | 0.192 | 0.178 | 0.211 | 0.139 | 0.032 | 0.147 | 0.123 | 0.029 | 0.131 |
| Both | 0.425 | 0.433 | 0.479 | 0.341 | 0.259 | 0.304 | 0.394 | 0.214 | 0.356 | |
Mor1998/distilbert-base-uncased-distilled-clinc | Mor1998 | 2023-03-22T07:58:27Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-22T07:48:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9470967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1678
- Accuracy: 0.9471
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.511 | 1.0 | 318 | 1.0358 | 0.7565 |
| 0.806 | 2.0 | 636 | 0.5325 | 0.8803 |
| 0.4369 | 3.0 | 954 | 0.3103 | 0.9219 |
| 0.2731 | 4.0 | 1272 | 0.2269 | 0.9368 |
| 0.2042 | 5.0 | 1590 | 0.1968 | 0.9403 |
| 0.175 | 6.0 | 1908 | 0.1824 | 0.9465 |
| 0.1589 | 7.0 | 2226 | 0.1745 | 0.9465 |
| 0.1498 | 8.0 | 2544 | 0.1708 | 0.9468 |
| 0.1445 | 9.0 | 2862 | 0.1686 | 0.9461 |
| 0.1425 | 10.0 | 3180 | 0.1678 | 0.9471 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
easyNLP/distilbert-base-uncased-finetuned-emotion | easyNLP | 2023-03-22T07:45:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-02-23T08:32:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2078
- Accuracy: 0.9225
- F1: 0.9228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7799 | 1.0 | 250 | 0.2978 | 0.9045 | 0.9020 |
| 0.2346 | 2.0 | 500 | 0.2078 | 0.9225 | 0.9228 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
|
domenicrosati/led-base-16384-biolaysum-both-with_references | domenicrosati | 2023-03-22T07:41:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2023-03-20T17:44:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: led-base-16384-biolaysum-both-with_references
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# led-base-16384-biolaysum-both-with_references
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1595
- Rouge1: 0.4548
- Rouge2: 0.1555
- Rougel: 0.2435
- Rougelsum: 0.2435
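A minimal usage sketch with the summarization pipeline (the input below stands in for a full biomedical article, and the length settings are illustrative):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="domenicrosati/led-base-16384-biolaysum-both-with_references",
)
article = "Influenza vaccination reduces the risk of severe illness..."  # placeholder abstract
print(summarizer(article, max_length=256, min_length=64)[0]["summary_text"])
```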
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.2751 | 0.69 | 5000 | 2.2219 | 0.4488 | 0.1496 | 0.2392 | 0.2392 |
| 2.0407 | 1.37 | 10000 | 2.1595 | 0.4548 | 0.1555 | 0.2435 | 0.2435 |
| 1.9246 | 2.06 | 15000 | 2.1263 | 0.4537 | 0.1522 | 0.2395 | 0.2396 |
| 1.9066 | 2.75 | 20000 | 2.1091 | 0.4562 | 0.1538 | 0.2409 | 0.2409 |
| 1.7802 | 3.43 | 25000 | 2.0998 | 0.4539 | 0.1523 | 0.2411 | 0.2411 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.12.1
|
dhanyaXchandra/femasturboobs | dhanyaXchandra | 2023-03-22T07:14:34Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T07:13:30Z | ---
license: creativeml-openrail-m
---
|
dhanyaXchandra/cameltoe | dhanyaXchandra | 2023-03-22T07:13:01Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T07:08:55Z | ---
license: creativeml-openrail-m
---
|
Splend1dchan/canine-c-squad | Splend1dchan | 2023-03-22T07:09:39Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"canine",
"question-answering",
"endpoints_compatible",
"region:us"
]
| question-answering | 2022-04-08T14:16:41Z | python run_squad.py \
--model_name_or_path google/canine-c \
--do_train \
--do_eval \
--per_gpu_train_batch_size 1 \
--per_gpu_eval_batch_size 1 \
--gradient_accumulation_steps 128 \
--learning_rate 3e-5 \
--num_train_epochs 3 \
--max_seq_length 1024 \
--doc_stride 128 \
--max_answer_length 240 \
--output_dir canine-c-squad \
--model_type bert
Model configuration:
```json
{
  "_name_or_path": "google/canine-c",
  "architectures": [
    "CanineForQuestionAnswering"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 57344,
  "downsampling_rate": 4,
  "eos_token_id": 57345,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "local_transformer_stride": 128,
  "max_position_embeddings": 16384,
  "model_type": "canine",
  "num_attention_heads": 12,
  "num_hash_buckets": 16384,
  "num_hash_functions": 8,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "torch_dtype": "float32",
  "transformers_version": "4.19.0.dev0",
  "type_vocab_size": 16,
  "upsampling_kernel_size": 4,
  "use_cache": true
}
```
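A minimal sketch of extractive QA with this checkpoint (the question and context are illustrative):
```python
import torch
from transformers import CanineTokenizer, CanineForQuestionAnswering

tokenizer = CanineTokenizer.from_pretrained("Splend1dchan/canine-c-squad")
model = CanineForQuestionAnswering.from_pretrained("Splend1dchan/canine-c-squad")

question = "What was the model fine-tuned on?"
context = "CANINE is a tokenization-free character-level encoder. This checkpoint was fine-tuned on SQuAD."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end character positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```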
SQuAD evaluation results:
```
{'exact': 58.893093661305585, 'f1': 72.18823344945899}
``` |
Mor1998/distilbert-base-uncased-finetuned-clinc | Mor1998 | 2023-03-22T07:07:01Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-03-21T03:56:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2890 | 0.7432 |
| 2.6284 | 2.0 | 636 | 1.8756 | 0.8377 |
| 1.5483 | 3.0 | 954 | 1.1572 | 0.8961 |
| 1.015 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7953 | 5.0 | 1590 | 0.7721 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.13.1+cu116
- Datasets 1.16.1
- Tokenizers 0.10.3
|
dhanyaXchandra/skirtlift | dhanyaXchandra | 2023-03-22T07:00:56Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T06:58:13Z | ---
license: creativeml-openrail-m
---
|
pfunk/PongNoFrameskip-v4-P_DQPN_x2-seed555 | pfunk | 2023-03-22T06:59:25Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"PongNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2023-03-22T06:59:16Z | ---
tags:
- PongNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQPN_freq
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PongNoFrameskip-v4
type: PongNoFrameskip-v4
metrics:
- type: mean_reward
value: 20.20 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **DQPN_freq** Agent Playing **PongNoFrameskip-v4**
This is a trained model of a DQPN_freq agent playing PongNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/P_DQPN_x2.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[P_DQPN_x2]"
python -m cleanrl_utils.enjoy --exp-name P_DQPN_x2 --env-id PongNoFrameskip-v4
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x2-seed555/raw/main/dqpn_freq_atari.py
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x2-seed555/raw/main/pyproject.toml
curl -OL https://huggingface.co/pfunk/PongNoFrameskip-v4-P_DQPN_x2-seed555/raw/main/poetry.lock
poetry install --all-extras
python dqpn_freq_atari.py --track --wandb-entity pfunk --wandb-project-name dqpn --capture-video true --save-model true --upload-model true --hf-entity pfunk --exp-name P_DQPN_x2 --policy-network-frequency 2000 --seed 555
```
# Hyperparameters
```python
{'alg_type': 'dqpn_freq_atari.py',
'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'cuda': True,
'double_learning': False,
'end_e': 0.01,
'env_id': 'PongNoFrameskip-v4',
'exp_name': 'P_DQPN_x2',
'exploration_fraction': 0.2,
'gamma': 0.99,
'hf_entity': 'pfunk',
'learning_rate': 0.0001,
'learning_starts': 10000,
'max_gradient_norm': inf,
'policy_network_frequency': 2000,
'policy_tau': 1.0,
'save_model': True,
'seed': 555,
'start_e': 1.0,
'target_network_frequency': 1000,
'target_tau': 1.0,
'torch_deterministic': True,
'total_timesteps': 5000000,
'track': True,
'train_frequency': 1,
'upload_model': True,
'wandb_entity': 'pfunk',
'wandb_project_name': 'dqpn'}
```
|
dhanyaXchandra/breastinclassbetter | dhanyaXchandra | 2023-03-22T06:51:04Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T06:48:01Z | ---
license: creativeml-openrail-m
---
|
LKINGKK/2131 | LKINGKK | 2023-03-22T06:50:54Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
]
| null | 2023-03-22T06:50:16Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
### How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dhanyaXchandra/creampiev11 | dhanyaXchandra | 2023-03-22T06:47:34Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
]
| null | 2023-03-22T06:46:27Z | ---
license: creativeml-openrail-m
---
|
vietgpt/bert-30M-cased | vietgpt | 2023-03-22T06:47:30Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"vi",
"dataset:hieunguyen1053/binhvq-news-corpus",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2023-02-27T20:38:15Z | ---
license: apache-2.0
datasets:
- hieunguyen1053/binhvq-news-corpus
language:
- vi
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: "Tôi là <mask> viên trường Đại học Tôn Đức Thắng"
example_title: "Example 1"
--- |