modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-28 12:28:24) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-28 12:27:53) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
EnterNameBros/DialoGPT-large-Senko-san-ver-2 | EnterNameBros | 2023-05-24T03:42:34Z | 143 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-24T03:17:01Z | ---
pipeline_tag: conversational
language:
- en
metrics:
- character
- accuracy
--- |
AdonaiHS/unit_8_LunarLander-v2 | AdonaiHS | 2023-05-24T03:38:36Z | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-24T03:38:30Z | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -268.70 +/- 134.46
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'AdonaiHS/unit_8_LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
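The checkpoint follows the CleanRL-style PPO implementation from Unit 8 of the Deep RL Course, so it is loaded with the `Agent` class from that notebook rather than a library loader. A minimal, hedged sketch for fetching the saved agent from the Hub (the filename `ppo.cleanrl_model` is an assumption; check the repository's file list):
```python
import torch
from huggingface_hub import hf_hub_download

# Download the serialized agent; "ppo.cleanrl_model" is an assumed filename.
checkpoint_path = hf_hub_download(
    repo_id="AdonaiHS/unit_8_LunarLander-v2",
    filename="ppo.cleanrl_model",
)

# The file holds PyTorch weights for the Agent network defined in the
# Unit 8 / CleanRL notebook; instantiate that class and load these weights
# into it before running evaluation episodes.
state_dict = torch.load(checkpoint_path, map_location="cpu")
```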
|
YakovElm/Jira5Classic_with_cleaning | YakovElm | 2023-05-24T03:37:03Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-24T03:36:17Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira5Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2646
- Train Accuracy: 0.8919
- Validation Loss: 1.1625
- Validation Accuracy: 0.5584
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
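Pending fuller documentation, a minimal inference sketch using the standard TensorFlow `transformers` API (the label mapping is not described in this card, so the predicted class is reported as a raw index):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "YakovElm/Jira5Classic_with_cleaning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a sample issue text and take the argmax over the logits.
inputs = tokenizer("Example Jira issue text", return_tensors="tf", truncation=True)
logits = model(**inputs).logits
predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
print(predicted_class)
```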
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5257 | 0.7639 | 0.6646 | 0.5931 | 0 |
| 0.4200 | 0.7901 | 1.2433 | 0.4890 | 1 |
| 0.2646 | 0.8919 | 1.1625 | 0.5584 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mio/chtholly | mio | 2023-05-24T03:21:37Z | 3 | 9 | espnet | [
"espnet",
"audio",
"text-to-speech",
"jp",
"dataset:chtholly",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | 2023-05-22T13:06:29Z | ---
tags:
- espnet
- audio
- text-to-speech
language: jp
datasets:
- chtholly
license: cc-by-4.0
widget:
- text: "こんにちは、クトリ・ノタ・ セニオリスです。 終末なにしてますか? 忙しいですか? 救ってもらっていいですか?"
---
## ESPnet2 TTS model

### `mio/chtholly`
This model was trained by mio using chtholly recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 0232f540a98ece921477b961db8ae019211da9af
pip install -e .
cd egs2/chtholly/tts1
./run.sh --skip_data_prep false --skip_train true --download_model mio/chtholly
```
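For Python-side inference, a short sketch using the `espnet2` text-to-speech interface is given below (this assumes the `espnet` and `espnet_model_zoo` packages are installed so the model can be pulled directly from the Hub):
```python
# pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Load the fine-tuned VITS model directly from the Hugging Face Hub.
text2speech = Text2Speech.from_pretrained("mio/chtholly")

# Synthesize a short Japanese greeting and write it to a wav file.
output = text2speech("こんにちは、クトリです。")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs, "PCM_16")
```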
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/finetune_vits.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_chtholly_vits_finetune_from_jsut
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 50705
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- total_count
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: false
create_graph_in_tensorboard: false
use_wandb: true
wandb_project: chtholly
wandb_id: null
wandb_entity: null
wandb_name: vits_finetune_chtholly_from_jsut
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- downloads/f3698edf589206588f58f5ec837fa516/exp/tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause/train.total_count.ave_10best.pth:tts:tts
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 5000000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/text_shape.phn
- exp/tts_stats_raw_linear_spectrogram_phn_jaconv_pyopenjtalk_accent_with_pause/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/22k/raw/train/text
- text
- text
- - dump/22k/raw/train/wav.scp
- speech
- sound
valid_data_path_and_name_and_type:
- - dump/22k/raw/dev/text
- text
- text
- - dump/22k/raw/dev/wav.scp
- speech
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0001
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: false
token_list:
- <blank>
- <unk>
- '1'
- '2'
- '0'
- '3'
- '4'
- '-1'
- '5'
- a
- o
- '-2'
- i
- '-3'
- u
- e
- k
- n
- t
- '6'
- r
- '-4'
- s
- N
- m
- pau
- '7'
- sh
- d
- g
- w
- '8'
- U
- '-5'
- I
- cl
- h
- y
- b
- '9'
- j
- ts
- ch
- '-6'
- z
- p
- '-7'
- f
- ky
- ry
- '-8'
- gy
- '-9'
- hy
- ny
- '-10'
- by
- my
- '-11'
- '-12'
- '-13'
- py
- '-14'
- '-15'
- v
- '10'
- '-16'
- '-17'
- '11'
- '-21'
- '-20'
- '12'
- '-19'
- '13'
- '-18'
- '14'
- dy
- '15'
- ty
- '-22'
- '16'
- '18'
- '19'
- '17'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: jaconv
g2p: pyopenjtalk_accent_with_pause
feats_extract: linear_spectrogram
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
normalize: null
normalize_conf: {}
tts: vits
tts_conf:
generator_type: vits_generator
generator_params:
hidden_channels: 192
spks: -1
global_channels: -1
segment_size: 32
text_encoder_attention_heads: 2
text_encoder_ffn_expand: 4
text_encoder_blocks: 6
text_encoder_positionwise_layer_type: conv1d
text_encoder_positionwise_conv_kernel_size: 3
text_encoder_positional_encoding_layer_type: rel_pos
text_encoder_self_attention_layer_type: rel_selfattn
text_encoder_activation_type: swish
text_encoder_normalize_before: true
text_encoder_dropout_rate: 0.1
text_encoder_positional_dropout_rate: 0.0
text_encoder_attention_dropout_rate: 0.1
use_macaron_style_in_text_encoder: true
use_conformer_conv_in_text_encoder: false
text_encoder_conformer_kernel_size: -1
decoder_kernel_size: 7
decoder_channels: 512
decoder_upsample_scales:
- 8
- 8
- 2
- 2
decoder_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
decoder_resblock_kernel_sizes:
- 3
- 7
- 11
decoder_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
use_weight_norm_in_decoder: true
posterior_encoder_kernel_size: 5
posterior_encoder_layers: 16
posterior_encoder_stacks: 1
posterior_encoder_base_dilation: 1
posterior_encoder_dropout_rate: 0.0
use_weight_norm_in_posterior_encoder: true
flow_flows: 4
flow_kernel_size: 5
flow_base_dilation: 1
flow_layers: 4
flow_dropout_rate: 0.0
use_weight_norm_in_flow: true
use_only_mean_in_flow: true
stochastic_duration_predictor_kernel_size: 3
stochastic_duration_predictor_dropout_rate: 0.5
stochastic_duration_predictor_flows: 4
stochastic_duration_predictor_dds_conv_layers: 3
vocabs: 85
aux_channels: 513
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_dur: 1.0
lambda_kl: 1.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: null
pitch_extract_conf: {}
pitch_normalize: null
pitch_normalize_conf: {}
energy_extract: null
energy_extract_conf: {}
energy_normalize: null
energy_normalize_conf: {}
required:
- output_dir
- token_list
version: '202207'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
YakovElm/IntelDAOS20SetFitModel_clean_data | YakovElm | 2023-05-24T03:20:40Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-24T03:20:00Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/IntelDAOS20SetFitModel_clean_data
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/IntelDAOS20SetFitModel_clean_data")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
sarahpuspdew/DeepRLCourse_Unit4-Reinforce-CartPole-v1 | sarahpuspdew | 2023-05-24T03:20:03Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T01:51:05Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: DeepRLCourse_Unit4-Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
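As a pointer, a hedged sketch for pulling the saved policy from the Hub; the filename `model.pt` and the `Policy` class from the Unit 4 notebook are assumptions, not part of this card:
```python
import torch
from huggingface_hub import hf_hub_download

# "model.pt" is an assumed filename; check the repository's file list.
path = hf_hub_download(
    repo_id="sarahpuspdew/DeepRLCourse_Unit4-Reinforce-CartPole-v1",
    filename="model.pt",
)

# Depending on how the notebook saved it, the checkpoint is either a state dict
# or the pickled Policy object; the Policy class from the Unit 4 notebook must
# be importable in either case before evaluating the agent.
checkpoint = torch.load(path, map_location="cpu")
```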
|
YakovElm/IntelDAOS15SetFitModel_clean_data | YakovElm | 2023-05-24T03:06:54Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-24T03:06:18Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/IntelDAOS15SetFitModel_clean_data
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/IntelDAOS15SetFitModel_clean_data")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
arjunpatel/bloom-speed-check-small | arjunpatel | 2023-05-24T02:54:32Z | 0 | 0 | transformers | [
"transformers",
"text-generation",
"en",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-15T03:27:00Z | ---
language:
- en
pipeline_tag: text-generation
library_name: transformers
--- |
Erdenebold/test-distilbert-base-multilingual-cased | Erdenebold | 2023-05-24T02:53:05Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"mn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-24T02:11:40Z | ---
language:
- mn
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test-distilbert-base-multilingual-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-distilbert-base-multilingual-cased
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1533
- Precision: 0.8783
- Recall: 0.9010
- F1: 0.8895
- Accuracy: 0.9721
## Model description
More information needed
## Intended uses & limitations
More information needed
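Pending fuller documentation, a minimal inference sketch with the `token-classification` pipeline (the entity label set is not described in this card and comes from the undocumented training data):
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word-piece tokens back into whole entities.
ner = pipeline(
    "token-classification",
    model="Erdenebold/test-distilbert-base-multilingual-cased",
    aggregation_strategy="simple",
)

# A short Mongolian example sentence.
print(ner("Улаанбаатар хотод Монгол Улсын Ерөнхийлөгч ажиллаж байна."))
```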
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2124 | 1.0 | 477 | 0.1286 | 0.8065 | 0.8469 | 0.8262 | 0.9586 |
| 0.103 | 2.0 | 954 | 0.1113 | 0.8374 | 0.8772 | 0.8568 | 0.9663 |
| 0.0673 | 3.0 | 1431 | 0.1124 | 0.8480 | 0.8810 | 0.8641 | 0.9668 |
| 0.0474 | 4.0 | 1908 | 0.1165 | 0.8658 | 0.8922 | 0.8788 | 0.9710 |
| 0.0338 | 5.0 | 2385 | 0.1254 | 0.8664 | 0.8909 | 0.8785 | 0.9692 |
| 0.0236 | 6.0 | 2862 | 0.1349 | 0.8686 | 0.8954 | 0.8818 | 0.9707 |
| 0.018 | 7.0 | 3339 | 0.1428 | 0.8772 | 0.8991 | 0.8880 | 0.9715 |
| 0.0133 | 8.0 | 3816 | 0.1505 | 0.8739 | 0.8961 | 0.8849 | 0.9712 |
| 0.0106 | 9.0 | 4293 | 0.1529 | 0.8812 | 0.9012 | 0.8911 | 0.9720 |
| 0.0082 | 10.0 | 4770 | 0.1533 | 0.8783 | 0.9010 | 0.8895 | 0.9721 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bbbdbbb/sd-class-butterflies-32_test | bbbdbbb | 2023-05-24T02:43:12Z | 30 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2023-05-24T02:40:57Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('bbbdbbb/sd-class-butterflies-32_test')
image = pipeline().images[0]
image
```
|
dclab/task-test | dclab | 2023-05-24T02:37:37Z | 0 | 0 | null | [
"text-generation",
"license:other",
"region:us"
] | text-generation | 2023-05-23T14:37:47Z | ---
pipeline_tag: text-generation
license: other
--- |
YakovElm/IntelDAOS20Classic_with_cleaning | YakovElm | 2023-05-24T02:36:25Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-24T02:35:50Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS20Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1286
- Train Accuracy: 0.9610
- Validation Loss: 0.3677
- Validation Accuracy: 0.9099
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2178 | 0.9600 | 0.3604 | 0.9099 | 0 |
| 0.1502 | 0.9610 | 0.3197 | 0.9099 | 1 |
| 0.1286 | 0.9610 | 0.3677 | 0.9099 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cebernalc/ppo-LunarLander-v2 | cebernalc | 2023-05-24T02:28:13Z | 5 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-24T02:27:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.03 +/- 23.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
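A hedged completion of the template above, assuming a recent gymnasium-based stable-baselines3 with the Box2D extra installed; the zip filename inside the repository is an assumption and should be checked against the repo's file list:
```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# "ppo-LunarLander-v2.zip" is an assumed filename; verify it in the repository.
checkpoint = load_from_hub(
    repo_id="cebernalc/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes (requires gymnasium[box2d]).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```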
|
YakovElm/IntelDAOS10Classic_with_cleaning | YakovElm | 2023-05-24T02:08:53Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-24T02:08:18Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS10Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2283
- Train Accuracy: 0.9210
- Validation Loss: 0.4310
- Validation Accuracy: 0.8739
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3141 | 0.9200 | 0.3738 | 0.8739 | 0 |
| 0.2612 | 0.9200 | 0.4105 | 0.8739 | 1 |
| 0.2283 | 0.9210 | 0.4310 | 0.8739 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dev2bit/es2bash-mt5 | dev2bit | 2023-05-24T01:57:45Z | 9 | 1 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"code",
"bash",
"es",
"dataset:dev2bit/es2bash",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-23T23:23:17Z | ---
license: apache-2.0
datasets:
- dev2bit/es2bash
language:
- es
pipeline_tag: text2text-generation
tags:
- code
- bash
widget:
- text: Muestra el contenido de file.py que se encuentra en ~/project/
example_title: cat
- text: Lista los 3 primeros archivos en /bin
example_title: ls
- text: Por favor, cambia al directorio /home/user/project/
example_title: cd
- text: Lista todos los átomos del universo
example_title: noCommand
- text: ls -lh
example_title: literal
- text: file.txt
example_title: simple
---
# es2bash-mt5: Spanish to Bash Model
<p align="center">
<img width="460" height="300" src="https://dev2bit.com/wp-content/themes/lovecraft_child/assets/icons/dev2bit_monitor2.svg">
</p>
Developed by dev2bit, es2bash-mt5 is a language transformer model that is capable of predicting the optimal Bash command given a natural language request in Spanish. This model represents a major advancement in human-computer interaction, providing a natural language interface for Unix operating system commands.
## About the Model
es2bash-mt5 is a fine-tuning model based on mt5-small. It has been trained on the dev2bit/es2bash dataset, which specializes in translating natural language in Spanish into Bash commands.
This model is optimized for processing requests related to the commands:
* `cat`
* `ls`
* `cd`
## Usage
Below is an example of how to use es2bash-mt5 with the Hugging Face Transformers library:
```python
from transformers import pipeline
translator = pipeline('translation', model='dev2bit/es2bash-mt5')
request = "listar los archivos en el directorio actual"
translated = translator(request, max_length=512)
print(translated[0]['translation_text'])
```
This will print the Bash command corresponding to the given Spanish request.
## Contributions
We appreciate your contributions! You can help improve es2bash-mt5 in various ways, including:
* Testing the model and reporting any issues or suggestions in the Issues section.
* Improving the documentation.
* Providing usage examples.
---
# es2bash-mt5: Modelo de español a Bash
Desarrollado por dev2bit, `es2bash-mt5` es un modelo transformador de lenguaje que tiene la capacidad de predecir el comando Bash óptimo dada una solicitud en lenguaje natural en español. Este modelo representa un gran avance en la interacción humano-computadora, proporcionando una interfaz de lenguaje natural para los comandos del sistema operativo Unix.
## Sobre el modelo
`es2bash-mt5` es un modelo de ajuste fino basado en `mt5-small`. Ha sido entrenado en el conjunto de datos `dev2bit/es2bash`, especializado en la traducción de lenguaje natural en español a comandos Bash.
Este modelo está optimizado para procesar solicitudes relacionadas con los comandos:
* `cat`
* `ls`
* `cd`
## Uso
A continuación, se muestra un ejemplo de cómo usar `es2bash-mt5` con la biblioteca Hugging Face Transformers:
```python
from transformers import pipeline
translator = pipeline('translation', model='dev2bit/es2bash-mt5')
request = "listar los archivos en el directorio actual"
translated = translator(request, max_length=512)
print(translated[0]['translation_text'])
```
Esto imprimirá el comando Bash correspondiente a la solicitud dada en español.
## Contribuciones
Agradecemos sus contribuciones! Puede ayudar a mejorar es2bash-mt5 de varias formas, incluyendo:
* Probar el modelo y reportar cualquier problema o sugerencia en la sección de Issues.
* Mejorando la documentación.
* Proporcionando ejemplos de uso.
---
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the es2bash dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0919
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 28
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 21.394 | 1.0 | 672 | 1.7470 |
| 2.5294 | 2.0 | 1344 | 0.6350 |
| 0.5873 | 3.0 | 2016 | 0.2996 |
| 0.3802 | 4.0 | 2688 | 0.2142 |
| 0.2951 | 5.0 | 3360 | 0.1806 |
| 0.225 | 6.0 | 4032 | 0.1565 |
| 0.2065 | 7.0 | 4704 | 0.1461 |
| 0.1944 | 8.0 | 5376 | 0.1343 |
| 0.174 | 9.0 | 6048 | 0.1281 |
| 0.1647 | 10.0 | 6720 | 0.1207 |
| 0.1566 | 11.0 | 7392 | 0.1140 |
| 0.1498 | 12.0 | 8064 | 0.1106 |
| 0.1382 | 13.0 | 8736 | 0.1076 |
| 0.1393 | 14.0 | 9408 | 0.1042 |
| 0.1351 | 15.0 | 10080 | 0.1019 |
| 0.13 | 16.0 | 10752 | 0.0998 |
| 0.1292 | 17.0 | 11424 | 0.0983 |
| 0.1265 | 18.0 | 12096 | 0.0973 |
| 0.1255 | 19.0 | 12768 | 0.0969 |
| 0.1216 | 20.0 | 13440 | 0.0956 |
| 0.1216 | 21.0 | 14112 | 0.0946 |
| 0.123 | 22.0 | 14784 | 0.0938 |
| 0.113 | 23.0 | 15456 | 0.0931 |
| 0.1185 | 24.0 | 16128 | 0.0929 |
| 0.1125 | 25.0 | 16800 | 0.0927 |
| 0.1213 | 26.0 | 17472 | 0.0925 |
| 0.1153 | 27.0 | 18144 | 0.0921 |
| 0.1134 | 28.0 | 18816 | 0.0919 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
HippyHoppity/ppo-LunarLander-v2 | HippyHoppity | 2023-05-24T01:53:38Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-24T01:53:07Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 246.66 +/- 21.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ausboss/llama-30b-SuperHOT-4bit | ausboss | 2023-05-24T01:35:50Z | 7 | 5 | transformers | [
"transformers",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-23T18:23:06Z | Merge of [SuperHOT-LoRA-prototype](https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype) and [llama-30b](https://huggingface.co/huggyllama/llama-30b)
Llama30B-SuperHOT-4bit-128g.safetensors Quantization:
```
CUDA_VISIBLE_DEVICES=0 python llama.py ausboss/Llama30B-SuperHOT c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors Llama30B-SuperHOT-4bit-128g.safetensors
```
Llama30B-SuperHOT-4bit.safetensors Quantization:
```
CUDA_VISIBLE_DEVICES=0 python llama.py ausboss/Llama30B-SuperHOT c4 --wbits 4 --true-sequential --save_safetensors Llama30B-SuperHOT-4bit.safetensors
```
# From the SuperHot Page:
## Prototypes for SuperHOT
No guarantees for output quality, simply uploading what I have so others can play around with it. Not even sure if the rank in cutoff-8192 is correct (think it should be 10 maybe.. can't remember)
All prototypes are extremely early epochs (sub 0.5)
## Model/Training
All trained with Flash Attention with conversation sequence lengths ranging from 8K to 16K tokens (No Alibi unless otherwise mentioned)
All trained on LLaMa 13B 4-bit (no groupsize)
(*Personally, I like the 8K cutoff version better, so I would say start with that one*)
## Data
A combination of various datasets and cleaned logs converted into datasets including but not limited to:
- Bluemoon Fanbased
- Roleplaying Guild
- Community-sourced outputs
- [Dan's PocketDoc/RUCAIBox-Story-Generation-Alpaca](https://huggingface.co/datasets/PocketDoc/RUCAIBox-Story-Generation-Alpaca)
- [IlyaGusev/gpt_roleplay_realm](https://huggingface.co/datasets/IlyaGusev/gpt_roleplay_realm)
- others
## Bias
SuperHOT is a fiction-focused model. No alignment has been performed on the training data. Be mindful that this model may output harmful, violent, or otherwise problematic content.
## Format
Any format should work with such early checkpoints. However the training data is entirely in the following format:
```
---
mode: chat
characters:
<char1 name>: <descriptive tags for char1>
<char2 name>: <descriptive tags for char2>
summary: <summary of the story thus far or the purpose of the chat> (optional)
<any other miscellaneous data>
---
<chat history>
```
By "any other miscellaneous data", it means you should be able to put any additional metadata for the story or characters. I.e.,
```
...
locations:
location1: <tags for location1>
inventory:
item1: <tags for item1>
```
Again, format does not hold such a large weight on these early checkpoints. I have found success with the following setup for an RPG-like experience. Just play around with the format and see what works:
```
---
mode: rpg
characters:
You: a new player
system: The system controls the RPG, handles character creation, world narration, and quest management. Also controls any NPCs and inventory tracking. Their first message provides a lengthy introduction for the player into the RPG world they are about to play in. After completing the character creation, the system will give a lengthy introduction into the world of ___. The first quest will follow right after
rpg setting: The world of ___
rpg rules: Any rules typical of RPG games, including typical items, battle stats, etc
---
```
|
mauhcs/distilbert-base-uncased-finetuned-emotion | mauhcs | 2023-05-24T01:35:18Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T01:56:39Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9249666408719047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2147
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
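Pending fuller documentation, a minimal inference sketch with the `text-classification` pipeline:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mauhcs/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset uses six labels: sadness, joy, love, anger, fear, surprise.
print(classifier("I can't wait to see the results of this experiment!"))
```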
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8493 | 1.0 | 250 | 0.3120 | 0.9115 | 0.9084 |
| 0.2513 | 2.0 | 500 | 0.2147 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
edhrdh/cathx | edhrdh | 2023-05-24T00:49:11Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-24T00:47:02Z | ---
license: creativeml-openrail-m
---
|
Erdenebold/testing_mongolian-roberta_base | Erdenebold | 2023-05-24T00:33:57Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"mn",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-23T23:33:51Z | ---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: testing_mongolian-roberta_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing_mongolian-roberta_base
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1244
- Precision: 0.9311
- Recall: 0.9399
- F1: 0.9355
- Accuracy: 0.9821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1683 | 1.0 | 477 | 0.0805 | 0.8377 | 0.8921 | 0.8640 | 0.9730 |
| 0.0545 | 2.0 | 954 | 0.0739 | 0.9205 | 0.9334 | 0.9269 | 0.9806 |
| 0.0292 | 3.0 | 1431 | 0.0778 | 0.9270 | 0.9354 | 0.9312 | 0.9817 |
| 0.0164 | 4.0 | 1908 | 0.0884 | 0.9290 | 0.9360 | 0.9325 | 0.9820 |
| 0.008 | 5.0 | 2385 | 0.1025 | 0.9247 | 0.9365 | 0.9306 | 0.9811 |
| 0.0057 | 6.0 | 2862 | 0.1093 | 0.9294 | 0.9369 | 0.9331 | 0.9815 |
| 0.0037 | 7.0 | 3339 | 0.1173 | 0.9336 | 0.9412 | 0.9374 | 0.9822 |
| 0.0026 | 8.0 | 3816 | 0.1217 | 0.9281 | 0.9374 | 0.9327 | 0.9817 |
| 0.0016 | 9.0 | 4293 | 0.1225 | 0.9334 | 0.9399 | 0.9366 | 0.9821 |
| 0.0012 | 10.0 | 4770 | 0.1244 | 0.9311 | 0.9399 | 0.9355 | 0.9821 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sd-concepts-library/clothes | sd-concepts-library | 2023-05-24T00:05:08Z | 0 | 1 | null | [
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"region:us"
] | null | 2023-05-24T00:05:05Z | ---
license: mit
base_model: stabilityai/stable-diffusion-2
---
### Clothes on Stable Diffusion
This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
D-Roberts/tf-efficientformer-l3-300-dev1 | D-Roberts | 2023-05-23T23:43:54Z | 62 | 0 | transformers | [
"transformers",
"tf",
"efficientformer",
"image-feature-extraction",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2023-05-17T13:48:42Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tf-efficientformer-l3-300-dev1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-efficientformer-l3-300-dev1
dev only
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.11.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
D-Roberts/tf-efficientformer-l1-300-dev1 | D-Roberts | 2023-05-23T23:43:15Z | 61 | 0 | transformers | [
"transformers",
"tf",
"efficientformer",
"image-feature-extraction",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2023-05-17T13:47:22Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tf-efficientformer-l1-300-dev1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tf-efficientformer-l1-300-dev1
dev-only
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.11.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
wiorz/legal_bert_small_summarized_defined | wiorz | 2023-05-23T23:21:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-22T23:57:17Z | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_small_summarized_defined
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_small_summarized_defined
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8897
- Accuracy: 0.835
- Precision: 0.5
- Recall: 0.1515
- F1: 0.2326
- D-index: 1.5181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.4467 | 0.835 | 0.0 | 0.0 | 0.0 | 1.4607 |
| No log | 2.0 | 400 | 0.4909 | 0.835 | 0.0 | 0.0 | 0.0 | 1.4607 |
| 0.5409 | 3.0 | 600 | 0.4941 | 0.83 | 0.4545 | 0.1515 | 0.2273 | 1.5113 |
| 0.5409 | 4.0 | 800 | 0.5612 | 0.84 | 0.6 | 0.0909 | 0.1579 | 1.5021 |
| 0.4849 | 5.0 | 1000 | 0.6301 | 0.84 | 0.5714 | 0.1212 | 0.2 | 1.5135 |
| 0.4849 | 6.0 | 1200 | 0.8969 | 0.84 | 0.6 | 0.0909 | 0.1579 | 1.5021 |
| 0.4849 | 7.0 | 1400 | 1.3171 | 0.82 | 0.3636 | 0.1212 | 0.1818 | 1.4865 |
| 0.2104 | 8.0 | 1600 | 1.6653 | 0.775 | 0.2692 | 0.2121 | 0.2373 | 1.4593 |
| 0.2104 | 9.0 | 1800 | 1.7041 | 0.795 | 0.3182 | 0.2121 | 0.2545 | 1.4866 |
| 0.0314 | 10.0 | 2000 | 1.7495 | 0.815 | 0.3571 | 0.1515 | 0.2128 | 1.4911 |
| 0.0314 | 11.0 | 2200 | 1.7627 | 0.815 | 0.3571 | 0.1515 | 0.2128 | 1.4911 |
| 0.0314 | 12.0 | 2400 | 1.7892 | 0.825 | 0.375 | 0.0909 | 0.1463 | 1.4819 |
| 0.0067 | 13.0 | 2600 | 1.8211 | 0.83 | 0.4444 | 0.1212 | 0.1905 | 1.5000 |
| 0.0067 | 14.0 | 2800 | 1.8567 | 0.83 | 0.4444 | 0.1212 | 0.1905 | 1.5000 |
| 0.0 | 15.0 | 3000 | 1.8817 | 0.83 | 0.4444 | 0.1212 | 0.1905 | 1.5000 |
| 0.0 | 16.0 | 3200 | 1.8590 | 0.825 | 0.4167 | 0.1515 | 0.2222 | 1.5046 |
| 0.0 | 17.0 | 3400 | 1.8619 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |
| 0.0014 | 18.0 | 3600 | 1.8744 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |
| 0.0014 | 19.0 | 3800 | 1.8849 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |
| 0.0 | 20.0 | 4000 | 1.8897 | 0.835 | 0.5 | 0.1515 | 0.2326 | 1.5181 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
asenella/reproducing_mvae_mmnist_seed_0 | asenella | 2023-05-23T23:01:10Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-23T23:01:00Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
kaminomisan/PhenixB7b | kaminomisan | 2023-05-23T22:54:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-23T22:54:11Z | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
duplicated_from: mosaicml/mpt-7b
---
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
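Putting the pieces together, a short end-to-end generation sketch (the prompt and sampling settings are illustrative, not from this card):
```python
import torch
import transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.to(device="cuda:0")

# Generate a short continuation with nucleus sampling.
inputs = tokenizer("MosaicML was founded to", return_tensors="pt").to("cuda:0")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```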
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
    title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
``` |
YakovElm/Hyperledger15Classic_with_cleaning | YakovElm | 2023-05-23T22:44:21Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T22:43:21Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger15Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2347
- Train Accuracy: 0.9045
- Validation Loss: 0.3515
- Validation Accuracy: 0.8651
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3126 | 0.9031 | 0.3318 | 0.8807 | 0 |
| 0.2844 | 0.9028 | 0.3275 | 0.8807 | 1 |
| 0.2347 | 0.9045 | 0.3515 | 0.8651 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dgalik/distilbert-finetuning-hate-speech-score-all-samples-3splits-seedv2-dropout005-epochs-10 | dgalik | 2023-05-23T22:31:48Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-05-23T21:28:22Z | ---
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuning-hate-speech-score-all-samples-3splits-seedv2-dropout005-epochs-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuning-hate-speech-score-all-samples-3splits-seedv2-dropout005-epochs-10
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3501
- Mse: 0.3501
- Rmse: 0.5917
- Mae: 0.2542
- R2: 0.9374
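Since the card reports regression metrics (MSE/RMSE/MAE/R2), the model presumably predicts a continuous hate-speech score. A minimal usage sketch, assuming the checkpoint exposes a single-logit regression head loadable via `AutoModelForSequenceClassification`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dgalik/distilbert-finetuning-hate-speech-score-all-samples-3splits-seedv2-dropout005-epochs-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumed continuous hate-speech score
print(score)
```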
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Qt20Classic_with_cleaning | YakovElm | 2023-05-23T22:11:51Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T22:10:56Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt20Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1619
- Train Accuracy: 0.9500
- Validation Loss: 0.1838
- Validation Accuracy: 0.9554
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2142 | 0.9462 | 0.1640 | 0.9586 | 0 |
| 0.1934 | 0.9462 | 0.1576 | 0.9586 | 1 |
| 0.1619 | 0.9500 | 0.1838 | 0.9554 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SenY/shoujo | SenY | 2023-05-23T21:51:54Z | 0 | 58 | null | [
"art",
"region:us"
] | null | 2023-03-12T02:08:45Z | ---
tags:
- art
---
# shoujo.safetensors
Concepts: Shoujo Manga

```prompt example
<lora:shoujo:1> 1girl
```
```prompt example
<lora:shoujo:1.4> 1girl
```
|<div style="width:16rem">name</div>|90s|00s|10s|
|-|-|-|-|
|shoujo_c - more juvenile girl||||
|shoujo_r - more romantic girl||||
|shoujo_n - more fantastic girl||||
|
JoBuettner/rl_course_vizdoom_health_gathering_supreme | JoBuettner | 2023-05-23T21:44:18Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T21:20:00Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.73 +/- 5.75
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r JoBuettner/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Assumes the standard ViZDoom example scripts shipped with Sample-Factory
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Assumes the standard ViZDoom example scripts shipped with Sample-Factory
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
damapika/roberta-base_mod_quoref | damapika | 2023-05-23T21:20:20Z | 113 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:quoref",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-23T19:19:39Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- quoref
model-index:
- name: roberta-base_mod_quoref
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base_mod_quoref
This model is a fine-tuned version of [damapika/roberta-base_mod_squad](https://huggingface.co/damapika/roberta-base_mod_squad) on the quoref dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5566
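A minimal usage sketch with the question-answering pipeline; the question and context below are made-up examples:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="damapika/roberta-base_mod_quoref")
result = qa(
    question="Who wrote the letter?",
    context="The letter found in the attic was written by Clara to her brother before the war.",
)
print(result["answer"], result["score"])
```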
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1263 | 1.0 | 1213 | 1.2665 |
| 0.7404 | 2.0 | 2426 | 1.3567 |
| 0.5172 | 3.0 | 3639 | 1.5566 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
manish1993hf/sms_class_test3 | manish1993hf | 2023-05-23T21:04:56Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T21:00:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sms_class_test3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sms_class_test3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0078
- Accuracy: 0.9990
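A minimal usage sketch with the text-classification pipeline. Because the training dataset is not documented here, the returned labels may be generic (e.g. LABEL_0 / LABEL_1) rather than human-readable class names:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="manish1993hf/sms_class_test3")
print(classifier("Congratulations! You have won a free prize, reply WIN to claim."))
```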
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 272 | 0.0093 | 0.9979 |
| 0.0472 | 2.0 | 544 | 0.0080 | 0.9990 |
| 0.0472 | 3.0 | 816 | 0.0074 | 0.9990 |
| 0.0005 | 4.0 | 1088 | 0.0078 | 0.9990 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
azetaaa/ppo-ML-Agents-Pyramids | azetaaa | 2023-05-23T20:50:53Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-05-23T20:50:47Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
 2. Find your model_id: azetaaa/ppo-ML-Agents-Pyramids
 3. Select your *.nn / *.onnx file
 4. Click on Watch the agent play 👀
|
raulc0399/bloomz_3b_marketmail | raulc0399 | 2023-05-23T20:49:57Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-05-23T20:46:28Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DarioLopes/marian-finetuned-kde4-en-to-fr | DarioLopes | 2023-05-23T20:49:23Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-05-21T22:21:16Z | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.752028933869816
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8559
- Bleu: 52.7520
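A minimal usage sketch with the translation pipeline:
```python
from transformers import pipeline

translator = pipeline("translation", model="DarioLopes/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```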
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zib16/alpaca-lora | zib16 | 2023-05-23T20:42:28Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T15:50:44Z |
An Alpaca LoRA model fine-tuned as described by Sam Witteveen in https://www.youtube.com/watch?v=LSoqyynKU9E. \
The base model is LLaMA-7B, and the Stanford Alpaca data were used for fine-tuning. \
These data can be found at https://github.com/tloen/alpaca-lora.
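A minimal loading sketch with the PEFT library. The base-model repository below is an assumption; apply the adapter to whichever LLaMA-7B weights it was trained against:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "decapoda-research/llama-7b-hf"  # assumed LLaMA-7B base weights, as used in the alpaca-lora repo
model = LlamaForCausalLM.from_pretrained(base_id)
tokenizer = LlamaTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, "zib16/alpaca-lora")
```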
Date: April 2023 |
emmanuel17/a2c-PandaReachDense-v2 | emmanuel17 | 2023-05-23T20:34:27Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T20:31:39Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.99 +/- 0.54
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the exact name):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub("emmanuel17/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
manish1993hf/sms_class_test1 | manish1993hf | 2023-05-23T20:30:38Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T20:25:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sms_class_test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sms_class_test1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0314
- eval_accuracy: 0.9964
- eval_runtime: 0.5458
- eval_samples_per_second: 511.138
- eval_steps_per_second: 16.488
- epoch: 5.0
- step: 830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
YakovElm/Qt10Classic_with_cleaning | YakovElm | 2023-05-23T20:30:37Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T20:29:00Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt10Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt10Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2156
- Train Accuracy: 0.9208
- Validation Loss: 0.2238
- Validation Accuracy: 0.9416
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2830 | 0.9159 | 0.2121 | 0.9416 | 0 |
| 0.2515 | 0.9210 | 0.2015 | 0.9416 | 1 |
| 0.2156 | 0.9208 | 0.2238 | 0.9416 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
whorseman/author | whorseman | 2023-05-23T20:13:17Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T18:55:00Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: author
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# author
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0925
- Accuracy: 0.1111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 182 | 3.1296 | 0.0915 |
| No log | 2.0 | 364 | 3.0925 | 0.1111 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Santonu001/results | Santonu001 | 2023-05-23T20:13:12Z | 174 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-23T19:26:03Z | ---
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [mrm8488/spanbert-finetuned-squadv2](https://huggingface.co/mrm8488/spanbert-finetuned-squadv2) on the squad_v2 dataset.
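A minimal usage sketch with the question-answering pipeline (the inputs are made-up examples; as a squad_v2-style model it may also return an empty answer when none is found):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Santonu001/results")
print(qa(question="Where was the meeting held?",
         context="The quarterly meeting was held in the Berlin office last Tuesday."))
```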
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.01
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.01 | 82 | 3.4944 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
azetaaa/ppo-ML-Agents-SnowballTarget | azetaaa | 2023-05-23T20:11:04Z | 9 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2023-05-23T20:09:21Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
 2. Find your model_id: azetaaa/ppo-ML-Agents-SnowballTarget
 3. Select your *.nn / *.onnx file
 4. Click on Watch the agent play 👀
|
Govindaramani/food_classifier | Govindaramani | 2023-05-23T19:45:19Z | 64 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-20T23:01:18Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Govindaramani/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Govindaramani/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3746
- Validation Loss: 0.3760
- Train Accuracy: 0.899
- Epoch: 4
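A minimal usage sketch, assuming the repository contains the image-processing configuration and that TensorFlow is available (the weights in this repository are TensorFlow checkpoints); the image URL is an arbitrary example:
```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="Govindaramani/food_classifier",
                      framework="tf")
print(classifier("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"))
```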
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7487 | 1.6239 | 0.828 | 0 |
| 1.2189 | 0.8179 | 0.889 | 1 |
| 0.6889 | 0.5344 | 0.906 | 2 |
| 0.4833 | 0.5014 | 0.878 | 3 |
| 0.3746 | 0.3760 | 0.899 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Apache10SetFitModel_clean_data | YakovElm | 2023-05-23T19:38:28Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-23T19:37:54Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Apache10SetFitModel_clean_data
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Apache10SetFitModel_clean_data")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
kribby/cats-mobilenet-imagenet-v3 | kribby | 2023-05-23T19:35:12Z | 4 | 0 | tf-keras | [
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] | image-classification | 2023-05-23T19:33:34Z | ---
pipeline_tag: image-classification
--- |
oransom48/pretrained_bert_fordiseaseclassif_1 | oransom48 | 2023-05-23T19:34:04Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T19:12:22Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: pretrained_bert_fordiseaseclassif_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# pretrained_bert_fordiseaseclassif_1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
henryscheible/t5-small_stereoset_finetuned_HBRPOI | henryscheible | 2023-05-23T19:30:01Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:stereoset",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-23T19:13:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- stereoset
metrics:
- accuracy
model-index:
- name: t5-small_stereoset_finetuned_HBRPOI
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: stereoset
type: stereoset
config: intersentence
split: validation
args: intersentence
metrics:
- name: Accuracy
type: accuracy
value: 0.6028257456828885
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_stereoset_finetuned_HBRPOI
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the stereoset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4383
- Accuracy: 0.6028
- Tp: 0.4890
- Tn: 0.1138
- Fp: 0.3854
- Fn: 0.0118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Tp | Tn | Fp | Fn |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:------:|:------:|
| 0.4447 | 0.43 | 20 | 0.3978 | 0.5008 | 0.5008 | 0.0 | 0.4992 | 0.0 |
| 0.3776 | 0.85 | 40 | 0.3448 | 0.6232 | 0.5008 | 0.1224 | 0.3768 | 0.0 |
| 0.3649 | 1.28 | 60 | 0.3269 | 0.5612 | 0.5 | 0.0612 | 0.4380 | 0.0008 |
| 0.3275 | 1.7 | 80 | 0.3218 | 0.5330 | 0.4992 | 0.0338 | 0.4655 | 0.0016 |
| 0.2969 | 2.13 | 100 | 0.3104 | 0.6256 | 0.4961 | 0.1295 | 0.3697 | 0.0047 |
| 0.3283 | 2.55 | 120 | 0.3111 | 0.5730 | 0.4992 | 0.0738 | 0.4254 | 0.0016 |
| 0.3046 | 2.98 | 140 | 0.3040 | 0.5416 | 0.4992 | 0.0424 | 0.4568 | 0.0016 |
| 0.2603 | 3.4 | 160 | 0.3057 | 0.5447 | 0.4992 | 0.0455 | 0.4537 | 0.0016 |
| 0.2828 | 3.83 | 180 | 0.3186 | 0.5479 | 0.4984 | 0.0495 | 0.4498 | 0.0024 |
| 0.2326 | 4.26 | 200 | 0.3036 | 0.6193 | 0.4937 | 0.1256 | 0.3736 | 0.0071 |
| 0.2289 | 4.68 | 220 | 0.3328 | 0.5479 | 0.4976 | 0.0502 | 0.4490 | 0.0031 |
| 0.2234 | 5.11 | 240 | 0.3140 | 0.5777 | 0.4976 | 0.0801 | 0.4192 | 0.0031 |
| 0.2225 | 5.53 | 260 | 0.3245 | 0.5691 | 0.4976 | 0.0714 | 0.4278 | 0.0031 |
| 0.187 | 5.96 | 280 | 0.3300 | 0.5785 | 0.4961 | 0.0824 | 0.4168 | 0.0047 |
| 0.179 | 6.38 | 300 | 0.3344 | 0.5848 | 0.4961 | 0.0887 | 0.4105 | 0.0047 |
| 0.1523 | 6.81 | 320 | 0.3528 | 0.5895 | 0.4969 | 0.0926 | 0.4066 | 0.0039 |
| 0.1499 | 7.23 | 340 | 0.3788 | 0.6232 | 0.4906 | 0.1327 | 0.3666 | 0.0102 |
| 0.1292 | 7.66 | 360 | 0.3889 | 0.5942 | 0.4914 | 0.1028 | 0.3964 | 0.0094 |
| 0.13 | 8.09 | 380 | 0.3959 | 0.5903 | 0.4937 | 0.0965 | 0.4027 | 0.0071 |
| 0.1216 | 8.51 | 400 | 0.4169 | 0.5856 | 0.4922 | 0.0934 | 0.4058 | 0.0086 |
| 0.1306 | 8.94 | 420 | 0.4227 | 0.6005 | 0.4898 | 0.1107 | 0.3885 | 0.0110 |
| 0.0968 | 9.36 | 440 | 0.4334 | 0.5965 | 0.4914 | 0.1052 | 0.3940 | 0.0094 |
| 0.1044 | 9.79 | 460 | 0.4383 | 0.6028 | 0.4890 | 0.1138 | 0.3854 | 0.0118 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
YakovElm/Qt20Classic | YakovElm | 2023-05-23T19:29:42Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T19:29:07Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt20Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt20Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1836
- Train Accuracy: 0.9462
- Validation Loss: 0.1813
- Validation Accuracy: 0.9594
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2163 | 0.9454 | 0.1596 | 0.9586 | 0 |
| 0.2044 | 0.9462 | 0.1554 | 0.9586 | 1 |
| 0.1836 | 0.9462 | 0.1813 | 0.9594 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Apache5SetFitModel_clean_data | YakovElm | 2023-05-23T19:25:18Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-23T19:24:40Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Apache5SetFitModel_clean_data
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Apache5SetFitModel_clean_data")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
henryscheible/t5-small_winobias_finetuned_HBRPOI | henryscheible | 2023-05-23T19:24:19Z | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-23T19:20:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-small_winobias_finetuned_HBRPOI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_winobias_finetuned_HBRPOI
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3333
- Accuracy: 0.5
- Tp: 0.5
- Tn: 0.0
- Fp: 0.5
- Fn: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Tp | Tn | Fp | Fn |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:------:|:------:|:---:|
| 0.437 | 0.8 | 20 | 0.3545 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3996 | 1.6 | 40 | 0.3565 | 0.5025 | 0.5 | 0.0025 | 0.4975 | 0.0 |
| 0.3844 | 2.4 | 60 | 0.3498 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3728 | 3.2 | 80 | 0.3529 | 0.5013 | 0.5 | 0.0013 | 0.4987 | 0.0 |
| 0.3732 | 4.0 | 100 | 0.3482 | 0.5006 | 0.5 | 0.0006 | 0.4994 | 0.0 |
| 0.3798 | 4.8 | 120 | 0.3484 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3607 | 5.6 | 140 | 0.3475 | 0.5006 | 0.5 | 0.0006 | 0.4994 | 0.0 |
| 0.3688 | 6.4 | 160 | 0.3456 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3597 | 7.2 | 180 | 0.3445 | 0.5006 | 0.5 | 0.0006 | 0.4994 | 0.0 |
| 0.3658 | 8.0 | 200 | 0.3402 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3629 | 8.8 | 220 | 0.3362 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
| 0.3393 | 9.6 | 240 | 0.3333 | 0.5 | 0.5 | 0.0 | 0.5 | 0.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.10.1
- Tokenizers 0.13.2
|
YakovElm/Apache20Classic_with_cleaning | YakovElm | 2023-05-23T19:11:07Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T19:10:31Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1300
- Train Accuracy: 0.9622
- Validation Loss: 0.4258
- Validation Accuracy: 0.9055
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1764 | 0.9548 | 0.3066 | 0.9055 | 0 |
| 0.1518 | 0.9624 | 0.3933 | 0.9055 | 1 |
| 0.1300 | 0.9622 | 0.4258 | 0.9055 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
xzuyn/Pythia-Deduped-70M-GGML | xzuyn | 2023-05-23T18:57:00Z | 0 | 1 | null | [
"gpt_neox",
"region:us"
] | null | 2023-05-23T06:14:02Z | ---
tags:
- gpt_neox
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/EleutherAI/pythia-70m-deduped |
Heilgeirr/heilgeirr2 | Heilgeirr | 2023-05-23T18:47:54Z | 31 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-05-23T18:44:31Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### heilgeirr2 Dreambooth model trained by Heilgeirr with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
clulab/roberta-base-motivational-interviewing | clulab | 2023-05-23T18:47:06Z | 105 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"motivational-interviewing",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T17:14:58Z | ---
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-classification
tags:
- motivational-interviewing
metrics:
- f1
widget:
- text: >-
I'm planning on having tuna, ground tuna, chopped celery, and chopped black
pepper, and half a apple.
example_title: change_talk_goal_talk_and_opportunities
---
# Model Card for roberta-base-motivational-interviewing
⚠ WARNING: This is a preliminary model that is still actively under development. ⚠
This is a [roBERTa-base](https://huggingface.co/roberta-base) model fine-tuned on a small dataset of conversations between health coaches and cancer survivors.
# How to Get Started with the Model
You can use this model directly with a pipeline for text classification:
```python
>>> import transformers
>>> model_name = "clulab/roberta-base-motivational-interviewing"
>>> classifier = transformers.TextClassificationPipeline(
... tokenizer=transformers.AutoTokenizer.from_pretrained(model_name),
... model=transformers.AutoModelForSequenceClassification.from_pretrained(model_name))
>>> classifier("I'm planning on having tuna, ground tuna, chopped celery, and chopped black pepper, and half a apple.")
[{'label': 'change_talk_goal_talk_and_opportunities', 'score': 0.9995419979095459}]
```
# Model Details
- **Developed by:** [Steven Bethard](https://bethard.github.io/)
- **Parent Model:** [roBERTa-base](https://huggingface.co/roberta-base)
- **GitHub Repo:** [LIvES repo](https://github.com/clulab/lives)
# Uses
The model is intended to be used for text classification, taking as input conversational utterances and predicting as output different categories of motivational interviewing behaviors.
It is intended for use by health coaches to assist when reviewing their past calls with participants. Its predictions should not be used without manual review.
# Training Details
The model was trained on data annotated under the grant [Using Natural Language Processing to Determine Predictors of Healthy Diet and Physical Activity Behavior Change in Ovarian Cancer Survivors (NIH NCI R21CA256680)](https://reporter.nih.gov/project-details/10510666). A [roberta-base](https://huggingface.co/roberta-base) model was fine-tuned on that dataset, with texts tokenized using the standard [roberta-base](https://huggingface.co/roberta-base) tokenizer.
# Evaluation
On the test partition of the R21CA256680 dataset, the model achieves 0.60 precision and 0.46 recall. |
YakovElm/Qt15Classic | YakovElm | 2023-05-23T18:39:04Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T18:38:29Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Qt15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Qt15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2046
- Train Accuracy: 0.9367
- Validation Loss: 0.2038
- Validation Accuracy: 0.9505
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2400 | 0.9354 | 0.1896 | 0.9505 | 0 |
| 0.2235 | 0.9367 | 0.1826 | 0.9505 | 1 |
| 0.2046 | 0.9367 | 0.2038 | 0.9505 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
VinsmokeMir/Method2_E13B_SC_BS4_LR3e5 | VinsmokeMir | 2023-05-23T18:18:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T16:27:50Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Method2_E13B_SC_BS4_LR3e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Method2_E13B_SC_BS4_LR3e5
This model is a fine-tuned version of [rafsankabir/Pretrained_E13B_Method2](https://huggingface.co/rafsankabir/Pretrained_E13B_Method2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5641
- Accuracy: 0.6803
- F1 Macro: 0.6446
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:--------:|
| No log | 0.16 | 500 | 1.0767 | 0.3976 | 0.1896 |
| 1.075 | 0.32 | 1000 | 1.0769 | 0.3976 | 0.1896 |
| 1.075 | 0.48 | 1500 | 1.0183 | 0.5539 | 0.4151 |
| 1.0246 | 0.64 | 2000 | 0.8956 | 0.5916 | 0.4745 |
| 1.0246 | 0.8 | 2500 | 0.8743 | 0.6082 | 0.5120 |
| 0.8948 | 0.95 | 3000 | 0.8365 | 0.6216 | 0.5546 |
| 0.8948 | 1.11 | 3500 | 0.8635 | 0.6311 | 0.5752 |
| 0.8069 | 1.27 | 4000 | 0.9060 | 0.6158 | 0.5398 |
| 0.8069 | 1.43 | 4500 | 0.8231 | 0.6388 | 0.5924 |
| 0.7969 | 1.59 | 5000 | 0.8368 | 0.6331 | 0.5935 |
| 0.7969 | 1.75 | 5500 | 0.8262 | 0.6477 | 0.5981 |
| 0.7804 | 1.91 | 6000 | 0.8299 | 0.6579 | 0.6208 |
| 0.7804 | 2.07 | 6500 | 0.8197 | 0.6579 | 0.6364 |
| 0.715 | 2.23 | 7000 | 0.8498 | 0.6624 | 0.5955 |
| 0.715 | 2.39 | 7500 | 0.8357 | 0.6669 | 0.6218 |
| 0.6953 | 2.54 | 8000 | 0.8438 | 0.6560 | 0.6269 |
| 0.6953 | 2.7 | 8500 | 0.8528 | 0.6669 | 0.6022 |
| 0.7074 | 2.86 | 9000 | 0.8009 | 0.6745 | 0.6457 |
| 0.7074 | 3.02 | 9500 | 0.8222 | 0.6720 | 0.6402 |
| 0.6598 | 3.18 | 10000 | 0.9347 | 0.6650 | 0.6062 |
| 0.6598 | 3.34 | 10500 | 0.9053 | 0.6803 | 0.6510 |
| 0.6634 | 3.5 | 11000 | 0.8902 | 0.6720 | 0.6434 |
| 0.6634 | 3.66 | 11500 | 0.9370 | 0.6733 | 0.6415 |
| 0.6182 | 3.82 | 12000 | 0.8914 | 0.6745 | 0.6519 |
| 0.6182 | 3.98 | 12500 | 0.8938 | 0.6752 | 0.6389 |
| 0.6043 | 4.13 | 13000 | 1.0143 | 0.6745 | 0.6413 |
| 0.6043 | 4.29 | 13500 | 1.0768 | 0.6765 | 0.6543 |
| 0.587 | 4.45 | 14000 | 1.1154 | 0.6790 | 0.6421 |
| 0.587 | 4.61 | 14500 | 1.1295 | 0.6828 | 0.6525 |
| 0.6345 | 4.77 | 15000 | 1.1210 | 0.6822 | 0.6390 |
| 0.6345 | 4.93 | 15500 | 1.0062 | 0.6726 | 0.6380 |
| 0.6 | 5.09 | 16000 | 1.1504 | 0.6739 | 0.6369 |
| 0.6 | 5.25 | 16500 | 1.3298 | 0.6733 | 0.6280 |
| 0.5667 | 5.41 | 17000 | 1.2751 | 0.6662 | 0.6308 |
| 0.5667 | 5.57 | 17500 | 1.4070 | 0.6567 | 0.6069 |
| 0.614 | 5.73 | 18000 | 1.2956 | 0.6694 | 0.6284 |
| 0.614 | 5.88 | 18500 | 1.2795 | 0.6822 | 0.6382 |
| 0.5651 | 6.04 | 19000 | 1.3021 | 0.6739 | 0.6478 |
| 0.5651 | 6.2 | 19500 | 1.4076 | 0.6682 | 0.6333 |
| 0.5347 | 6.36 | 20000 | 1.3917 | 0.6733 | 0.6344 |
| 0.5347 | 6.52 | 20500 | 1.4203 | 0.6790 | 0.6285 |
| 0.5278 | 6.68 | 21000 | 1.3340 | 0.6860 | 0.6628 |
| 0.5278 | 6.84 | 21500 | 1.3521 | 0.6873 | 0.6489 |
| 0.5796 | 7.0 | 22000 | 1.2835 | 0.6847 | 0.6567 |
| 0.5796 | 7.16 | 22500 | 1.4437 | 0.6879 | 0.6563 |
| 0.4627 | 7.32 | 23000 | 1.5052 | 0.6835 | 0.6435 |
| 0.4627 | 7.47 | 23500 | 1.4991 | 0.6707 | 0.6434 |
| 0.518 | 7.63 | 24000 | 1.5436 | 0.6656 | 0.6403 |
| 0.518 | 7.79 | 24500 | 1.5247 | 0.6784 | 0.6433 |
| 0.5373 | 7.95 | 25000 | 1.4743 | 0.6835 | 0.6537 |
| 0.5373 | 8.11 | 25500 | 1.5379 | 0.6777 | 0.6385 |
| 0.4539 | 8.27 | 26000 | 1.5548 | 0.6739 | 0.6393 |
| 0.4539 | 8.43 | 26500 | 1.6174 | 0.6669 | 0.6378 |
| 0.4519 | 8.59 | 27000 | 1.5949 | 0.6816 | 0.6504 |
| 0.4519 | 8.75 | 27500 | 1.5558 | 0.6816 | 0.6357 |
| 0.4813 | 8.91 | 28000 | 1.5826 | 0.6739 | 0.6553 |
| 0.4813 | 9.06 | 28500 | 1.5929 | 0.6867 | 0.6540 |
| 0.4121 | 9.22 | 29000 | 1.6260 | 0.6886 | 0.6545 |
| 0.4121 | 9.38 | 29500 | 1.5950 | 0.6841 | 0.6500 |
| 0.4451 | 9.54 | 30000 | 1.6146 | 0.6854 | 0.6481 |
| 0.4451 | 9.7 | 30500 | 1.6587 | 0.6796 | 0.6493 |
| 0.4039 | 9.86 | 31000 | 1.6173 | 0.6758 | 0.6400 |
| 0.4039 | 10.02 | 31500 | 1.5952 | 0.6803 | 0.6517 |
| 0.3921 | 10.18 | 32000 | 1.7298 | 0.6694 | 0.6413 |
| 0.3921 | 10.34 | 32500 | 1.7106 | 0.6796 | 0.6467 |
| 0.3799 | 10.5 | 33000 | 1.6695 | 0.6867 | 0.6505 |
| 0.3799 | 10.66 | 33500 | 1.6907 | 0.6803 | 0.6550 |
| 0.4003 | 10.81 | 34000 | 1.6811 | 0.6809 | 0.6413 |
| 0.4003 | 10.97 | 34500 | 1.6644 | 0.6771 | 0.6352 |
| 0.3812 | 11.13 | 35000 | 1.7371 | 0.6822 | 0.6386 |
| 0.3812 | 11.29 | 35500 | 1.7405 | 0.6841 | 0.6516 |
| 0.3399 | 11.45 | 36000 | 1.6981 | 0.6822 | 0.6503 |
| 0.3399 | 11.61 | 36500 | 1.6536 | 0.6847 | 0.6483 |
| 0.3653 | 11.77 | 37000 | 1.7461 | 0.6790 | 0.6475 |
| 0.3653 | 11.93 | 37500 | 1.7247 | 0.6790 | 0.6485 |
| 0.338 | 12.09 | 38000 | 1.7433 | 0.6905 | 0.6532 |
| 0.338 | 12.25 | 38500 | 1.7331 | 0.6765 | 0.6558 |
| 0.3302 | 12.4 | 39000 | 1.7603 | 0.6796 | 0.6456 |
| 0.3302 | 12.56 | 39500 | 1.7784 | 0.6726 | 0.6505 |
| 0.3195 | 12.72 | 40000 | 1.8032 | 0.6784 | 0.6469 |
| 0.3195 | 12.88 | 40500 | 1.7869 | 0.6822 | 0.6553 |
| 0.3508 | 13.04 | 41000 | 1.7761 | 0.6752 | 0.6506 |
| 0.3508 | 13.2 | 41500 | 1.7806 | 0.6847 | 0.6454 |
| 0.2915 | 13.36 | 42000 | 1.8542 | 0.6707 | 0.6528 |
| 0.2915 | 13.52 | 42500 | 1.8365 | 0.6796 | 0.6520 |
| 0.3023 | 13.68 | 43000 | 1.8563 | 0.6828 | 0.6524 |
| 0.3023 | 13.84 | 43500 | 1.7947 | 0.6752 | 0.6495 |
| 0.3213 | 13.99 | 44000 | 1.8130 | 0.6796 | 0.6546 |
| 0.3213 | 14.15 | 44500 | 1.8288 | 0.6841 | 0.6502 |
| 0.2644 | 14.31 | 45000 | 1.8140 | 0.6726 | 0.6453 |
| 0.2644 | 14.47 | 45500 | 1.8711 | 0.6809 | 0.6552 |
| 0.2739 | 14.63 | 46000 | 1.8439 | 0.6873 | 0.6534 |
| 0.2739 | 14.79 | 46500 | 1.8302 | 0.6828 | 0.6460 |
| 0.3012 | 14.95 | 47000 | 1.8708 | 0.6752 | 0.6454 |
| 0.3012 | 15.11 | 47500 | 1.8498 | 0.6822 | 0.6487 |
| 0.2805 | 15.27 | 48000 | 1.8908 | 0.6803 | 0.6453 |
| 0.2805 | 15.43 | 48500 | 1.9480 | 0.6790 | 0.6406 |
| 0.2895 | 15.59 | 49000 | 1.8994 | 0.6675 | 0.6392 |
| 0.2895 | 15.74 | 49500 | 1.9135 | 0.6790 | 0.6461 |
| 0.2444 | 15.9 | 50000 | 1.9387 | 0.6841 | 0.6480 |
| 0.2444 | 16.06 | 50500 | 1.9175 | 0.6745 | 0.6463 |
| 0.2569 | 16.22 | 51000 | 1.9332 | 0.6745 | 0.6472 |
| 0.2569 | 16.38 | 51500 | 1.9400 | 0.6771 | 0.6445 |
| 0.2251 | 16.54 | 52000 | 1.9596 | 0.6745 | 0.6441 |
| 0.2251 | 16.7 | 52500 | 1.9959 | 0.6835 | 0.6464 |
| 0.2686 | 16.86 | 53000 | 1.9879 | 0.6777 | 0.6456 |
| 0.2686 | 17.02 | 53500 | 1.9882 | 0.6828 | 0.6471 |
| 0.2168 | 17.18 | 54000 | 2.0254 | 0.6886 | 0.6520 |
| 0.2168 | 17.33 | 54500 | 2.0432 | 0.6777 | 0.6442 |
| 0.2735 | 17.49 | 55000 | 1.9843 | 0.6745 | 0.6443 |
| 0.2735 | 17.65 | 55500 | 2.0330 | 0.6828 | 0.6451 |
| 0.2159 | 17.81 | 56000 | 2.0698 | 0.6682 | 0.6423 |
| 0.2159 | 17.97 | 56500 | 1.9797 | 0.6771 | 0.6426 |
| 0.245 | 18.13 | 57000 | 2.0008 | 0.6726 | 0.6383 |
| 0.245 | 18.29 | 57500 | 2.0425 | 0.6816 | 0.6473 |
| 0.2036 | 18.45 | 58000 | 2.0482 | 0.6720 | 0.6356 |
| 0.2036 | 18.61 | 58500 | 2.0950 | 0.6675 | 0.6384 |
| 0.2336 | 18.77 | 59000 | 2.0167 | 0.6854 | 0.6458 |
| 0.2336 | 18.92 | 59500 | 1.9984 | 0.6809 | 0.6406 |
| 0.2332 | 19.08 | 60000 | 2.0552 | 0.6739 | 0.6441 |
| 0.2332 | 19.24 | 60500 | 2.0450 | 0.6784 | 0.6459 |
| 0.1984 | 19.4 | 61000 | 2.0599 | 0.6752 | 0.6434 |
| 0.1984 | 19.56 | 61500 | 2.0704 | 0.6784 | 0.6417 |
| 0.1945 | 19.72 | 62000 | 2.0755 | 0.6758 | 0.6445 |
| 0.1945 | 19.88 | 62500 | 2.0660 | 0.6809 | 0.6428 |
| 0.2143 | 20.04 | 63000 | 2.0670 | 0.6739 | 0.6448 |
| 0.2143 | 20.2 | 63500 | 2.0581 | 0.6873 | 0.6509 |
| 0.1878 | 20.36 | 64000 | 2.1272 | 0.6752 | 0.6452 |
| 0.1878 | 20.52 | 64500 | 2.1002 | 0.6803 | 0.6511 |
| 0.2144 | 20.67 | 65000 | 2.1383 | 0.6713 | 0.6438 |
| 0.2144 | 20.83 | 65500 | 2.1070 | 0.6809 | 0.6419 |
| 0.2121 | 20.99 | 66000 | 2.1273 | 0.6726 | 0.6412 |
| 0.2121 | 21.15 | 66500 | 2.1605 | 0.6707 | 0.6395 |
| 0.1835 | 21.31 | 67000 | 2.2891 | 0.6567 | 0.6331 |
| 0.1835 | 21.47 | 67500 | 2.2472 | 0.6765 | 0.6402 |
| 0.1991 | 21.63 | 68000 | 2.2238 | 0.6752 | 0.6412 |
| 0.1991 | 21.79 | 68500 | 2.1965 | 0.6669 | 0.6372 |
| 0.2018 | 21.95 | 69000 | 2.2050 | 0.6669 | 0.6395 |
| 0.2018 | 22.11 | 69500 | 2.1795 | 0.6803 | 0.6467 |
| 0.151 | 22.26 | 70000 | 2.2214 | 0.6777 | 0.6430 |
| 0.151 | 22.42 | 70500 | 2.1754 | 0.6867 | 0.6513 |
| 0.2078 | 22.58 | 71000 | 2.1959 | 0.6822 | 0.6488 |
| 0.2078 | 22.74 | 71500 | 2.1933 | 0.6860 | 0.6481 |
| 0.2004 | 22.9 | 72000 | 2.2001 | 0.6816 | 0.6500 |
| 0.2004 | 23.06 | 72500 | 2.2159 | 0.6784 | 0.6490 |
| 0.1773 | 23.22 | 73000 | 2.2603 | 0.6790 | 0.6462 |
| 0.1773 | 23.38 | 73500 | 2.2331 | 0.6777 | 0.6470 |
| 0.174 | 23.54 | 74000 | 2.2554 | 0.6765 | 0.6471 |
| 0.174 | 23.7 | 74500 | 2.2000 | 0.6854 | 0.6517 |
| 0.2071 | 23.85 | 75000 | 2.1896 | 0.6790 | 0.6500 |
| 0.2071 | 24.01 | 75500 | 2.2270 | 0.6828 | 0.6479 |
| 0.1419 | 24.17 | 76000 | 2.2776 | 0.6765 | 0.6426 |
| 0.1419 | 24.33 | 76500 | 2.2895 | 0.6809 | 0.6437 |
| 0.1564 | 24.49 | 77000 | 2.2746 | 0.6828 | 0.6515 |
| 0.1564 | 24.65 | 77500 | 2.3156 | 0.6765 | 0.6356 |
| 0.1802 | 24.81 | 78000 | 2.2891 | 0.6726 | 0.6426 |
| 0.1802 | 24.97 | 78500 | 2.2610 | 0.6835 | 0.6502 |
| 0.1795 | 25.13 | 79000 | 2.2856 | 0.6777 | 0.6478 |
| 0.1795 | 25.29 | 79500 | 2.2410 | 0.6828 | 0.6478 |
| 0.1753 | 25.45 | 80000 | 2.2738 | 0.6701 | 0.6451 |
| 0.1753 | 25.6 | 80500 | 2.2679 | 0.6847 | 0.6440 |
| 0.1517 | 25.76 | 81000 | 2.2667 | 0.6796 | 0.6525 |
| 0.1517 | 25.92 | 81500 | 2.3471 | 0.6682 | 0.6455 |
| 0.1593 | 26.08 | 82000 | 2.2945 | 0.6816 | 0.6504 |
| 0.1593 | 26.24 | 82500 | 2.3202 | 0.6841 | 0.6456 |
| 0.1332 | 26.4 | 83000 | 2.3667 | 0.6733 | 0.6405 |
| 0.1332 | 26.56 | 83500 | 2.3295 | 0.6771 | 0.6377 |
| 0.1765 | 26.72 | 84000 | 2.3680 | 0.6720 | 0.6394 |
| 0.1765 | 26.88 | 84500 | 2.3246 | 0.6828 | 0.6456 |
| 0.1578 | 27.04 | 85000 | 2.3192 | 0.6745 | 0.6453 |
| 0.1578 | 27.19 | 85500 | 2.3216 | 0.6822 | 0.6471 |
| 0.1355 | 27.35 | 86000 | 2.3730 | 0.6796 | 0.6490 |
| 0.1355 | 27.51 | 86500 | 2.3650 | 0.6758 | 0.6415 |
| 0.1308 | 27.67 | 87000 | 2.4015 | 0.6784 | 0.6471 |
| 0.1308 | 27.83 | 87500 | 2.3700 | 0.6809 | 0.6403 |
| 0.1446 | 27.99 | 88000 | 2.3748 | 0.6796 | 0.6483 |
| 0.1446 | 28.15 | 88500 | 2.3575 | 0.6809 | 0.6497 |
| 0.1135 | 28.31 | 89000 | 2.3663 | 0.6835 | 0.6438 |
| 0.1135 | 28.47 | 89500 | 2.3817 | 0.6809 | 0.6490 |
| 0.1354 | 28.63 | 90000 | 2.4026 | 0.6739 | 0.6436 |
| 0.1354 | 28.78 | 90500 | 2.3825 | 0.6745 | 0.6392 |
| 0.1661 | 28.94 | 91000 | 2.3461 | 0.6771 | 0.6482 |
| 0.1661 | 29.1 | 91500 | 2.3496 | 0.6771 | 0.6422 |
| 0.1188 | 29.26 | 92000 | 2.3568 | 0.6790 | 0.6488 |
| 0.1188 | 29.42 | 92500 | 2.3496 | 0.6828 | 0.6430 |
| 0.1433 | 29.58 | 93000 | 2.4252 | 0.6707 | 0.6378 |
| 0.1433 | 29.74 | 93500 | 2.3805 | 0.6847 | 0.6459 |
| 0.1328 | 29.9 | 94000 | 2.3918 | 0.6860 | 0.6495 |
| 0.1328 | 30.06 | 94500 | 2.4026 | 0.6828 | 0.6495 |
| 0.1317 | 30.22 | 95000 | 2.4319 | 0.6841 | 0.6483 |
| 0.1317 | 30.38 | 95500 | 2.4375 | 0.6828 | 0.6492 |
| 0.122 | 30.53 | 96000 | 2.4401 | 0.6822 | 0.6475 |
| 0.122 | 30.69 | 96500 | 2.4397 | 0.6860 | 0.6473 |
| 0.1266 | 30.85 | 97000 | 2.4572 | 0.6847 | 0.6504 |
| 0.1266 | 31.01 | 97500 | 2.4506 | 0.6847 | 0.6513 |
| 0.1437 | 31.17 | 98000 | 2.4251 | 0.6822 | 0.6496 |
| 0.1437 | 31.33 | 98500 | 2.4420 | 0.6822 | 0.6521 |
| 0.1205 | 31.49 | 99000 | 2.4446 | 0.6816 | 0.6464 |
| 0.1205 | 31.65 | 99500 | 2.4408 | 0.6790 | 0.6450 |
| 0.1188 | 31.81 | 100000 | 2.4522 | 0.6765 | 0.6487 |
| 0.1188 | 31.97 | 100500 | 2.4313 | 0.6828 | 0.6495 |
| 0.1326 | 32.12 | 101000 | 2.4577 | 0.6784 | 0.6466 |
| 0.1326 | 32.28 | 101500 | 2.4524 | 0.6822 | 0.6479 |
| 0.1103 | 32.44 | 102000 | 2.4665 | 0.6765 | 0.6426 |
| 0.1103 | 32.6 | 102500 | 2.4642 | 0.6777 | 0.6431 |
| 0.118 | 32.76 | 103000 | 2.4628 | 0.6771 | 0.6451 |
| 0.118 | 32.92 | 103500 | 2.4671 | 0.6835 | 0.6474 |
| 0.1214 | 33.08 | 104000 | 2.4613 | 0.6771 | 0.6503 |
| 0.1214 | 33.24 | 104500 | 2.4833 | 0.6771 | 0.6475 |
| 0.0965 | 33.4 | 105000 | 2.4888 | 0.6803 | 0.6450 |
| 0.0965 | 33.56 | 105500 | 2.4910 | 0.6816 | 0.6476 |
| 0.1207 | 33.72 | 106000 | 2.4806 | 0.6860 | 0.6482 |
| 0.1207 | 33.87 | 106500 | 2.4741 | 0.6771 | 0.6445 |
| 0.1277 | 34.03 | 107000 | 2.5050 | 0.6790 | 0.6409 |
| 0.1277 | 34.19 | 107500 | 2.4809 | 0.6777 | 0.6402 |
| 0.1164 | 34.35 | 108000 | 2.5006 | 0.6777 | 0.6428 |
| 0.1164 | 34.51 | 108500 | 2.4889 | 0.6822 | 0.6474 |
| 0.1103 | 34.67 | 109000 | 2.4852 | 0.6822 | 0.6457 |
| 0.1103 | 34.83 | 109500 | 2.4923 | 0.6771 | 0.6418 |
| 0.1013 | 34.99 | 110000 | 2.4662 | 0.6784 | 0.6437 |
| 0.1013 | 35.15 | 110500 | 2.4755 | 0.6822 | 0.6483 |
| 0.0922 | 35.31 | 111000 | 2.4908 | 0.6816 | 0.6465 |
| 0.0922 | 35.46 | 111500 | 2.4922 | 0.6809 | 0.6502 |
| 0.0856 | 35.62 | 112000 | 2.5096 | 0.6828 | 0.6422 |
| 0.0856 | 35.78 | 112500 | 2.5035 | 0.6828 | 0.6463 |
| 0.1005 | 35.94 | 113000 | 2.5231 | 0.6828 | 0.6452 |
| 0.1005 | 36.1 | 113500 | 2.5196 | 0.6796 | 0.6469 |
| 0.0884 | 36.26 | 114000 | 2.5187 | 0.6796 | 0.6444 |
| 0.0884 | 36.42 | 114500 | 2.5180 | 0.6790 | 0.6454 |
| 0.0891 | 36.58 | 115000 | 2.5407 | 0.6771 | 0.6442 |
| 0.0891 | 36.74 | 115500 | 2.5349 | 0.6765 | 0.6417 |
| 0.1082 | 36.9 | 116000 | 2.5451 | 0.6777 | 0.6427 |
| 0.1082 | 37.05 | 116500 | 2.5349 | 0.6803 | 0.6469 |
| 0.1072 | 37.21 | 117000 | 2.5507 | 0.6816 | 0.6457 |
| 0.1072 | 37.37 | 117500 | 2.5485 | 0.6790 | 0.6459 |
| 0.0882 | 37.53 | 118000 | 2.5477 | 0.6809 | 0.6448 |
| 0.0882 | 37.69 | 118500 | 2.5620 | 0.6790 | 0.6401 |
| 0.0852 | 37.85 | 119000 | 2.5597 | 0.6790 | 0.6447 |
| 0.0852 | 38.01 | 119500 | 2.5545 | 0.6796 | 0.6436 |
| 0.1029 | 38.17 | 120000 | 2.5519 | 0.6796 | 0.6436 |
| 0.1029 | 38.33 | 120500 | 2.5539 | 0.6822 | 0.6463 |
| 0.0903 | 38.49 | 121000 | 2.5590 | 0.6822 | 0.6490 |
| 0.0903 | 38.65 | 121500 | 2.5658 | 0.6803 | 0.6457 |
| 0.092 | 38.8 | 122000 | 2.5590 | 0.6803 | 0.6433 |
| 0.092 | 38.96 | 122500 | 2.5620 | 0.6803 | 0.6449 |
| 0.094 | 39.12 | 123000 | 2.5634 | 0.6796 | 0.6436 |
| 0.094 | 39.28 | 123500 | 2.5677 | 0.6790 | 0.6435 |
| 0.0801 | 39.44 | 124000 | 2.5662 | 0.6803 | 0.6445 |
| 0.0801 | 39.6 | 124500 | 2.5648 | 0.6796 | 0.6440 |
| 0.103 | 39.76 | 125000 | 2.5641 | 0.6809 | 0.6451 |
| 0.103 | 39.92 | 125500 | 2.5641 | 0.6803 | 0.6446 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
bahmanreza/keras-dummy-sequential-demo | bahmanreza | 2023-05-23T18:17:12Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2023-05-23T18:17:09Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> |
YakovElm/Apache15Classic_with_cleaning | YakovElm | 2023-05-23T18:05:44Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T18:04:54Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1583
- Train Accuracy: 0.9535
- Validation Loss: 0.3355
- Validation Accuracy: 0.8924
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1921 | 0.9542 | 0.3429 | 0.8924 | 0 |
| 0.1792 | 0.9542 | 0.3336 | 0.8924 | 1 |
| 0.1583 | 0.9535 | 0.3355 | 0.8924 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
k22/1 | k22 | 2023-05-23T18:04:36Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T18:03:06Z | How to choose an edible tomato?
|
omegaodin/gpt2 | omegaodin | 2023-05-23T18:01:11Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"code",
"es",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:OpenAssistant/oasst1",
"dataset:bigcode/the-stack",
"dataset:bigcode/the-stack-dedup",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"region:us"
] | null | 2023-05-23T17:57:04Z | ---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
- togethercomputer/RedPajama-Data-1T
- OpenAssistant/oasst1
- bigcode/the-stack
- bigcode/the-stack-dedup
- databricks/databricks-dolly-15k
language:
- es
metrics:
- accuracy
- code_eval
- character
library_name: adapter-transformers
tags:
- code
--- |
ArinaRomashova/summarisation-pegasus-pubmed | ArinaRomashova | 2023-05-23T17:39:39Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-23T14:36:58Z | ## Validation Metrics
- Loss: 2.243
- Rouge1: 37.779
- Rouge2: 14.441
- RougeL: 24.108
- RougeLsum: 33.163
- Gen Len: 125.550
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/ArinaRomashova/autotrain-summarisation-pegasus-pubmed-61004134636
``` |
jjhonny/qtable-taxi-v3 | jjhonny | 2023-05-23T17:38:13Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T17:38:11Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: qtable-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jjhonny/qtable-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
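Continuing from the snippet above, a rough greedy-rollout sketch. The `"qtable"` key and the Gymnasium-style 5-tuple `step` return are assumptions (following the Deep RL course conventions); older `gym` versions return 4-tuples.
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    # Pick the greedy action from the loaded Q-table for the current state.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```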
|
takuma104/lora_unetonly_rank128 | takuma104 | 2023-05-23T17:35:35Z | 3 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-23T17:27:33Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - takuma104/lora_unetonly_rank128
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
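A minimal loading sketch, assuming a recent diffusers release that provides `load_lora_weights` (older releases used `pipe.unet.load_attn_procs` instead); the prompt is only an example.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the UNet-only LoRA weights from this repository.
pipe.load_lora_weights("takuma104/lora_unetonly_rank128")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```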
|
takuma104/lora_unetonly_rank4 | takuma104 | 2023-05-23T17:25:06Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-23T17:19:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - takuma104/lora_unetonly_rank4
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
kudailang/kosasih | kudailang | 2023-05-23T17:22:19Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T17:20:36Z | ---
license: creativeml-openrail-m
---
|
nemuwn/bert-base-multilingual-cased-mongolian-ner | nemuwn | 2023-05-23T17:04:43Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"mn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-23T15:39:30Z | ---
language:
- mn
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-cased-mongolian-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-mongolian-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1423
- Precision: 0.9057
- Recall: 0.9188
- F1: 0.9122
- Accuracy: 0.9753
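A minimal usage sketch; the input placeholder should be replaced with actual Mongolian text, and `aggregation_strategy="simple"` is only one reasonable choice for grouping word pieces into entity spans.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nemuwn/bert-base-multilingual-cased-mongolian-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Replace the placeholder with a real Mongolian sentence.
print(ner("<Mongolian sentence here>"))
```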
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1726 | 1.0 | 477 | 0.1052 | 0.8531 | 0.8851 | 0.8688 | 0.9664 |
| 0.0827 | 2.0 | 954 | 0.0975 | 0.8722 | 0.8987 | 0.8852 | 0.9699 |
| 0.0571 | 3.0 | 1431 | 0.0926 | 0.8847 | 0.9054 | 0.8950 | 0.9719 |
| 0.0376 | 4.0 | 1908 | 0.1052 | 0.8980 | 0.9119 | 0.9049 | 0.9727 |
| 0.0271 | 5.0 | 2385 | 0.1137 | 0.9021 | 0.9158 | 0.9089 | 0.9746 |
| 0.0182 | 6.0 | 2862 | 0.1304 | 0.8839 | 0.9106 | 0.8970 | 0.9712 |
| 0.0145 | 7.0 | 3339 | 0.1274 | 0.9042 | 0.9187 | 0.9114 | 0.9748 |
| 0.0097 | 8.0 | 3816 | 0.1375 | 0.9009 | 0.9169 | 0.9088 | 0.9739 |
| 0.0063 | 9.0 | 4293 | 0.1421 | 0.9017 | 0.9171 | 0.9093 | 0.9748 |
| 0.0049 | 10.0 | 4770 | 0.1423 | 0.9057 | 0.9188 | 0.9122 | 0.9753 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Hinova/q-FrozenLake-v1-4x4-noSlippery | Hinova | 2023-05-23T17:00:49Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T17:00:44Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Hinova/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
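Continuing from the snippet above, a rough evaluation sketch that reports the mean reward of the greedy policy. The `"qtable"` key and the Gymnasium-style 5-tuple `step` return are assumptions; adjust for older `gym` versions.
```python
import numpy as np

episode_rewards = []
for _ in range(100):
    state, info = env.reset()
    total, done = 0.0, False
    while not done:
        action = int(np.argmax(model["qtable"][state]))   # greedy action
        state, reward, terminated, truncated, info = env.step(action)
        total += reward
        done = terminated or truncated
    episode_rewards.append(total)

print(f"mean reward: {np.mean(episode_rewards):.2f} +/- {np.std(episode_rewards):.2f}")
```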
|
YakovElm/Apache10Classic_with_cleaning | YakovElm | 2023-05-23T17:00:42Z | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T17:00:05Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache10Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1824
- Train Accuracy: 0.9385
- Validation Loss: 0.5452
- Validation Accuracy: 0.8644
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2431 | 0.9340 | 0.4461 | 0.8644 | 0 |
| 0.2183 | 0.9383 | 0.4053 | 0.8644 | 1 |
| 0.1824 | 0.9385 | 0.5452 | 0.8644 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Artur01/mit-b0-finetuned-sidewalks | Artur01 | 2023-05-23T16:53:01Z | 31 | 0 | transformers | [
"transformers",
"tf",
"segformer",
"generated_from_keras_callback",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T20:23:44Z | ---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: Artur01/mit-b0-finetuned-sidewalks
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Artur01/mit-b0-finetuned-sidewalks
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3943
- Validation Loss: 0.8293
- Validation Mean Iou: 0.2097
- Validation Mean Accuracy: 0.2617
- Validation Overall Accuracy: 0.7688
- Validation Accuracy Unlabeled: 0.0
- Validation Accuracy Flat-road: 0.6688
- Validation Accuracy Flat-sidewalk: 0.9258
- Validation Accuracy Flat-crosswalk: 0.4607
- Validation Accuracy Flat-cyclinglane: 0.7716
- Validation Accuracy Flat-parkingdriveway: 0.2329
- Validation Accuracy Flat-railtrack: nan
- Validation Accuracy Flat-curb: 0.1001
- Validation Accuracy Human-person: 0.0011
- Validation Accuracy Human-rider: 0.0
- Validation Accuracy Vehicle-car: 0.8806
- Validation Accuracy Vehicle-truck: 0.0
- Validation Accuracy Vehicle-bus: 0.0
- Validation Accuracy Vehicle-tramtrain: nan
- Validation Accuracy Vehicle-motorcycle: 0.0
- Validation Accuracy Vehicle-bicycle: 0.2446
- Validation Accuracy Vehicle-caravan: 0.0
- Validation Accuracy Vehicle-cartrailer: nan
- Validation Accuracy Construction-building: 0.8260
- Validation Accuracy Construction-door: 0.0
- Validation Accuracy Construction-wall: 0.2769
- Validation Accuracy Construction-fenceguardrail: 0.0618
- Validation Accuracy Construction-bridge: 0.0
- Validation Accuracy Construction-tunnel: nan
- Validation Accuracy Construction-stairs: 0.0
- Validation Accuracy Object-pole: 0.0127
- Validation Accuracy Object-trafficsign: 0.0
- Validation Accuracy Object-trafficlight: 0.0
- Validation Accuracy Nature-vegetation: 0.9125
- Validation Accuracy Nature-terrain: 0.7600
- Validation Accuracy Sky: 0.9223
- Validation Accuracy Void-ground: 0.0
- Validation Accuracy Void-dynamic: 0.0122
- Validation Accuracy Void-static: 0.0431
- Validation Accuracy Void-unclear: 0.0
- Validation Iou Unlabeled: 0.0
- Validation Iou Flat-road: 0.5491
- Validation Iou Flat-sidewalk: 0.7881
- Validation Iou Flat-crosswalk: 0.4034
- Validation Iou Flat-cyclinglane: 0.4981
- Validation Iou Flat-parkingdriveway: 0.1731
- Validation Iou Flat-railtrack: nan
- Validation Iou Flat-curb: 0.0869
- Validation Iou Human-person: 0.0011
- Validation Iou Human-rider: 0.0
- Validation Iou Vehicle-car: 0.6311
- Validation Iou Vehicle-truck: 0.0
- Validation Iou Vehicle-bus: 0.0
- Validation Iou Vehicle-tramtrain: nan
- Validation Iou Vehicle-motorcycle: 0.0
- Validation Iou Vehicle-bicycle: 0.2027
- Validation Iou Vehicle-caravan: 0.0
- Validation Iou Vehicle-cartrailer: nan
- Validation Iou Construction-building: 0.5921
- Validation Iou Construction-door: 0.0
- Validation Iou Construction-wall: 0.2356
- Validation Iou Construction-fenceguardrail: 0.0570
- Validation Iou Construction-bridge: 0.0
- Validation Iou Construction-tunnel: nan
- Validation Iou Construction-stairs: 0.0
- Validation Iou Object-pole: 0.0125
- Validation Iou Object-trafficsign: 0.0
- Validation Iou Object-trafficlight: 0.0
- Validation Iou Nature-vegetation: 0.7724
- Validation Iou Nature-terrain: 0.5932
- Validation Iou Sky: 0.8602
- Validation Iou Void-ground: 0.0
- Validation Iou Void-dynamic: 0.0120
- Validation Iou Void-static: 0.0323
- Validation Iou Void-unclear: 0.0
- Epoch: 0
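A minimal TensorFlow inference sketch; the image path is a placeholder, the default `SegformerImageProcessor()` settings are an assumption (the repo may ship its own preprocessor config), and the channel-first logits layout mirrors the PyTorch SegFormer port.
```python
import tensorflow as tf
from PIL import Image
from transformers import SegformerImageProcessor, TFSegformerForSemanticSegmentation

processor = SegformerImageProcessor()  # default preprocessing settings
model = TFSegformerForSemanticSegmentation.from_pretrained("Artur01/mit-b0-finetuned-sidewalks")

image = Image.open("sidewalk_scene.jpg")               # any RGB street-scene photo
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits                        # (batch, num_labels, h/4, w/4)
pred = tf.argmax(logits, axis=1)[0]                    # per-pixel class ids at 1/4 resolution
```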
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 6e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Accuracy Unlabeled | Validation Accuracy Flat-road | Validation Accuracy Flat-sidewalk | Validation Accuracy Flat-crosswalk | Validation Accuracy Flat-cyclinglane | Validation Accuracy Flat-parkingdriveway | Validation Accuracy Flat-railtrack | Validation Accuracy Flat-curb | Validation Accuracy Human-person | Validation Accuracy Human-rider | Validation Accuracy Vehicle-car | Validation Accuracy Vehicle-truck | Validation Accuracy Vehicle-bus | Validation Accuracy Vehicle-tramtrain | Validation Accuracy Vehicle-motorcycle | Validation Accuracy Vehicle-bicycle | Validation Accuracy Vehicle-caravan | Validation Accuracy Vehicle-cartrailer | Validation Accuracy Construction-building | Validation Accuracy Construction-door | Validation Accuracy Construction-wall | Validation Accuracy Construction-fenceguardrail | Validation Accuracy Construction-bridge | Validation Accuracy Construction-tunnel | Validation Accuracy Construction-stairs | Validation Accuracy Object-pole | Validation Accuracy Object-trafficsign | Validation Accuracy Object-trafficlight | Validation Accuracy Nature-vegetation | Validation Accuracy Nature-terrain | Validation Accuracy Sky | Validation Accuracy Void-ground | Validation Accuracy Void-dynamic | Validation Accuracy Void-static | Validation Accuracy Void-unclear | Validation Iou Unlabeled | Validation Iou Flat-road | Validation Iou Flat-sidewalk | Validation Iou Flat-crosswalk | Validation Iou Flat-cyclinglane | Validation Iou Flat-parkingdriveway | Validation Iou Flat-railtrack | Validation Iou Flat-curb | Validation Iou Human-person | Validation Iou Human-rider | Validation Iou Vehicle-car | Validation Iou Vehicle-truck | Validation Iou Vehicle-bus | Validation Iou Vehicle-tramtrain | Validation Iou Vehicle-motorcycle | Validation Iou Vehicle-bicycle | Validation Iou Vehicle-caravan | Validation Iou Vehicle-cartrailer | Validation Iou Construction-building | Validation Iou Construction-door | Validation Iou Construction-wall | Validation Iou Construction-fenceguardrail | Validation Iou Construction-bridge | Validation Iou Construction-tunnel | Validation Iou Construction-stairs | Validation Iou Object-pole | Validation Iou Object-trafficsign | Validation Iou Object-trafficlight | Validation Iou Nature-vegetation | Validation Iou Nature-terrain | Validation Iou Sky | Validation Iou Void-ground | Validation Iou Void-dynamic | Validation Iou Void-static | Validation Iou Void-unclear | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:-----------------------------:|:-----------------------------:|:---------------------------------:|:----------------------------------:|:------------------------------------:|:----------------------------------------:|:----------------------------------:|:-----------------------------:|:--------------------------------:|:-------------------------------:|:-------------------------------:|:---------------------------------:|:-------------------------------:|:-------------------------------------:|:--------------------------------------:|:-----------------------------------:|:-----------------------------------:|:--------------------------------------:|:-----------------------------------------:|:-------------------------------------:|:-------------------------------------:|:-----------------------------------------------:|:---------------------------------------:|:---------------------------------------:|:---------------------------------------:|:-------------------------------:|:--------------------------------------:|:---------------------------------------:|:-------------------------------------:|:----------------------------------:|:-----------------------:|:-------------------------------:|:--------------------------------:|:-------------------------------:|:--------------------------------:|:------------------------:|:------------------------:|:----------------------------:|:-----------------------------:|:-------------------------------:|:-----------------------------------:|:-----------------------------:|:------------------------:|:---------------------------:|:--------------------------:|:--------------------------:|:----------------------------:|:--------------------------:|:--------------------------------:|:---------------------------------:|:------------------------------:|:------------------------------:|:---------------------------------:|:------------------------------------:|:--------------------------------:|:--------------------------------:|:------------------------------------------:|:----------------------------------:|:----------------------------------:|:----------------------------------:|:--------------------------:|:---------------------------------:|:----------------------------------:|:--------------------------------:|:-----------------------------:|:------------------:|:--------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:-----:|
| 1.3943 | 0.8293 | 0.2097 | 0.2617 | 0.7688 | 0.0 | 0.6688 | 0.9258 | 0.4607 | 0.7716 | 0.2329 | nan | 0.1001 | 0.0011 | 0.0 | 0.8806 | 0.0 | 0.0 | nan | 0.0 | 0.2446 | 0.0 | nan | 0.8260 | 0.0 | 0.2769 | 0.0618 | 0.0 | nan | 0.0 | 0.0127 | 0.0 | 0.0 | 0.9125 | 0.7600 | 0.9223 | 0.0 | 0.0122 | 0.0431 | 0.0 | 0.0 | 0.5491 | 0.7881 | 0.4034 | 0.4981 | 0.1731 | nan | 0.0869 | 0.0011 | 0.0 | 0.6311 | 0.0 | 0.0 | nan | 0.0 | 0.2027 | 0.0 | nan | 0.5921 | 0.0 | 0.2356 | 0.0570 | 0.0 | nan | 0.0 | 0.0125 | 0.0 | 0.0 | 0.7724 | 0.5932 | 0.8602 | 0.0 | 0.0120 | 0.0323 | 0.0 | 0 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Nebyx/ppo_LunarLander-v2 | Nebyx | 2023-05-23T16:46:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T16:46:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.42 +/- 18.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
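A minimal sketch of what the TODO above could look like; the checkpoint filename is an assumption (check the repository's file list), and `import gymnasium as gym` should become `import gym` for stable-baselines3 versions below 2.0.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed -- adjust it to the actual .zip stored in the repo.
checkpoint = load_from_hub("Nebyx/ppo_LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean reward: {mean_reward:.2f} +/- {std_reward:.2f}")
```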
|
perfectino/rinahhh | perfectino | 2023-05-23T16:36:40Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T16:35:13Z | ---
license: creativeml-openrail-m
---
|
Iulian277/ro-bart-512 | Iulian277 | 2023-05-23T16:15:03Z | 309 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"ro",
"autotrain_compatible",
"region:us"
] | summarization | 2023-04-13T10:00:11Z | ---
tags:
- summarization
- bart
language:
- ro
inference: false
---
This is a pretrained-from-scratch **BART base** model (**140M** parameters).
Training was performed on a clean **50GB Romanian** text corpus for 3M steps with these [scripts](https://github.com/cosmoquester/transformers-bart-pretrain). The model was trained with a maximum sequence length of **512**.
**!! IMPORTANT !!** This model was pretrained on the text corruption task, meaning it is **not usable** for any downstream task **without finetuning** first! |
mnavas/roberta-finetuned-qa-reqzarv0 | mnavas | 2023-05-23T16:03:55Z | 102 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-23T15:37:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-qa-reqzarv0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-qa-reqzarv0
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
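A minimal usage sketch; the question and context below are invented Spanish examples (the card does not document the actual data).
```python
from transformers import pipeline

qa = pipeline("question-answering", model="mnavas/roberta-finetuned-qa-reqzarv0")

result = qa(
    question="¿Cuál es el plazo de presentación de ofertas?",
    context="El plazo de presentación de ofertas finaliza el 30 de junio de 2023.",
)
print(result["answer"], result["score"])
```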
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
rakgesh/image-classifier-one-piece-v03_01 | rakgesh | 2023-05-23T16:02:00Z | 3 | 0 | tf-keras | [
"tf-keras",
"mobilenet",
"image-classification",
"region:us"
] | image-classification | 2023-05-23T15:59:09Z | ---
pipeline_tag: image-classification
--- |
YakovElm/Apache5Classic_with_cleaning | YakovElm | 2023-05-23T15:56:00Z | 62 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T15:55:23Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache5Classic_with_cleaning
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache5Classic_with_cleaning
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2193
- Train Accuracy: 0.9235
- Validation Loss: 0.6107
- Validation Accuracy: 0.8194
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3142 | 0.9001 | 0.4816 | 0.8233 | 0 |
| 0.2820 | 0.9099 | 0.4622 | 0.8233 | 1 |
| 0.2193 | 0.9235 | 0.6107 | 0.8194 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
khosro111/test1 | khosro111 | 2023-05-23T15:53:10Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2023-05-23T15:52:34Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xzuyn/LLaMa-1-MedicWizard-7B-GGML | xzuyn | 2023-05-23T15:53:05Z | 0 | 3 | null | [
"llama",
"alpaca",
"region:us"
] | null | 2023-05-21T16:18:21Z | ---
tags:
- llama
- alpaca
---
# For use with [KoboldCPP](https://github.com/LostRuins/koboldcpp)
Original Model: https://huggingface.co/xzuyn/MedicWizard-7B |
davanstrien/imdb_bertopic_ten_topics | davanstrien | 2023-05-23T15:52:38Z | 5 | 0 | bertopic | [
"bertopic",
"region:us"
] | null | 2023-05-23T15:52:31Z |
---
tags:
- bertopic
library_name: bertopic
---
# imdb_bertopic_ten_topics
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("davanstrien/imdb_bertopic_ten_topics")
topic_model.get_topic_info()
```
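Continuing from the snippet above, a short sketch of assigning topics to new documents; the example reviews are invented.
```python
# Assign topics to unseen documents with the loaded model.
docs = [
    "A slow-burning drama with terrific performances.",
    "The special effects could not save this sequel.",
]
topics, probs = topic_model.transform(docs)

# Inspect the keywords of the topic assigned to the first document.
print(topic_model.get_topic(topics[0]))
```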
## Topic overview
* Number of topics: 10
* Number of training documents: 103062
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | film - movie - movies - character - characters | 44 | -1_film_movie_movies_character |
| 0 | film - movie - films - movies - too | 25087 | 0_film_movie_films_movies |
| 1 | episodes - shows - watching - tv - episode | 20955 | 1_episodes_shows_watching_tv |
| 2 | films - film - movies - godzilla - movie | 2037 | 2_films_film_movies_godzilla |
| 3 | cinderella - disney - cartoon - animation - cartoons | 895 | 3_cinderella_disney_cartoon_animation |
| 4 | gameplay - games - game - adventure - starcraft | 465 | 4_gameplay_games_game_adventure |
| 5 | holmes - sherlock - watson - doyle - conan | 228 | 5_holmes_sherlock_watson_doyle |
| 6 | panther - film - films - clouseau - movies | 184 | 6_panther_film_films_clouseau |
| 7 | metallica - metal - genres - genre - headbanger | 55 | 7_metallica_metal_genres_genre |
| 8 | che - ernesto - castro - biopic - film | 50 | 8_che_ernesto_castro_biopic |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 40
* n_gram_range: (1, 1)
* nr_topics: 10
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11
|
dawoz/a2c-PandaReachDense-v2 | dawoz | 2023-05-23T15:49:16Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T15:46:27Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -0.36 +/- 0.17
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
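A minimal sketch of what the TODO above could look like; the checkpoint filename is an assumption (check the repository's file list), and the classic `gym` 4-tuple API is assumed because the `-v2` Panda environments ship with panda-gym 2.x.
```python
import gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed -- adjust it to the actual .zip stored in the repo.
checkpoint = load_from_hub("dawoz/a2c-PandaReachDense-v2", "a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```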
|
VinsmokeMir/Fine_Tuning_SC_Method_2_Epoch_13B | VinsmokeMir | 2023-05-23T15:44:19Z | 184 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T15:28:29Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Fine_Tuning_SC_Method_2_Epoch_13B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine_Tuning_SC_Method_2_Epoch_13B
This model is a fine-tuned version of [rafsankabir/Pretrained_E13B_Method2](https://huggingface.co/rafsankabir/Pretrained_E13B_Method2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4244
- Accuracy: 0.6873
- F1 Macro: 0.6544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| No log | 1.27 | 500 | 1.0673 | 0.3976 | 0.1896 |
| 1.0138 | 2.54 | 1000 | 0.8217 | 0.6331 | 0.5569 |
| 1.0138 | 3.82 | 1500 | 0.7889 | 0.6662 | 0.6049 |
| 0.7305 | 5.09 | 2000 | 0.7821 | 0.6765 | 0.6382 |
| 0.7305 | 6.36 | 2500 | 0.7867 | 0.6918 | 0.6457 |
| 0.5856 | 7.63 | 3000 | 0.8236 | 0.6892 | 0.6623 |
| 0.5856 | 8.91 | 3500 | 0.8490 | 0.6835 | 0.6551 |
| 0.4723 | 10.18 | 4000 | 0.9057 | 0.6854 | 0.6533 |
| 0.4723 | 11.45 | 4500 | 0.9237 | 0.6796 | 0.6455 |
| 0.3896 | 12.72 | 5000 | 0.9814 | 0.6879 | 0.6499 |
| 0.3896 | 13.99 | 5500 | 0.9984 | 0.6745 | 0.6487 |
| 0.3299 | 15.27 | 6000 | 1.0226 | 0.6822 | 0.6545 |
| 0.3299 | 16.54 | 6500 | 1.0579 | 0.6758 | 0.6485 |
| 0.2783 | 17.81 | 7000 | 1.0932 | 0.6796 | 0.6487 |
| 0.2783 | 19.08 | 7500 | 1.1047 | 0.6950 | 0.6609 |
| 0.2455 | 20.36 | 8000 | 1.1643 | 0.6860 | 0.6559 |
| 0.2455 | 21.63 | 8500 | 1.1953 | 0.6841 | 0.6548 |
| 0.2181 | 22.9 | 9000 | 1.2043 | 0.6835 | 0.6516 |
| 0.2181 | 24.17 | 9500 | 1.2603 | 0.6867 | 0.6502 |
| 0.1894 | 25.45 | 10000 | 1.2652 | 0.6860 | 0.6552 |
| 0.1894 | 26.72 | 10500 | 1.2860 | 0.6790 | 0.6474 |
| 0.1757 | 27.99 | 11000 | 1.2892 | 0.6854 | 0.6541 |
| 0.1757 | 29.26 | 11500 | 1.3400 | 0.6803 | 0.6496 |
| 0.1599 | 30.53 | 12000 | 1.3630 | 0.6828 | 0.6493 |
| 0.1599 | 31.81 | 12500 | 1.3688 | 0.6854 | 0.6538 |
| 0.1531 | 33.08 | 13000 | 1.3962 | 0.6854 | 0.6534 |
| 0.1531 | 34.35 | 13500 | 1.4021 | 0.6841 | 0.6523 |
| 0.1452 | 35.62 | 14000 | 1.4029 | 0.6847 | 0.6524 |
| 0.1452 | 36.9 | 14500 | 1.4130 | 0.6886 | 0.6562 |
| 0.1391 | 38.17 | 15000 | 1.4203 | 0.6879 | 0.6553 |
| 0.1391 | 39.44 | 15500 | 1.4244 | 0.6873 | 0.6544 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DaniloTrotta/Kandinsky_2.1 | DaniloTrotta | 2023-05-23T15:43:07Z | 1 | 0 | open_clip | [
"open_clip",
"Kandinsky",
"text-image",
"text-to-image",
"license:apache-2.0",
"region:us"
] | text-to-image | 2023-05-23T12:00:50Z | ---
license: apache-2.0
tags:
- Kandinsky
- text-image
inference: true
pipeline_tag: text-to-image
library_name: open_clip
---
# Kandinsky 2.1
[Open In Colab](https://colab.research.google.com/drive/1xSbu-b-EwYd6GdaFPRVgvXBX_mciZ41e?usp=sharing)
[GitHub repository](https://github.com/ai-forever/Kandinsky-2)
[Habr post](https://habr.com/ru/company/sberbank/blog/725282/)
[Demo](https://rudalle.ru/)
## Architecture
Kandinsky 2.1 inherits best practices from DALL-E 2 and latent diffusion while introducing some new ideas.
As its text and image encoder it uses the CLIP model, together with a diffusion image prior that maps between the latent spaces of the CLIP modalities. This approach improves the visual quality of the model and opens new possibilities for blending images and for text-guided image manipulation.
For the diffusion mapping between latent spaces we use a transformer with num_layers=20, num_heads=32 and hidden_size=2048.

Other architecture parts:
+ Text encoder (XLM-Roberta-Large-Vit-L-14) - 560M
+ Diffusion Image Prior — 1B
+ CLIP image encoder (ViT-L/14) - 427M
+ Latent Diffusion U-Net - 1.22B
+ MoVQ encoder/decoder - 67M

# Authors
+ Arseniy Shakhmatov: [Github](https://github.com/cene555), [Blog](https://t.me/gradientdip)
+ Anton Razzhigaev: [Github](https://github.com/razzant), [Blog](https://t.me/abstractDL)
+ Aleksandr Nikolich: [Github](https://github.com/AlexWortega), [Blog](https://t.me/lovedeathtransformers)
+ Vladimir Arkhipkin: [Github](https://github.com/oriBetelgeuse)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
+ Andrey Kuznetsov: [Github](https://github.com/kuznetsoffandrey)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov) |
openllmplayground/pandagpt_7b_max_len_512 | openllmplayground | 2023-05-23T15:42:41Z | 0 | 1 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-05-17T18:13:29Z | ---
license: cc-by-nc-sa-4.0
---
This model contains the delta weights of PandaGPT built upon version 0 of the Vicuna-7B model with a maximum sequence length of 512. For more details on the model usage, please refer to our [main project repository](https://github.com/yxuansu/PandaGPT). |
openllmplayground/pandagpt_13b_max_len_256 | openllmplayground | 2023-05-23T15:42:15Z | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-05-18T16:33:04Z | ---
license: cc-by-nc-sa-4.0
---
This model contains the delta weights of PandaGPT built upon version 0 of the Vicuna-13B model with a maximum sequence length of 256. For more details on the model usage, please refer to our [main project repository](https://github.com/yxuansu/PandaGPT).
|
openllmplayground/pandagpt_13b_max_len_400 | openllmplayground | 2023-05-23T15:41:35Z | 0 | 6 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-05-21T05:50:07Z | ---
license: cc-by-nc-sa-4.0
---
This model contains the delta weights of PandaGPT built upon version 0 of the Vicuna-13B model with a maximum sequence length of 400. For more details on the model usage, please refer to our [main project repository](https://github.com/yxuansu/PandaGPT). |
openllmplayground/pandagpt_7b_max_len_1024 | openllmplayground | 2023-05-23T15:41:04Z | 0 | 8 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-05-22T07:37:53Z | ---
license: cc-by-nc-sa-4.0
---
This model contains the delta weights of PandaGPT built upon version 0 of the Vicuna-7B model with a maximum sequence length of 1024. For more details on the model usage, please refer to our [main project repository](https://github.com/yxuansu/PandaGPT). |
gerardoalemanm/roberta-large-peft-p-tunning | gerardoalemanm | 2023-05-23T15:37:56Z | 0 | 0 | null | [
"license:c-uda",
"region:us"
] | null | 2023-05-23T15:29:47Z | ---
license: c-uda
---
Model fine-tuned with PEFT (p-tuning) on top of the RoBERTa-large model.
Uses the GLUE dataset.
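A rough sketch of the kind of setup this implies; the task, label count, and p-tuning hyperparameters below are assumptions, not values taken from this repository.

```python
# Rough p-tuning setup with PEFT on roberta-large for a GLUE-style task (assumed values).
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PromptEncoderConfig, TaskType, get_peft_model

model_name = "roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

peft_config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,   # sequence classification, as in GLUE
    num_virtual_tokens=20,        # assumed value
    encoder_hidden_size=128,      # assumed value
)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()
```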
|
YakovElm/Qt15SetFitModel | YakovElm | 2023-05-23T15:36:58Z | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2023-05-23T15:36:20Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Qt15SetFitModel
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Qt15SetFitModel")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
skyadmin/cog-webui-sd | skyadmin | 2023-05-23T15:36:11Z | 67 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-05-10T13:36:02Z | # Chill Watcher
Consider deploying on:
- huggingface inference point
- replicate api
- lightning.ai
# platform comparison
> all support autoscaling
|platform|prediction speed|charges|deploy handiness|
|-|-|-|-|
|huggingface|fast:20s|high:$0.6/hr (without autoscaling)|easy:git push|
|replicate|fast if used frequently: 30s, slow if needs initialization: 5min|low: $0.02 per generation|difficult: build image and upload|
|lightning.ai|fast with app running: 20s, slow if idle: XXs|low: free $30 per month, $0.18 per init, $0.02 per run|easy: one command|
# platform deploy options
## huggingface
> [docs](https://huggingface.co/docs/inference-endpoints/guides/custom_handler)
- requirements: use pip packages in `requirements.txt`
- `init()` and `predict()` function: use `handler.py`, implement the `EndpointHandler` class
- more: modify `handler.py` for requests and inference and explore more highly-customized features
- deploy: git (lfs) push to the huggingface repository (the whole directory, including models, weights, etc.), and use Inference Endpoints to deploy. Click and deploy automatically, very simple.
- call api: use the url provided by Inference Endpoints after the endpoint is ready (built, initialized and in a "running" state), and make a POST request to that url using the request schema defined in `handler.py`. A minimal `handler.py` sketch is shown below.
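This is a minimal sketch of what such a `handler.py` can look like. The `EndpointHandler` interface (an `__init__(path)` and a `__call__(data)` method) is what Inference Endpoints expects; the diffusers pipeline and the request/response schema used here are assumptions standing in for the actual model code.

```python
# handler.py — minimal EndpointHandler sketch for Hugging Face Inference Endpoints.
from typing import Any, Dict, List

import torch
from diffusers import StableDiffusionPipeline  # placeholder model code (assumption)


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` is the repository directory that was pushed to the Hub.
        self.pipe = StableDiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16)
        self.pipe.to("cuda")

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # Assumed request schema: {"inputs": "<prompt>"}.
        prompt = data.get("inputs", "")
        image = self.pipe(prompt).images[0]
        # Return something JSON-serializable; here just the generated image size.
        return [{"prompt": prompt, "image_size": image.size}]
```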
## replicate
> [docs](https://replicate.com/docs/guides/push-a-model)
- requirements: specify all requirements(pip packages, system packages, python version, cuda, etc.) in `cog.yaml`
- `init()` and `predict()` function: use `predict.py`, implement the `Predictor` class
- more: modify `predict.py`
- deploy:
1. get a linux GPU machine with 60GB disk space;
2. install [cog](https://replicate.com/docs/guides/push-a-model) and [docker](https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository)
3. `git pull` the current repository from huggingface, including large model files
4. after `predict.py` and `cog.yaml` are correctly coded, run `cog login`, then `cog push`: cog will build a docker image locally and push the image to replicate. As the image can take 30GB or so of disk space, this costs a lot of network bandwidth.
- call api: if everything runs successfully and the docker image is pushed to replicate, you will see a web-ui and an API example directly in your replicate repository. A minimal `predict.py` sketch is shown below.
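This is a minimal `predict.py` sketch for the cog route. The `BasePredictor` interface (`setup()` plus a typed `predict()`) comes from cog; the model-loading code and the weights path are assumptions standing in for the actual project code.

```python
# predict.py — minimal cog Predictor sketch for Replicate.
import torch
from cog import BasePredictor, Input, Path
from diffusers import StableDiffusionPipeline  # placeholder model code (assumption)


class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts; load the weights here.
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "./weights", torch_dtype=torch.float16  # assumed local weights directory
        ).to("cuda")

    def predict(self, prompt: str = Input(description="Text prompt")) -> Path:
        image = self.pipe(prompt).images[0]
        out_path = "/tmp/output.png"
        image.save(out_path)
        return Path(out_path)
```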
## lightning.ai
> docs: [code](https://lightning.ai/docs/app/stable/levels/basic/real_lightning_component_implementations.html), [deploy](https://lightning.ai/docs/app/stable/workflows/run_app_on_cloud/)
- requirements:
- pip packages are listed in `requirements_lightning.txt`, because some requirements are different from those in huggingface. Rename it to `requirements.txt`
- other pip packages, system packages, and commands to download some large model weight files can be listed using a custom build config. Check out `class CustomBuildConfig(BuildConfig)` in `app.py`. In a custom build config you can use many linux commands such as `wget` and `sudo apt-get update`. The custom build config will be executed on the `__init__()` of the `PythonServer` class
- `init()` and `predict()` function: use `app.py`, implement the `PythonServer` class. Note:
- some packages haven't been installed when the file is first loaded (these packages may be installed when `__init__()` is called), so some import code should live inside the function, not at the top of the file, or you may get import errors.
- you can't save your own value to `PythonServer.self` unless it's predefined in the variables, so don't assign any self-defined variables to `self`
- if you use the custom build config, you should implement `PythonServer`'s `__init__()` yourself, so don't forget to use the correct function signature
- more: ...
- deploy:
- `pip install lightning`
- prepare the directory on your local computer(no need to have a GPU)
- list big files in the `.lightningignore` file to avoid big file upload and save deploy time cost
- run `lightning run app app.py --cloud` in the local terminal, and it will upload the files in the directory to lightning cloud, and start deploying on the cloud
- check error logs on the web-ui, use `all logs`
- call api: only if the app starts successfully, you can see a valid url in the `settings` page of the web-ui. Open that url, and you can see the api. A rough `app.py` sketch is shown below.
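This is a rough `app.py` sketch of the `PythonServer` route. The import paths and constructor arguments change between lightning versions, so treat them as assumptions and check them against the lightning docs linked above; the placeholder model is obviously not the real one.

```python
# app.py — rough PythonServer sketch for lightning.ai (import paths are assumptions;
# verify them against the lightning docs for your installed version).
import lightning as L
from lightning.app.components.serve import PythonServer
from pydantic import BaseModel


class TextInput(BaseModel):
    prompt: str


class TextOutput(BaseModel):
    result: str


class MyServer(PythonServer):
    def __init__(self, **kwargs):
        # If you use a custom BuildConfig, pass it via cloud_build_config=... here.
        super().__init__(input_type=TextInput, output_type=TextOutput, **kwargs)

    def setup(self):
        # Heavy imports and model loading go here, after the build config installed deps.
        self._model = lambda prompt: prompt.upper()  # placeholder model (assumption)

    def predict(self, request: TextInput) -> TextOutput:
        return TextOutput(result=self._model(request.prompt))


app = L.LightningApp(MyServer())
```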
### Some installation references:
install docker:
- https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository
install git-lfs:
- https://github.com/git-lfs/git-lfs/blob/main/INSTALLING.md
linux:
```
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
```
---
license: apache-2.0
---
|
kitrakrev/ppo-LunarLander-v2 | kitrakrev | 2023-05-23T15:29:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T15:29:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.88 +/- 16.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `<algo>-<env>.zip` convention):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed filename; check the repository's file list if this does not match.
checkpoint = load_from_hub(repo_id="kitrakrev/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Bailefan/dqn-SpaceInvadersNoFrameskip-v4 | Bailefan | 2023-05-23T15:20:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-23T15:19:37Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 687.00 +/- 135.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bailefan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bailefan -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bailefan
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
da2el/slskmz | da2el | 2023-05-23T15:14:14Z | 0 | 3 | null | [
"stable-diffusion",
"lora",
"school uniform",
"school swimsuit",
"ja",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-23T14:06:11Z | ---
license: creativeml-openrail-m
language:
- ja
tags:
- stable-diffusion
- lora
- school uniform
- school swimsuit
---
# Sailor School Swimsuit
セーラー服+スク水は人類の宝です。
Sailor school uniform + school swimsuits are treasures of humanity.
水手服+校服泳装是人类的宝藏。
## Features
スクール水着の上からセーラー服を着用した姿が生成されます。
※このLoRAを使わなくてもセーラー服とスク水の組み合わせは生成できます。ちょっと確率を上げる程度だと思ってください
The generated image depicts a person wearing a sailor uniform over a school swimsuit.
※Even without using LoRA, it is possible to generate a combination of sailor uniforms and school swimsuits. Please consider it as just slightly increasing the probability.
生成的图像描绘了一个穿着水手服的人在校服泳装上面。
※即使不使用 LoRA,也可以生成帆船制服和学校泳装的组合。请将其视为稍微增加了概率
## How to Use
- Clip skip: 2
- LoRA weight: 0.3 - 0.5
- prompt:
- Short sleeves: slskmz short
- Sleeveless: slskmz sleeveless
```
<lora:slskmz_v2:0.3>, slskmz short, sailor school uniform, school swimsuit, white shirt
```
`sailor school uniform, school swimsuit, white shirt` も入力したほうが出現率が上がります
If you also input `sailor school uniform, school swimsuit, white shirt`, the probability of them appearing will increase
如果您还输入了 `sailor school uniform, school swimsuit, white shirt` ,它们出现的概率将会增加。

## Author
daniel, Japan
|
braintML123/Tania | braintML123 | 2023-05-23T15:03:23Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-04-26T03:07:26Z | ---
license: creativeml-openrail-m
---
|
Jakehova/SimpleClassifierWithLLMs | Jakehova | 2023-05-23T14:56:28Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T14:21:48Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
This is a simple classifier using the 20 Newsgroups dataset.
## Model Details
Uses sklearn.datasets to pull 20 Newsgroups data.
It runs through a variety of transformers (I'm not sure if this is the right terminology) to classify the data provided.
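As a reference for the data-loading step, here is a minimal sketch using scikit-learn's built-in loader; the `remove=` filtering is an assumption about preprocessing, not something stated in this card.

```python
# Load the 20 Newsgroups dataset with scikit-learn.
from sklearn.datasets import fetch_20newsgroups

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

print(len(train.data), "training documents across", len(train.target_names), "classes")
```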
### Model Description
The focus of this is to fine tune a model with Text Classification.
- **Developed by:** FourthBrain
- **Model type:** Simple Classifier
- **Language(s) (NLP):** distilbert-base-uncased
- **License:** MIT (?)
## Uses
This is for a class, so it shouldn't be used for anything more than learning.
### Direct Use
Learning
## Training Details
### Training Data
[20 Newsgroups](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html)
[More Information Needed]
|
VinsmokeMir/FineTuning_Method_2_SC | VinsmokeMir | 2023-05-23T14:49:02Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-23T13:55:32Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FineTuning_Method_2_SC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FineTuning_Method_2_SC
This model is a fine-tuned version of [rafsankabir/Pretrained_E13_Method2](https://huggingface.co/rafsankabir/Pretrained_E13_Method2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3223
- Accuracy: 0.6790
- F1 Macro: 0.6487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| No log | 0.32 | 500 | 1.0745 | 0.3976 | 0.1896 |
| 1.0543 | 0.64 | 1000 | 0.9059 | 0.5967 | 0.4614 |
| 1.0543 | 0.95 | 1500 | 0.8259 | 0.6414 | 0.5633 |
| 0.8389 | 1.27 | 2000 | 0.8177 | 0.6394 | 0.5715 |
| 0.8389 | 1.59 | 2500 | 0.8269 | 0.6356 | 0.5724 |
| 0.7713 | 1.91 | 3000 | 0.7916 | 0.6631 | 0.6238 |
| 0.7713 | 2.23 | 3500 | 0.7996 | 0.6745 | 0.6155 |
| 0.6734 | 2.54 | 4000 | 0.7921 | 0.6624 | 0.6307 |
| 0.6734 | 2.86 | 4500 | 0.7743 | 0.6726 | 0.6459 |
| 0.6309 | 3.18 | 5000 | 0.8343 | 0.6803 | 0.6382 |
| 0.6309 | 3.5 | 5500 | 0.8233 | 0.6784 | 0.6390 |
| 0.5582 | 3.82 | 6000 | 0.8678 | 0.6631 | 0.6273 |
| 0.5582 | 4.13 | 6500 | 0.8621 | 0.6758 | 0.6368 |
| 0.4988 | 4.45 | 7000 | 0.9389 | 0.6720 | 0.6386 |
| 0.4988 | 4.77 | 7500 | 0.9067 | 0.6918 | 0.6505 |
| 0.4885 | 5.09 | 8000 | 0.9116 | 0.6937 | 0.6583 |
| 0.4885 | 5.41 | 8500 | 1.0357 | 0.6822 | 0.6459 |
| 0.427 | 5.73 | 9000 | 0.9428 | 0.6847 | 0.6479 |
| 0.427 | 6.04 | 9500 | 1.0233 | 0.6752 | 0.6531 |
| 0.4034 | 6.36 | 10000 | 1.1578 | 0.6835 | 0.6515 |
| 0.4034 | 6.68 | 10500 | 1.1870 | 0.6790 | 0.6545 |
| 0.4053 | 7.0 | 11000 | 1.0370 | 0.7007 | 0.6651 |
| 0.4053 | 7.32 | 11500 | 1.2087 | 0.6822 | 0.6497 |
| 0.3545 | 7.63 | 12000 | 1.2255 | 0.6847 | 0.6605 |
| 0.3545 | 7.95 | 12500 | 1.2710 | 0.6905 | 0.6609 |
| 0.3437 | 8.27 | 13000 | 1.3646 | 0.6918 | 0.6618 |
| 0.3437 | 8.59 | 13500 | 1.3767 | 0.6879 | 0.6563 |
| 0.3407 | 8.91 | 14000 | 1.2705 | 0.6796 | 0.6506 |
| 0.3407 | 9.22 | 14500 | 1.4605 | 0.6803 | 0.6496 |
| 0.2876 | 9.54 | 15000 | 1.4202 | 0.6860 | 0.6555 |
| 0.2876 | 9.86 | 15500 | 1.4151 | 0.6847 | 0.6517 |
| 0.3035 | 10.18 | 16000 | 1.4536 | 0.6713 | 0.6514 |
| 0.3035 | 10.5 | 16500 | 1.4806 | 0.6828 | 0.6469 |
| 0.2733 | 10.81 | 17000 | 1.4596 | 0.6899 | 0.6552 |
| 0.2733 | 11.13 | 17500 | 1.6183 | 0.6886 | 0.6557 |
| 0.2562 | 11.45 | 18000 | 1.6054 | 0.6771 | 0.6591 |
| 0.2562 | 11.77 | 18500 | 1.5966 | 0.6701 | 0.6503 |
| 0.2582 | 12.09 | 19000 | 1.5659 | 0.6822 | 0.6531 |
| 0.2582 | 12.4 | 19500 | 1.6146 | 0.6867 | 0.6575 |
| 0.2368 | 12.72 | 20000 | 1.6207 | 0.6899 | 0.6629 |
| 0.2368 | 13.04 | 20500 | 1.5220 | 0.6918 | 0.6640 |
| 0.245 | 13.36 | 21000 | 1.6572 | 0.6720 | 0.6489 |
| 0.245 | 13.68 | 21500 | 1.6443 | 0.6860 | 0.6590 |
| 0.2226 | 13.99 | 22000 | 1.6238 | 0.6847 | 0.6589 |
| 0.2226 | 14.31 | 22500 | 1.7241 | 0.6777 | 0.6521 |
| 0.2117 | 14.63 | 23000 | 1.6134 | 0.6867 | 0.6580 |
| 0.2117 | 14.95 | 23500 | 1.6723 | 0.6911 | 0.6618 |
| 0.2056 | 15.27 | 24000 | 1.6257 | 0.6892 | 0.6529 |
| 0.2056 | 15.59 | 24500 | 1.7072 | 0.6796 | 0.6531 |
| 0.1859 | 15.9 | 25000 | 1.7174 | 0.6771 | 0.6554 |
| 0.1859 | 16.22 | 25500 | 1.6951 | 0.6879 | 0.6555 |
| 0.1725 | 16.54 | 26000 | 1.7240 | 0.6905 | 0.6632 |
| 0.1725 | 16.86 | 26500 | 1.7126 | 0.6879 | 0.6608 |
| 0.1817 | 17.18 | 27000 | 1.7949 | 0.6847 | 0.6520 |
| 0.1817 | 17.49 | 27500 | 1.7694 | 0.6911 | 0.6622 |
| 0.1617 | 17.81 | 28000 | 1.7891 | 0.6828 | 0.6527 |
| 0.1617 | 18.13 | 28500 | 1.7860 | 0.6790 | 0.6526 |
| 0.1628 | 18.45 | 29000 | 1.8127 | 0.6867 | 0.6605 |
| 0.1628 | 18.77 | 29500 | 1.7317 | 0.6892 | 0.6610 |
| 0.1736 | 19.08 | 30000 | 1.7273 | 0.6899 | 0.6569 |
| 0.1736 | 19.4 | 30500 | 1.7853 | 0.6854 | 0.6584 |
| 0.1441 | 19.72 | 31000 | 1.7866 | 0.6918 | 0.6624 |
| 0.1441 | 20.04 | 31500 | 1.7842 | 0.6873 | 0.6580 |
| 0.1392 | 20.36 | 32000 | 1.8669 | 0.6860 | 0.6597 |
| 0.1392 | 20.67 | 32500 | 1.8392 | 0.6899 | 0.6639 |
| 0.159 | 20.99 | 33000 | 1.8412 | 0.6784 | 0.6552 |
| 0.159 | 21.31 | 33500 | 1.8673 | 0.6854 | 0.6584 |
| 0.1275 | 21.63 | 34000 | 1.8622 | 0.6854 | 0.6571 |
| 0.1275 | 21.95 | 34500 | 1.8622 | 0.6796 | 0.6583 |
| 0.1216 | 22.26 | 35000 | 1.9509 | 0.6854 | 0.6604 |
| 0.1216 | 22.58 | 35500 | 1.9425 | 0.6809 | 0.6550 |
| 0.1351 | 22.9 | 36000 | 1.9496 | 0.6784 | 0.6559 |
| 0.1351 | 23.22 | 36500 | 1.9685 | 0.6847 | 0.6582 |
| 0.1221 | 23.54 | 37000 | 1.9112 | 0.6911 | 0.6642 |
| 0.1221 | 23.85 | 37500 | 1.9341 | 0.6726 | 0.6526 |
| 0.1155 | 24.17 | 38000 | 1.9573 | 0.6899 | 0.6614 |
| 0.1155 | 24.49 | 38500 | 1.9853 | 0.6873 | 0.6580 |
| 0.1139 | 24.81 | 39000 | 1.9915 | 0.6790 | 0.6533 |
| 0.1139 | 25.13 | 39500 | 1.9997 | 0.6796 | 0.6539 |
| 0.1166 | 25.45 | 40000 | 1.9994 | 0.6847 | 0.6592 |
| 0.1166 | 25.76 | 40500 | 1.9848 | 0.6745 | 0.6513 |
| 0.1128 | 26.08 | 41000 | 2.0095 | 0.6867 | 0.6578 |
| 0.1128 | 26.4 | 41500 | 2.0585 | 0.6822 | 0.6547 |
| 0.1048 | 26.72 | 42000 | 2.0293 | 0.6777 | 0.6510 |
| 0.1048 | 27.04 | 42500 | 2.0797 | 0.6758 | 0.6512 |
| 0.1 | 27.35 | 43000 | 2.1162 | 0.6822 | 0.6544 |
| 0.1 | 27.67 | 43500 | 2.0569 | 0.6835 | 0.6538 |
| 0.1106 | 27.99 | 44000 | 2.0991 | 0.6828 | 0.6565 |
| 0.1106 | 28.31 | 44500 | 2.0976 | 0.6841 | 0.6563 |
| 0.0886 | 28.63 | 45000 | 2.1305 | 0.6854 | 0.6532 |
| 0.0886 | 28.94 | 45500 | 2.1015 | 0.6867 | 0.6564 |
| 0.1027 | 29.26 | 46000 | 2.1105 | 0.6867 | 0.6559 |
| 0.1027 | 29.58 | 46500 | 2.1396 | 0.6765 | 0.6499 |
| 0.1057 | 29.9 | 47000 | 2.1237 | 0.6790 | 0.6501 |
| 0.1057 | 30.22 | 47500 | 2.1849 | 0.6790 | 0.6518 |
| 0.0876 | 30.53 | 48000 | 2.1346 | 0.6841 | 0.6533 |
| 0.0876 | 30.85 | 48500 | 2.1441 | 0.6828 | 0.6540 |
| 0.0856 | 31.17 | 49000 | 2.1528 | 0.6911 | 0.6600 |
| 0.0856 | 31.49 | 49500 | 2.1725 | 0.6847 | 0.6509 |
| 0.0869 | 31.81 | 50000 | 2.2085 | 0.6771 | 0.6503 |
| 0.0869 | 32.12 | 50500 | 2.2606 | 0.6688 | 0.6434 |
| 0.0848 | 32.44 | 51000 | 2.2510 | 0.6745 | 0.6451 |
| 0.0848 | 32.76 | 51500 | 2.2528 | 0.6739 | 0.6496 |
| 0.0816 | 33.08 | 52000 | 2.2532 | 0.6758 | 0.6503 |
| 0.0816 | 33.4 | 52500 | 2.2356 | 0.6803 | 0.6500 |
| 0.0793 | 33.72 | 53000 | 2.2579 | 0.6745 | 0.6483 |
| 0.0793 | 34.03 | 53500 | 2.2126 | 0.6816 | 0.6520 |
| 0.0767 | 34.35 | 54000 | 2.2504 | 0.6803 | 0.6497 |
| 0.0767 | 34.67 | 54500 | 2.2601 | 0.6803 | 0.6524 |
| 0.0844 | 34.99 | 55000 | 2.2785 | 0.6733 | 0.6470 |
| 0.0844 | 35.31 | 55500 | 2.2756 | 0.6784 | 0.6520 |
| 0.0755 | 35.62 | 56000 | 2.2813 | 0.6816 | 0.6542 |
| 0.0755 | 35.94 | 56500 | 2.2752 | 0.6803 | 0.6518 |
| 0.077 | 36.26 | 57000 | 2.2815 | 0.6796 | 0.6518 |
| 0.077 | 36.58 | 57500 | 2.2861 | 0.6803 | 0.6514 |
| 0.0752 | 36.9 | 58000 | 2.2929 | 0.6771 | 0.6505 |
| 0.0752 | 37.21 | 58500 | 2.2859 | 0.6816 | 0.6537 |
| 0.0698 | 37.53 | 59000 | 2.3117 | 0.6796 | 0.6525 |
| 0.0698 | 37.85 | 59500 | 2.3038 | 0.6816 | 0.6511 |
| 0.0613 | 38.17 | 60000 | 2.3176 | 0.6765 | 0.6477 |
| 0.0613 | 38.49 | 60500 | 2.3131 | 0.6796 | 0.6493 |
| 0.0706 | 38.8 | 61000 | 2.3161 | 0.6777 | 0.6477 |
| 0.0706 | 39.12 | 61500 | 2.3127 | 0.6784 | 0.6484 |
| 0.0678 | 39.44 | 62000 | 2.3174 | 0.6765 | 0.6467 |
| 0.0678 | 39.76 | 62500 | 2.3223 | 0.6790 | 0.6487 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thomasavare/distilbert-ft-test3 | thomasavare | 2023-05-23T14:41:21Z | 66 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-07T10:08:10Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-ft-test3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-ft-test3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on [thomasavare/waste-classification-v2](https://huggingface.co/datasets/thomasavare/waste-classification-v2).
It is part of my master's thesis at Politecnico di Torino in partnership with ReLearn.
It achieves the following results on the test set:

| accuracy | precision | recall | f1     |
|----------|-----------|--------|--------|
| 0.974    | 0.9805    | 0.9732 | 0.9725 |

## Model description
DistilBERT fine-tuned for waste classification on 50 different classes as part of my master's thesis at Politecnico di Torino.
## Intended uses & limitations
Use for waste classification on 50 different waste classes (see [dataset](https://huggingface.co/datasets/thomasavare/waste-classification-v2))
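A minimal inference sketch is shown below; it assumes the hosted TensorFlow weights and tokenizer load through the standard `pipeline` API, and the example sentence is made up.

```python
# Quick inference sketch for the waste classifier (TensorFlow weights => framework="tf").
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="thomasavare/distilbert-ft-test3",
    framework="tf",
)
print(classifier("empty plastic water bottle"))
```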
## Training and evaluation data
[waste-classification-v2 dataset](https://huggingface.co/datasets/thomasavare/waste-classification-v2)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AlexPerkin/distilbert-base-uncased-finetuned-squad | AlexPerkin | 2023-05-23T14:35:13Z | 115 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-23T12:23:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1517
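A minimal question-answering sketch with this checkpoint is shown below; the question/context pair is made up for illustration.

```python
# Quick extractive QA sketch using this fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="AlexPerkin/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```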
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2334 | 1.0 | 5533 | 1.1622 |
| 0.9541 | 2.0 | 11066 | 1.1228 |
| 0.7519 | 3.0 | 16599 | 1.1517 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
devraj4522/sentiment-analys | devraj4522 | 2023-05-23T14:32:10Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-23T13:08:17Z | # Social Media Sentiment analysis
@app.route('/predict-str', methods=['POST'])
def predict_message():
data = request.json
message = data.get('message', '')
start_time = time.time()
prediction = predictor.predict(message)
response = {
'message': message,
'prediction': prediction,
'elapsed_time': time.time() - start_time
}
return jsonify(response)
@app.route('/predict-list', methods=['POST'])
def predict_list():
data = request.json
messages = data.get('messages', [])
start_time = time.time()
predictions = predictor.predict(messages)
response = {
'messages': messages,
'predictions': predictions,
'elapsed_time': time.time() - start_time
}
return jsonify(response)
if __name__ == '__main__':
app.run()
## API Endpoints
- predict-str: Predicts the sentiment of a single message
- parameters: message
- json response: {message, prediction, elapsed_time}
- predict-list: Predicts the sentiment of a list of messages
- parameters: messages (list)
- json response: {messages, predictions, elapsed_time}
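A small client-side sketch of calling these endpoints; the host and port assume the default Flask development server.

```python
# Example client for the two endpoints above (assumes the Flask dev server on localhost:5000).
import requests

single = requests.post(
    "http://localhost:5000/predict-str",
    json={"message": "I love this product!"},
)
print(single.json())  # {"message": ..., "prediction": ..., "elapsed_time": ...}

batch = requests.post(
    "http://localhost:5000/predict-list",
    json={"messages": ["great service", "terrible experience"]},
)
print(batch.json())
```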
|
HanNayeoniee/my_awesome_eli5_clm-model | HanNayeoniee | 2023-05-23T14:31:42Z | 211 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-23T13:45:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7255
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8888 | 1.0 | 565 | 3.7433 |
| 3.8141 | 2.0 | 1130 | 3.7292 |
| 3.7701 | 3.0 | 1695 | 3.7255 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.7.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
aravind-selvam/x_rotated | aravind-selvam | 2023-05-23T14:20:23Z | 45 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-05-23T13:07:14Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: x_rotated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# x_rotated
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1574
- Cer: 0.0377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2067 | 1.0 | 500 | 0.2103 | 0.0538 |
| 0.1043 | 2.0 | 1000 | 0.1667 | 0.0445 |
| 0.0667 | 3.0 | 1500 | 0.1571 | 0.0388 |
| 0.0489 | 4.0 | 2000 | 0.1574 | 0.0377 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Subsets and Splits