Schema of the records that follow (one model entry per record; types and observed ranges are as reported by the preview):

| column | type | observed range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-02 12:29:30 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 548 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-02 12:29:18 |
| card | string | length 11 to 1.01M |
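If this preview was exported from a Hub-hosted dataset, a minimal sketch of loading and filtering it with the `datasets` library could look like the following; the repo id `"<namespace>/<dataset-name>"` is a placeholder, since the preview does not name its source.

```python
from datasets import load_dataset

# Placeholder repo id: replace with the dataset this preview was exported from.
ds = load_dataset("<namespace>/<dataset-name>", split="train")

# Columns mirror the schema above: modelId, author, last_modified, downloads,
# likes, library_name, tags, pipeline_tag, createdAt, card.
print(ds.column_names)

# Example: transformers models tagged for text2text-generation
# with at least 100 downloads.
subset = ds.filter(
    lambda row: row["library_name"] == "transformers"
    and row["pipeline_tag"] == "text2text-generation"
    and row["downloads"] >= 100
)
print(len(subset), subset[0]["modelId"])
```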
sai1881/flan-t5-small-Forecast
sai1881
2023-05-14T07:54:59Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-14T07:44:45Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: flan-t5-small-Forecast results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-small-Forecast This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 189 | 0.0145 | | No log | 2.0 | 378 | 0.0122 | | 0.0938 | 3.0 | 567 | 0.0116 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
50stars/fine-tuned-model
50stars
2023-05-14T07:18:38Z
157
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-14T07:04:23Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: fine-tuned-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Score | Jaccard Score | Average Precision Score | Percentage Examples At Least 1 True | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:-------------:|:-----------------------:|:-----------------------------------:| | No log | 1.0 | 5 | 0.6742 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2880 | 0.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.0+cu118 - Tokenizers 0.13.3
trachi123/CK_T5
trachi123
2023-05-14T06:40:22Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:mt_eng_vietnamese", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-13T18:14:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - mt_eng_vietnamese metrics: - bleu model-index: - name: CK_T5 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: mt_eng_vietnamese type: mt_eng_vietnamese config: iwslt2015-vi-en split: test args: iwslt2015-vi-en metrics: - name: Bleu type: bleu value: 0.1851 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CK_T5 This model is a fine-tuned version of [T5-small](https://huggingface.co/T5-small) on the mt_eng_vietnamese dataset. It achieves the following results on the evaluation set: - Loss: 1.4489 - Bleu: 0.1851 - Gen Len: 18.751 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.6572 | 1.0 | 8333 | 1.5155 | 0.0992 | 18.7864 | | 1.5895 | 2.0 | 16666 | 1.4489 | 0.1851 | 18.751 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
digitous/GPT-ClutserFUsion
digitous
2023-05-14T06:39:06Z
14
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "alpaca", "merge", "mix", "alpacino", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-13T20:29:55Z
--- tags: - llama - alpaca - merge - mix - alpacino --- This is an even Octa merge of: Alpacino+Elina+MedAlpaca+Story+GPT4Aalpaca+VincunaUnlocked+COT+HH. Then, for good measure, ChanSung's Alpaca was 50/50 merged with the result. A fun experiment. ChanSung's Alpaca seems fairly uncensored, so the final pass was done to give Alpaca prompting a dominant edge. For now there is only a CUDA GPTQ quant, compatible with `git clone https://github.com/0cc4m/KoboldAI -b latestgptq` and very likely Text Generation WebUI. Original weights and original author credits will be added in the coming days.
doitopojare/latulip
doitopojare
2023-05-14T06:15:45Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-14T05:47:48Z
--- license: creativeml-openrail-m ---
huggingliang/distilbert-finetuned-squad
huggingliang
2023-05-14T05:50:26Z
59
0
transformers
[ "transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-14T04:49:01Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: huggingliang/distilbert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # huggingliang/distilbert-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5491 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5532, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.5491 | 0 | ### Framework versions - Transformers 4.29.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
NathanS-HuggingFace/A2C-ReachDense
NathanS-HuggingFace
2023-05-14T05:26:46Z
3
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-01T02:07:06Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.54 +/- 0.47 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
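The card above leaves its usage block as a TODO. A minimal sketch of fetching and loading this checkpoint with `huggingface_sb3` is shown below; the filename follows the usual `a2c-PandaReachDense-v2.zip` convention, which is an assumption the card itself does not confirm.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename; check the repo's file listing if this does not resolve.
checkpoint_path = load_from_hub(
    repo_id="NathanS-HuggingFace/A2C-ReachDense",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint_path)
print(model.policy)
```

Actually evaluating the agent additionally requires an environment package that registers `PandaReachDense-v2` (for example `panda-gym`).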
jasonsurya0/BART_ELEVEN
jasonsurya0
2023-05-14T05:10:36Z
106
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-14T04:24:35Z
BART model #11, pretrained on XSUM and fine-tuned on SAMSum.
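The card above gives no usage snippet. A minimal sketch of querying this checkpoint through the standard `transformers` summarization pipeline follows; treating it as a summarizer is an assumption based on the stated XSUM/SAMSum fine-tuning, not something the card documents.

```python
from transformers import pipeline

# SAMSum is a dialogue-summarization corpus, so dialogue-style input is a natural fit.
summarizer = pipeline("summarization", model="jasonsurya0/BART_ELEVEN")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```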
rudrransh/tweet_generator
rudrransh
2023-05-14T04:58:01Z
0
0
null
[ "text2text-generation", "license:apache-2.0", "region:us" ]
text2text-generation
2023-05-14T04:13:42Z
--- license: apache-2.0 pipeline_tag: text2text-generation ---
NathanS-HuggingFace/SpaceInvadersNoFrameskip
NathanS-HuggingFace
2023-05-14T04:23:18Z
8
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-04-16T14:22:01Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 694.50 +/- 243.63 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NathanS-HuggingFace -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NathanS-HuggingFace -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NathanS-HuggingFace ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Vandy/phobert_shared-vietnews
Vandy
2023-05-14T03:59:11Z
102
0
transformers
[ "transformers", "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-14T02:23:08Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: phobert_shared-vietnews results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phobert_shared-vietnews This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4872 - Rouge1: 47.252 - Rouge2: 12.3801 - Rougel: 27.9535 - Rougelsum: 31.1165 - Gen Len: 25.3494 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 4.1717 | 1.0 | 619 | 3.7817 | 46.287 | 10.7692 | 27.3606 | 30.4956 | 25.2656 | | 3.6535 | 2.0 | 1239 | 3.5421 | 46.9278 | 11.7595 | 27.5206 | 30.7512 | 25.9089 | | 3.434 | 3.0 | 1857 | 3.4872 | 47.252 | 12.3801 | 27.9535 | 31.1165 | 25.3494 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Woohun/finetuned-facebook-bart-base
Woohun
2023-05-14T03:44:40Z
104
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-14T03:16:16Z
--- tags: - generated_from_trainer model-index: - name: finetuned-facebook-bart-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-facebook-bart-base This model is a fine-tuned version of [../tmp/bart-abst-summarization](https://huggingface.co/../tmp/bart-abst-summarization) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.30.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
cs608/billsum-model
cs608
2023-05-14T03:28:24Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "summarization", "generated_from_trainer", "dataset:billsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-05-14T02:20:50Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer datasets: - billsum metrics: - rouge model-index: - name: CS685-text-summarizer-2 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: billsum type: billsum config: default split: train[:20%] args: default metrics: - name: Rouge1 type: rouge value: 17.1607 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS685-text-summarizer-2 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 1.7651 - Rouge1: 17.1607 - Rouge2: 13.943 - Rougel: 16.6793 - Rougelsum: 16.8422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 2.4547 | 1.0 | 569 | 1.9895 | 16.6343 | 13.0432 | 16.1262 | 16.2449 | | 2.0246 | 2.0 | 1138 | 1.8688 | 16.939 | 13.4711 | 16.4359 | 16.5797 | | 1.818 | 3.0 | 1707 | 1.8075 | 17.1388 | 13.827 | 16.6136 | 16.7574 | | 1.6831 | 4.0 | 2276 | 1.7744 | 17.2292 | 13.9353 | 16.6961 | 16.8786 | | 1.5956 | 5.0 | 2845 | 1.7651 | 17.1607 | 13.943 | 16.6793 | 16.8422 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
jokyere49/q-FrozenLake-v1-4x4-noSlippery
jokyere49
2023-05-14T03:23:39Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-14T03:23:36Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="jokyere49/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
dian34323/raishajkt48
dian34323
2023-05-14T03:08:07Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-14T03:06:19Z
--- license: creativeml-openrail-m ---
Felix555/LunarLander-v2
Felix555
2023-05-14T02:41:55Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-05-14T02:41:45Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -126.53 +/- 47.03 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Felix555/LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
tamhuynh27/xlmroberta-finetuned-recipeqa-modified
tamhuynh27
2023-05-14T02:36:35Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-05-03T19:16:15Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlmroberta-finetuned-recipeqa-modified results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlmroberta-finetuned-recipeqa-modified This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
tapias/layoutlmv3-finetuned-cord_100
tapias
2023-05-14T02:34:11Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:cord-layoutlmv3", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-14T02:16:47Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - cord-layoutlmv3 metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-cord_100 results: - task: name: Token Classification type: token-classification dataset: name: cord-layoutlmv3 type: cord-layoutlmv3 config: cord split: test args: cord metrics: - name: Precision type: precision value: 0.9430473372781065 - name: Recall type: recall value: 0.9543413173652695 - name: F1 type: f1 value: 0.9486607142857143 - name: Accuracy type: accuracy value: 0.9579796264855688 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-cord_100 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset. It achieves the following results on the evaluation set: - Loss: 0.2188 - Precision: 0.9430 - Recall: 0.9543 - F1: 0.9487 - Accuracy: 0.9580 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 5 - eval_batch_size: 5 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.56 | 250 | 1.0024 | 0.7392 | 0.7957 | 0.7664 | 0.8060 | | 1.3949 | 3.12 | 500 | 0.5684 | 0.8330 | 0.8660 | 0.8492 | 0.8727 | | 1.3949 | 4.69 | 750 | 0.3929 | 0.8931 | 0.9072 | 0.9001 | 0.9160 | | 0.3964 | 6.25 | 1000 | 0.3312 | 0.9236 | 0.9326 | 0.9281 | 0.9321 | | 0.3964 | 7.81 | 1250 | 0.2754 | 0.9275 | 0.9386 | 0.9330 | 0.9410 | | 0.216 | 9.38 | 1500 | 0.2447 | 0.9328 | 0.9454 | 0.9390 | 0.9478 | | 0.216 | 10.94 | 1750 | 0.2467 | 0.9363 | 0.9461 | 0.9412 | 0.9478 | | 0.1534 | 12.5 | 2000 | 0.2300 | 0.9436 | 0.9521 | 0.9478 | 0.9537 | | 0.1534 | 14.06 | 2250 | 0.2155 | 0.9459 | 0.9558 | 0.9509 | 0.9597 | | 0.119 | 15.62 | 2500 | 0.2188 | 0.9430 | 0.9543 | 0.9487 | 0.9580 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
thu-coai/blenderbot-1B-augesc
thu-coai
2023-05-14T02:30:19Z
18
3
transformers
[ "transformers", "pytorch", "safetensors", "blenderbot", "text2text-generation", "conversational", "en", "arxiv:2202.13047", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-01-12T11:08:57Z
--- language: - en pipeline_tag: conversational tags: - pytorch license: cc-by-nc-4.0 --- [blenderbot-1B-distill](https://huggingface.co/facebook/blenderbot-1B-distill) fine-tuned on the [ESConv dataset](https://github.com/thu-coai/Emotional-Support-Conversation) and [**AugESC dataset**](https://github.com/thu-coai/AugESC). See the [original paper](https://arxiv.org/abs/2202.13047) for details. Usage example: ```python import torch from transformers import AutoTokenizer from transformers.models.blenderbot import BlenderbotTokenizer, BlenderbotForConditionalGeneration def _norm(x): return ' '.join(x.strip().split()) tokenizer = BlenderbotTokenizer.from_pretrained('thu-coai/blenderbot-1B-augesc') model = BlenderbotForConditionalGeneration.from_pretrained('thu-coai/blenderbot-1B-augesc') model.eval() utterances = [ "I am having a lot of anxiety about quitting my current job. It is too stressful but pays well", "What makes your job stressful for you?", "I have to deal with many people in hard financial situations and it is upsetting", "Do you help your clients to make it to a better financial situation?", "I do, but often they are not going to get back to what they want. Many people are going to lose their home when safeguards are lifted", ] input_sequence = ' '.join([' ' + e for e in utterances]) + tokenizer.eos_token # add space prefix and separate utterances with two spaces input_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(input_sequence))[-128:] input_ids = torch.LongTensor([input_ids]) model_output = model.generate(input_ids, num_beams=1, do_sample=True, top_p=0.9, num_return_sequences=5, return_dict=False) generation = tokenizer.batch_decode(model_output, skip_special_tokens=True) generation = [_norm(e) for e in generation] print(generation) utterances.append(generation[0]) # for future loop ``` Please kindly cite our papers if you use this model: ```bib @inproceedings{liu-etal-2021-towards, title={Towards Emotional Support Dialog Systems}, author={Liu, Siyang and Zheng, Chujie and Demasi, Orianna and Sabour, Sahand and Li, Yu and Yu, Zhou and Jiang, Yong and Huang, Minlie}, booktitle={ACL}, year={2021} } @inproceedings{zheng-etal-2023-augesc, title={AugESC: Dialogue Augmentation with Large Language Models for Emotional Support Conversation}, author={Zheng, Chujie and Sabour, Sahand and Wen, Jiaxin and Zhang, Zheng and Huang, Minlie}, booktitle={Findings of ACL}, year={2023} } ```
jontromanab/a2c-AntBulletEnv-v0
jontromanab
2023-05-14T02:23:54Z
4
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-14T02:23:28Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1586.99 +/- 92.87 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
jojo0616/my_SA_distilbert_model_finalversion
jojo0616
2023-05-14T02:19:30Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-14T01:29:32Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_SA_distilbert_model_finalversion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_SA_distilbert_model_finalversion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3031 - Accuracy: 0.9115 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3696 | 1.0 | 2248 | 0.3310 | 0.8852 | | 0.2624 | 2.0 | 4496 | 0.3118 | 0.9063 | | 0.1817 | 3.0 | 6744 | 0.3314 | 0.9072 | | 0.1398 | 4.0 | 8992 | 0.3031 | 0.9115 | | 0.1294 | 5.0 | 11240 | 0.3801 | 0.9110 | | 0.0974 | 6.0 | 13488 | 0.3968 | 0.9059 | | 0.0662 | 7.0 | 15736 | 0.4742 | 0.9177 | | 0.0634 | 8.0 | 17984 | 0.5182 | 0.9150 | | 0.0377 | 9.0 | 20232 | 0.5356 | 0.9159 | | 0.0298 | 10.0 | 22480 | 0.5717 | 0.9139 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
nolanaatama/kdllora
nolanaatama
2023-05-14T01:32:43Z
0
14
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-10T22:55:15Z
--- license: creativeml-openrail-m ---
guoguangjie/my_wikilingua_model2
guoguangjie
2023-05-14T00:40:25Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-14T00:32:44Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: my_wikilingua_model2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_wikilingua_model2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5821 - Rouge1: 0.2402 - Rouge2: 0.0747 - Rougel: 0.1991 - Rougelsum: 0.1993 - Gen Len: 18.82 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 100 | 2.6861 | 0.2284 | 0.0646 | 0.1832 | 0.183 | 18.9375 | | No log | 2.0 | 200 | 2.6137 | 0.2343 | 0.0704 | 0.1919 | 0.1916 | 18.84 | | No log | 3.0 | 300 | 2.5890 | 0.2384 | 0.0729 | 0.1967 | 0.1966 | 18.88 | | No log | 4.0 | 400 | 2.5821 | 0.2402 | 0.0747 | 0.1991 | 0.1993 | 18.82 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
BebyJenita/bebyjenita
BebyJenita
2023-05-14T00:03:03Z
0
0
null
[ "id", "arxiv:1910.09700", "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T22:21:06Z
--- license: creativeml-openrail-m language: - id --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
messerb5467/Taxi-v3
messerb5467
2023-05-13T23:45:48Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T23:45:42Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="messerb5467/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
ymmttks/ShoppingArcade
ymmttks
2023-05-13T23:42:40Z
0
0
null
[ "region:us" ]
null
2023-05-13T23:24:18Z
# ShoppingArcade(Japan) ## Trigger Word ``` AKEDO ``` ## Sample <img src="https://huggingface.co/ymmttks/ShoppingArcade/resolve/main/samples/00014-1girl AKEDO.png" width="512"> <img src="https://huggingface.co/ymmttks/ShoppingArcade/resolve/main/samples/00031-1girl AKEDO.png" width="512">
Kardbord/openjourney-unsafe
Kardbord
2023-05-13T23:12:09Z
18
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-13T21:10:07Z
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - text-to-image inference: true --- # Overview This is simply prompthero/openjourney with the safety checker disabled. **DO NOT** attempt to use this model to generate harmful or illegal content. # Openjourney is an open source Stable Diffusion fine tuned model on Midjourney images, by [PromptHero](https://prompthero.com/poolsuite-diffusion-prompts?utm_source=huggingface&utm_medium=referral) Include **'mdjrny-v4 style'** in prompt. Here you'll find hundreds of [Openjourney prompts](https://prompthero.com/openjourney-prompts?utm_source=huggingface&utm_medium=referral) # Openjourney Links - [Lora version](https://huggingface.co/prompthero/openjourney-lora) - [Openjourney v4](https://huggingface.co/prompthero/openjourney-v2) # Want to learn AI art generation?: - [Crash course in AI art generation](https://prompthero.com/academy/prompt-engineering-course?utm_source=huggingface&utm_medium=referral) - [Learn to fine-tune Stable Diffusion for photorealism](https://prompthero.com/academy/dreambooth-stable-diffusion-train-fine-tune-course?utm_source=huggingface&utm_medium=referral) # Use it for free: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/midjourney-v4-diffusion) ### Stable Diffusion v1.5 vs Openjourney (Same parameters, just added "mdjrny-v4 style" at the beginning): <img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587642-63265d019f9d19bfd4f45031.png" width="100%"/> <img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587623-63265d019f9d19bfd4f45031.png" width="100%"/> <img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587609-63265d019f9d19bfd4f45031.png" width="100%"/> <img src="https://s3.amazonaws.com/moonup/production/uploads/1667904587646-63265d019f9d19bfd4f45031.png" width="100%"/> ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python from diffusers import StableDiffusionPipeline import torch model_id = "prompthero/openjourney" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "retro serie of different cars with different colors and shapes, mdjrny-v4 style" image = pipe(prompt).images[0] image.save("./retro_cars.png") ```
swadesh7/finetuning-l3-bert-latest
swadesh7
2023-05-13T23:11:18Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-13T23:04:32Z
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: finetuning-l3-bert-latest results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-l3-bert-latest This model is a fine-tuned version of [l3cube-pune/telugu-bert](https://huggingface.co/l3cube-pune/telugu-bert) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6283 - eval_accuracy: 0.7558 - eval_f1: 0.7529 - eval_runtime: 79.9067 - eval_samples_per_second: 51.61 - eval_steps_per_second: 6.458 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.29.0 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
minoosh/videomae-base-finetuned-IEMOCAP_videos
minoosh
2023-05-13T22:41:23Z
62
0
transformers
[ "transformers", "pytorch", "videomae", "video-classification", "generated_from_trainer", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2023-05-13T15:52:08Z
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-IEMOCAP_videos results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-IEMOCAP_videos This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3194 - Accuracy: 0.3761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 4070 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4161 | 0.1 | 408 | 1.4228 | 0.2115 | | 1.3522 | 1.1 | 816 | 1.3968 | 0.2363 | | 1.2575 | 2.1 | 1224 | 1.4228 | 0.3115 | | 1.2897 | 3.1 | 1632 | 1.4101 | 0.2984 | | 1.3398 | 4.1 | 2040 | 1.4176 | 0.2599 | | 1.3621 | 5.1 | 2448 | 1.3590 | 0.2830 | | 1.2824 | 6.1 | 2856 | 1.3133 | 0.3610 | | 1.3064 | 7.1 | 3264 | 1.3195 | 0.3077 | | 1.378 | 8.1 | 3672 | 1.3562 | 0.2643 | | 1.1909 | 9.1 | 4070 | 1.3917 | 0.2621 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
tamhuynh27/ernie-base-2.0en-finetuned-recipeqa-modified
tamhuynh27
2023-05-13T22:23:02Z
88
2
transformers
[ "transformers", "pytorch", "tensorboard", "ernie", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2023-05-13T21:07:54Z
--- tags: - generated_from_trainer model-index: - name: ernie-base-2.0en-finetuned-recipeqa-modified-updated results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ernie-base-2.0en-finetuned-recipeqa-modified-updated This model is a fine-tuned version of [nghuyong/ernie-2.0-base-en](https://huggingface.co/nghuyong/ernie-2.0-base-en) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
DeadBeast/random-animals-birds
DeadBeast
2023-05-13T22:22:58Z
30
3
diffusers
[ "diffusers", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "animal", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-05-13T22:04:19Z
--- license: creativeml-openrail-m tags: - pytorch - diffusers - stable-diffusion - text-to-image - diffusion-models-class - dreambooth-hackathon - animal widget: - text: a photo of lion in new york --- # DreamBooth model for the animal concept trained by DeadBeast on the DeadBeast/dreambooth-images dataset. This is a Stable Diffusion model fine-tuned on the animal concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of animal dog** ## Description This is a Stable Diffusion model fine-tuned on random animal,birds unsplash images for the animals theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('DeadBeast/random-animals-birds') image = pipeline().images[0] image ```
notstoic/OPT-13B-Erebus-4bit-128g
notstoic
2023-05-13T22:14:55Z
19
16
transformers
[ "transformers", "opt", "text-generation", "en", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-04-07T07:16:31Z
--- language: en license: other commercial: no inference: false --- # OPT-13B-Erebus-4bit-128g ## Model description **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.** This is a 4-bit GPTQ quantization of OPT-13B-Erebus, original model: **https://huggingface.co/KoboldAI/OPT-13B-Erebus** ### Quantization Information Quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa ``` python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.pt python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Erebus/OPT-13B-Erebus-4bit-128g.safetensors ``` ### License OPT-13B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
keldenl/RedPajama-INCITE-Instruct-3B-v1-GGML
keldenl
2023-05-13T22:07:26Z
12
10
transformers
[ "transformers", "gpt_neox", "text-generation", "red_pajama", "en", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:Muennighoff/P3", "dataset:Muennighoff/natural-instructions", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-08T08:51:10Z
--- license: apache-2.0 language: - en datasets: - togethercomputer/RedPajama-Data-1T - Muennighoff/P3 - Muennighoff/natural-instructions pipeline_tag: text-generation tags: - gpt_neox - red_pajama --- **Original Model Link: https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1** This will NOT work with llama.cpp as of 5/13/2023, but this NOW works (5/13/2023) with the GGML in https://github.com/ggerganov/ggml/ via gpt-neox This also works in my project https://github.com/keldenl/gpt-llama.cpp (uses ggml as an InferenceEngine). # RedPajama-INCITE-Instruct-3B-v1 RedPajama-INCITE-Instruct-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION. The model was fine-tuned for few-shot applications on the data of [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1), with exclusion of tasks that overlap with the HELM core scenarios. ## Model Details - **Developed by**: Together Computer. - **Model type**: Language Model - **Language(s)**: English - **License**: Apache 2.0 - **Model Description**: A 2.8B parameter pretrained language model. ## Prompt Template To prompt the chat model, use a typical instruction format + few shot prompting, for example: ``` Paraphrase the given sentence into a different sentence. Input: Can you recommend some upscale restaurants in New York? Output: What upscale restaurants do you recommend in New York? Input: What are the famous places we should not miss in Paris? Output: Recommend some of the best places to visit in Paris? Input: Could you recommend some hotels that have cheap price in Zurich? Output: ``` ## Which model to download? * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues, see below. * The q5_0 file is using brand new 5bit method released 26th April. This is the 5bit equivalent of q4_0. * The q5_1 file is using brand new 5bit method released 26th April. This is the 5bit equivalent of q4_1.
lrthomps/Reinforce-CartPole-v1
lrthomps
2023-05-13T22:00:12Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T21:59:59Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
agestau/dummy-fashion-classification
agestau
2023-05-13T21:58:05Z
210
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-13T20:52:01Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: dummy-fashion-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-fashion-classification This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1122 - Accuracy: 0.9665 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3331 | 1.0 | 294 | 0.1725 | 0.9519 | | 0.296 | 2.0 | 588 | 0.1323 | 0.9591 | | 0.2484 | 3.0 | 882 | 0.1122 | 0.9665 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
scepter/pygmalion7b
scepter
2023-05-13T21:57:27Z
8
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-12T05:51:25Z
--- duplicated_from: gozfarb/pygmalion-7b-4bit-128g-cuda --- Quantized from https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b
MohammedNasri/whisper_large_ar
MohammedNasri
2023-05-13T21:49:43Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "ar", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-05-13T17:33:25Z
--- language: - ar license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper_large_v2_arabic results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: ar split: test args: 'config: ar, split: test' metrics: - name: Wer type: wer value: 12.773732872855585 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper_large_v2_arabic This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2119 - Wer: 12.7737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.0259 | 0.83 | 500 | 0.2119 | 12.7737 | ### Framework versions - Transformers 4.29.1 - Pytorch 1.13.1 - Datasets 2.12.0 - Tokenizers 0.13.3
kahlebr/1
kahlebr
2023-05-13T21:29:31Z
0
0
null
[ "summarization", "region:us" ]
summarization
2023-05-13T21:28:42Z
--- pipeline_tag: summarization ---
tatwan/ppo-LunarLander-v2
tatwan
2023-05-13T21:07:06Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T21:06:44Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 254.70 +/- 38.96 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below follows the common naming convention and is an assumption, so check this repository's files. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="tatwan/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
jojo0616/my_Misinformation_distilbert_model
jojo0616
2023-05-13T21:07:05Z
37
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-04-14T17:03:25Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: my_Misinformation_distilbert_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_Misinformation_distilbert_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1879 - Accuracy: 0.9661 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 214 | 0.1283 | 0.9544 | | No log | 2.0 | 428 | 0.1528 | 0.9498 | | 0.1645 | 3.0 | 642 | 0.1276 | 0.9685 | | 0.1645 | 4.0 | 856 | 0.1650 | 0.9614 | | 0.0306 | 5.0 | 1070 | 0.1653 | 0.9661 | | 0.0306 | 6.0 | 1284 | 0.1739 | 0.9673 | | 0.0306 | 7.0 | 1498 | 0.1771 | 0.9661 | | 0.0053 | 8.0 | 1712 | 0.1795 | 0.9661 | | 0.0053 | 9.0 | 1926 | 0.1860 | 0.9626 | | 0.0018 | 10.0 | 2140 | 0.1879 | 0.9661 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
tamhuynh27/roberta-base-fine-tuned-recipeqa-modified
tamhuynh27
2023-05-13T21:01:09Z
133
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-05-04T01:27:12Z
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-base-fine-tuned-recipeqa-modified results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-fine-tuned-recipeqa-modified This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the RecipeQA dataset, which has been modified for the extractive question answering task. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
gugomea/deportes
gugomea
2023-05-13T20:52:35Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-13T20:52:27Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Amirmnsh/ppo-LunarLander-v2
Amirmnsh
2023-05-13T20:11:44Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T20:11:26Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 260.63 +/- 17.14 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below follows the common naming convention and is an assumption, so check this repository's files. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="Amirmnsh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
damapika/roberta-base_ms-marco_mod
damapika
2023-05-13T19:56:43Z
43
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "dataset:generator", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-05-06T20:53:10Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: roberta-base_ms-marco_mod results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_ms-marco_mod This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 3.5359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.5498 | 1.0 | 18861 | 3.5603 | | 3.4253 | 2.0 | 37722 | 3.5359 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
dark844/alleymix
dark844
2023-05-13T18:37:50Z
0
0
nemo
[ "nemo", "art", "text-to-image", "en", "dataset:OpenAssistant/oasst1", "arxiv:1910.09700", "license:openrail", "region:us" ]
text-to-image
2023-05-13T18:15:34Z
--- license: openrail datasets: - OpenAssistant/oasst1 language: - en metrics: - accuracy library_name: nemo pipeline_tag: text-to-image tags: - art --- # Model Card for Model ID <!-- Provide a quick summalleymixary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bilal01/segformer-b0-finetuned-segments-test
bilal01
2023-05-13T18:36:07Z
162
0
transformers
[ "transformers", "pytorch", "tensorboard", "segformer", "vision", "image-segmentation", "generated_from_trainer", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-05-13T15:30:14Z
--- license: other tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-test This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the bilal01/stamp-verification-test dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
LarryAIDraw/HSR_Natasha4
LarryAIDraw
2023-05-13T18:28:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T18:15:55Z
--- license: creativeml-openrail-m --- https://civitai.com/models/64271/natashahonkai-star-rail
LarryAIDraw/tifa_lockhart_offset
LarryAIDraw
2023-05-13T18:27:41Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T18:17:45Z
--- license: creativeml-openrail-m --- https://civitai.com/models/6100/tifa-lockhart-lessall-outfitsgreater-lora
LarryAIDraw/Miorine-000009
LarryAIDraw
2023-05-13T18:27:27Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T18:16:58Z
--- license: creativeml-openrail-m --- https://civitai.com/models/64841/miorine-rembran-or-the-witch-from-mercury
LarryAIDraw/tomoe_koga-01
LarryAIDraw
2023-05-13T18:27:14Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T18:16:37Z
--- license: creativeml-openrail-m --- https://civitai.com/models/64850/koga-tomoe-from-bunny-girl-senpai
Tribbiani/robin-7b-v2
Tribbiani
2023-05-13T18:27:07Z
8
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "dataset:customized", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-13T17:47:37Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - customized model-index: - name: h34 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # h34 This model is a fine-tuned version of [pinkmanlove/llama-7b-hf](https://huggingface.co/pinkmanlove/llama-7b-hf) on the customized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.10.1 - Tokenizers 0.13.3
LarryAIDraw/SwiftsureMaidBikiniV1
LarryAIDraw
2023-05-13T18:26:55Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T18:16:14Z
--- license: creativeml-openrail-m --- https://civitai.com/models/64385/swiftsure-azur-lane-midsummer-special-service-swimsuit
LarryAIDraw/nodoka-01
LarryAIDraw
2023-05-13T18:26:21Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T18:15:34Z
--- license: creativeml-openrail-m --- https://civitai.com/models/64161/toyohama-nodoka-from-bunny-girl-senpai
LarryAIDraw/meinaalter_v1
LarryAIDraw
2023-05-13T18:25:38Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T18:05:54Z
--- license: creativeml-openrail-m --- https://civitai.com/models/20945?modelVersionId=24933
kasunw/ppo-PyramidsRND
kasunw
2023-05-13T18:23:38Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-05-13T18:23:33Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: kasunw/ppo-PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
KostiuchenkoArtem/my_bart_base_test_model
KostiuchenkoArtem
2023-05-13T18:15:31Z
60
0
transformers
[ "transformers", "tf", "bart", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-13T16:50:19Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: TianMu/my_bart_base_test_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TianMu/my_bart_base_test_model This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.4782 - Validation Loss: 1.6553 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.5517 | 1.6689 | 0 | | 1.4782 | 1.6553 | 1 | ### Framework versions - Transformers 4.29.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
ChrisOfLondon/Reinforce-Heli
ChrisOfLondon
2023-05-13T18:04:32Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T18:04:28Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Heli results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 41.20 +/- 28.25 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
nergaldarski/KojiV2
nergaldarski
2023-05-13T17:46:46Z
0
1
null
[ "region:us" ]
null
2023-05-13T17:31:01Z
CivitAI: https://civitai.com/models/41916/koji
vop020506/entregable2
vop020506
2023-05-13T17:43:44Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-13T16:47:38Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Trwgg/Rufinox
Trwgg
2023-05-13T17:35:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T17:23:22Z
--- license: creativeml-openrail-m ---
messerb5467/ppo-Huggy
messerb5467
2023-05-13T17:34:14Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-05-13T17:34:08Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: messerb5467/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
nergaldarski/hassakuV1.2
nergaldarski
2023-05-13T17:26:56Z
0
1
null
[ "region:us" ]
null
2023-05-13T17:11:16Z
CivitAI: https://civitai.com/models/2583?modelVersionId=62528
Johnhex/Clam1.1
Johnhex
2023-05-13T17:19:20Z
1
1
diffusers
[ "diffusers", "stable diffusion", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-04-19T14:58:35Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable diffusion ---
fgiauna/peft-lora-jul
fgiauna
2023-05-13T17:11:02Z
0
0
null
[ "tensorboard", "generated_from_trainer", "license:mit", "region:us" ]
null
2023-05-13T17:05:06Z
--- license: mit tags: - generated_from_trainer model-index: - name: peft-lora-jul results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # peft-lora-jul This model is a fine-tuned version of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0817 - Loc: {'precision': 0.5887445887445888, 'recall': 0.6296296296296297, 'f1': 0.6085011185682326, 'number': 216} - Misc: {'precision': 0.6111111111111112, 'recall': 0.275, 'f1': 0.3793103448275862, 'number': 40} - Org: {'precision': 0.7004830917874396, 'recall': 0.725, 'f1': 0.7125307125307125, 'number': 200} - Per: {'precision': 0.7540106951871658, 'recall': 0.7193877551020408, 'f1': 0.7362924281984334, 'number': 196} - Overall Precision: 0.6734 - Overall Recall: 0.6641 - Overall F1: 0.6687 - Overall Accuracy: 0.9772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
Multi-Domain-Expert-Learning/merged-pubmed-freelaw
Multi-Domain-Expert-Learning
2023-05-13T17:08:12Z
170
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "MDEL", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-05-13T16:47:30Z
--- tags: - MDEL --- # Model Name Multi-Domain-Expert-Layers/merged-pubmed-freelaw # Model Description This model was generated by averaging the weights of the following models - [Multi-Domain-Expert-Layers/expert-freelaw](https://huggingface.co/Multi-Domain-Expert-Layers/expert-freelaw) - [Multi-Domain-Expert-Layers/expert-pubmed_central](https://huggingface.co/Multi-Domain-Expert-Layers/expert-pubmed_central)
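The weight averaging described in this card can be written down concretely. The following is only a minimal sketch, not the authors' actual merging script; the two expert repository names are taken from the card, while the output directory and the choice to average only floating-point tensors are assumptions.

```python
from transformers import AutoModelForCausalLM

# Load the two expert checkpoints named in the card (both are GPT-NeoX models).
model_a = AutoModelForCausalLM.from_pretrained("Multi-Domain-Expert-Layers/expert-freelaw")
model_b = AutoModelForCausalLM.from_pretrained("Multi-Domain-Expert-Layers/expert-pubmed_central")

state_a, state_b = model_a.state_dict(), model_b.state_dict()
merged = {}
for name, tensor_a in state_a.items():
    tensor_b = state_b[name]
    if tensor_a.is_floating_point():
        merged[name] = (tensor_a + tensor_b) / 2.0  # unweighted mean of the two experts
    else:
        merged[name] = tensor_a  # leave integer/bool buffers untouched

model_a.load_state_dict(merged)                   # reuse one expert as the container
model_a.save_pretrained("merged-pubmed-freelaw")  # hypothetical output directory
```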
vitouphy/wav2vec2-xls-r-300m-phoneme
vitouphy
2023-05-13T17:04:45Z
60,319
3
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-19T03:03:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-300m-phoneme results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-phoneme This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3327 - Cer: 0.1332 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 7000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4324 | 1.32 | 1000 | 3.3693 | 0.9091 | | 2.1751 | 2.65 | 2000 | 1.1382 | 0.2397 | | 1.3986 | 3.97 | 3000 | 0.4886 | 0.1452 | | 1.2285 | 5.3 | 4000 | 0.3842 | 0.1351 | | 1.142 | 6.62 | 5000 | 0.3505 | 0.1349 | | 1.1075 | 7.95 | 6000 | 0.3323 | 0.1317 | | 1.0867 | 9.27 | 7000 | 0.3265 | 0.1315 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
vitouphy/wav2vec2-xls-r-300m-timit-phoneme
vitouphy
2023-05-13T17:04:31Z
9,515
28
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "generated_from_trainer", "doi:10.57967/hf/0125", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-05-08T06:41:55Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - pytorch - transformers - en - generated_from_trainer model-index: - name: wav2vec2-xls-r-300m-phoneme results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: DARPA TIMIT type: timit args: en metrics: - name: Test CER type: cer value: 7.996 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## Model This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Timit dataset. Check [this notebook](https://www.kaggle.com/code/vitouphy/phoneme-recognition-with-wav2vec2) for training detail. ## Usage **Approach 1:** Using HuggingFace's pipeline, this will cover everything end-to-end from raw audio input to text output. ```python from transformers import pipeline # Load the model pipe = pipeline(model="vitouphy/wav2vec2-xls-r-300m-timit-phoneme") # Process raw audio output = pipe("audio_file.wav", chunk_length_s=10, stride_length_s=(4, 2)) ``` **Approach 2:** More custom way to predict phonemes. ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch import soundfile as sf # load model and processor processor = Wav2Vec2Processor.from_pretrained("vitouphy/wav2vec2-xls-r-300m-timit-phoneme") model = Wav2Vec2ForCTC.from_pretrained("vitouphy/wav2vec2-xls-r-300m-timit-phoneme") # Read and process the input audio_input, sample_rate = sf.read("audio_file.wav") inputs = processor(audio_input, sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits # Decode id into string predicted_ids = torch.argmax(logits, axis=-1) predicted_sentences = processor.batch_decode(predicted_ids) print(predicted_sentences) ``` ## Training and evaluation data We use [DARPA TIMIT dataset](https://www.kaggle.com/datasets/mfekadu/darpa-timit-acousticphonetic-continuous-speech) for this model. - We split into **80/10/10** for training, validation, and testing respectively. - That roughly corresponds to about **137/17/17** minutes. - The model obtained **7.996%** on this test set. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 10000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0 ### Citation ``` @misc { phy22-phoneme, author = {Phy, Vitou}, title = {{Automatic Phoneme Recognition on TIMIT Dataset with Wav2Vec 2.0}}, year = 2022, note = {{If you use this model, please cite it using these metadata.}}, publisher = {Hugging Face}, version = {1.0}, doi = {10.57967/hf/0125}, url = {https://huggingface.co/vitouphy/wav2vec2-xls-r-300m-timit-phoneme} } ```
vitouphy/wav2vec2-xls-r-300m-english
vitouphy
2023-05-13T17:04:05Z
94
3
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "en", "generated_from_trainer", "hf-asr-leaderboard", "librispeech_asr", "robust-speech-event", "dataset:librispeech_asr", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: - en license: apache-2.0 tags: - automatic-speech-recognition - en - generated_from_trainer - hf-asr-leaderboard - librispeech_asr - robust-speech-event datasets: - librispeech_asr model-index: - name: XLS-R-300M - English results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 12.29 - name: Test CER type: cer value: 3.34 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: en metrics: - name: Validation WER type: wer value: 36.75 - name: Validation CER type: cer value: 14.83 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 8.0 type: mozilla-foundation/common_voice_8_0 config: en split: test args: language: en metrics: - name: Test WER type: wer value: 37.81 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: en metrics: - name: Test WER type: wer value: 38.8 --- # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 0.1444 - Wer: 0.1167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9365 | 4.17 | 500 | 2.9398 | 0.9999 | | 1.5444 | 8.33 | 1000 | 0.5947 | 0.4289 | | 1.1367 | 12.5 | 1500 | 0.2751 | 0.2366 | | 0.9972 | 16.66 | 2000 | 0.2032 | 0.1797 | | 0.9118 | 20.83 | 2500 | 0.1786 | 0.1479 | | 0.8664 | 24.99 | 3000 | 0.1641 | 0.1408 | | 0.8251 | 29.17 | 3500 | 0.1537 | 0.1267 | | 0.793 | 33.33 | 4000 | 0.1525 | 0.1244 | | 0.785 | 37.5 | 4500 | 0.1470 | 0.1184 | | 0.7612 | 41.66 | 5000 | 0.1446 | 0.1177 | | 0.7478 | 45.83 | 5500 | 0.1449 | 0.1176 | | 0.7443 | 49.99 | 6000 | 0.1444 | 0.1167 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
akaneshiro/q-Taxi-v3
akaneshiro
2023-05-13T16:46:31Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T16:46:29Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="akaneshiro/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
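The `load_from_hub` helper used in the snippet above is not defined in the card (in the Deep RL course it is a notebook utility). A self-contained equivalent using `huggingface_hub` directly could look like the sketch below; the `"qtable"` key name is an assumption (only `"env_id"` appears in the card's own snippet), so inspect the loaded dict to confirm.

```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download the pickled artifact from the Hub and load it.
path = hf_hub_download(repo_id="akaneshiro/q-Taxi-v3", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])  # "env_id" is used in the card's own snippet
qtable = model["qtable"]         # key name assumed; check the dict's keys

# Roll out one greedy episode with the loaded Q-table.
state, _ = env.reset()
done = False
while not done:
    action = int(qtable[state].argmax())
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```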
vop020506/hojas_uva
vop020506
2023-05-13T16:37:52Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-13T16:37:49Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
messerb5467/ppo-LunarLander-v2
messerb5467
2023-05-13T16:30:17Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T16:29:57Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.83 +/- 18.72 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename below follows the common naming convention and is an assumption, so check this repository's files. ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="messerb5467/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
LarryAIDraw/shiina_mashiro_v1
LarryAIDraw
2023-05-13T16:18:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-04-28T06:14:19Z
--- license: creativeml-openrail-m --- https://civitai.com/models/50851?modelVersionId=55367
walter2/imdb_model2
walter2
2023-05-13T16:10:17Z
59
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-13T16:09:15Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: imdb_model2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # imdb_model2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0909 - Validation Loss: 0.0370 - Train Accuracy: 0.992 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 625, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.4551 | 0.2036 | 0.924 | 0 | | 0.1960 | 0.1091 | 0.976 | 1 | | 0.0909 | 0.0370 | 0.992 | 2 | ### Framework versions - Transformers 4.29.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
Staticaliza/TestModel
Staticaliza
2023-05-13T15:55:56Z
0
0
null
[ "aa", "dataset:databricks/databricks-dolly-15k", "license:openrail", "region:us" ]
null
2023-05-13T15:50:06Z
--- license: openrail datasets: - databricks/databricks-dolly-15k language: - aa ---
LukeMich/my_awesome_model
LukeMich
2023-05-13T15:38:44Z
60
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-13T15:03:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: LukeMich/my_awesome_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # LukeMich/my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7595 - Validation Loss: 0.5417 - Train Accuracy: 0.8762 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 275, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 1.0643 | 0.8268 | 0.6571 | 0 | | 0.7595 | 0.5417 | 0.8762 | 1 | ### Framework versions - Transformers 4.29.1 - TensorFlow 2.12.0 - Datasets 2.12.0 - Tokenizers 0.13.3
divers/e2e-flan-large-noscore-totalds
divers
2023-05-13T15:10:50Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-02T05:34:46Z
<table> <thead> <tr> <th>Epoch</th> <th>Training Loss</th> <th>Validation Loss</th> <th>Rouge1</th> <th>Rouge2</th> <th>Rougel</th> <th>Rougelsum</th> <th>Gen Len</th> </tr> </thead> <tr> <td>0</td> <td>0.630300</td> <td>0.412157</td> <td>0.417600</td> <td>0.263800</td> <td>0.332800</td> <td>0.406200</td> <td>794.000000</td> </tr> <tr> <td>1</td> <td>0.445600</td> <td>0.371808</td> <td>0.516700</td> <td>0.336200</td> <td>0.415500</td> <td>0.508000</td> <td>560.642900</td> </tr> <tr> <td>2</td> <td>0.398800</td> <td>0.350914</td> <td>0.562700</td> <td>0.375400</td> <td>0.443900</td> <td>0.552700</td> <td>523.714300</td> </tr> <tr> <td>4</td> <td>0.350600</td> <td>0.334888</td> <td>0.553300</td> <td>0.364900</td> <td>0.427100</td> <td>0.538800</td> <td>464.035700</td> </tr> <tr> <td>5</td> <td>0.334300</td> <td>0.326556</td> <td>0.552100</td> <td>0.361400</td> <td>0.429900</td> <td>0.540300</td> <td>517.821400</td> </tr> <tr> <td>6</td> <td>0.322300</td> <td>0.321693</td> <td>0.596600</td> <td>0.400800</td> <td>0.469400</td> <td>0.586400</td> <td>414.892900</td> </tr> <tr> <td>8</td> <td>0.308800</td> <td>0.321562</td> <td>0.594200</td> <td>0.389100</td> <td>0.458500</td> <td>0.581800</td> <td>401.357100</td> </tr> <tr> <td>8</td> <td>0.300100</td> <td>0.319800</td> <td>0.586200</td> <td>0.376100</td> <td>0.453400</td> <td>0.571500</td> <td>381.357100</td> </tr> <tr> <td>9</td> <td>0.291200</td> <td>0.319443</td> <td>0.611500</td> <td>0.399600</td> <td>0.468600</td> <td>0.597500</td> <td>368.821400</td> </tr> <tr> <td>10</td> <td>0.282900</td> <td>0.318927</td> <td>0.593200</td> <td>0.388700</td> <td>0.459100</td> <td>0.579800</td> <td>354.285700</td> </tr> <tr> <td>12</td> <td>0.273700</td> <td>0.319651</td> <td>0.594000</td> <td>0.394200</td> <td>0.457000</td> <td>0.580800</td> <td>386.785700</td> </tr> <tr> <td>12</td> <td>0.268100</td> <td>0.315178</td> <td>0.603700</td> <td>0.396100</td> <td>0.465300</td> <td>0.588500</td> <td>365.714300</td> </tr> <tr> <td>13</td> <td>0.262000</td> <td>0.312819</td> <td>0.601500</td> <td>0.402800</td> <td>0.471700</td> <td>0.586000</td> <td>377.250000</td> </tr> <tr> <td>14</td> <td>0.254900</td> <td>0.316255</td> <td>0.601200</td> <td>0.397600</td> <td>0.469700</td> <td>0.587900</td> <td>353.071400</td> </tr> <tr> <td>16</td> <td>0.248500</td> <td>0.316413</td> <td>0.610300</td> <td>0.407900</td> <td>0.476000</td> <td>0.597400</td> <td>341.464300</td> </tr> <tr> <td>16</td> <td>0.243600</td> <td>0.315982</td> <td>0.611400</td> <td>0.404900</td> <td>0.483200</td> <td>0.598300</td> <td>379.571400</td> </tr> <tr> <td>17</td> <td>0.238900</td> <td>0.318108</td> <td>0.608100</td> <td>0.408200</td> <td>0.486100</td> <td>0.594000</td> <td>375.964300</td> </tr> <tr> <td>18</td> <td>0.233900</td> <td>0.317792</td> <td>0.600200</td> <td>0.406300</td> <td>0.471700</td> <td>0.587600</td> <td>346.964300</td> </tr> <tr> <td>19</td> <td>0.229600</td> <td>0.322435</td> <td>0.599100</td> <td>0.407100</td> <td>0.479600</td> <td>0.586600</td> <td>362.571400</td> </tr> </table>
divers/flan-base-req-extractor
divers
2023-05-13T15:01:01Z
3
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-05T20:23:32Z
<table> <tr> <th>Epoch</th> <th>Training Loss</th> <th>Validation Loss</th> <th>Rouge1</th> <th>Rouge2</th> <th>Rougel</th> <th>Rougelsum</th> <th>Gen Len</th> </tr> <tr> <td>0</td> <td>0.357300</td> <td>0.280200</td> <td>0.732700</td> <td>0.685700</td> <td>0.695100</td> <td>0.700500</td> <td>303.733300</td> </tr> <tr> <td>2</td> <td>0.257200</td> <td>0.244938</td> <td>0.742900</td> <td>0.702100</td> <td>0.712600</td> <td>0.717700</td> <td>330.200000</td> </tr> <tr> <td>2</td> <td>0.229900</td> <td>0.230673</td> <td>0.789800</td> <td>0.747500</td> <td>0.759500</td> <td>0.765300</td> <td>267.666700</td> </tr> <tr> <td>4</td> <td>0.209900</td> <td>0.213156</td> <td>0.800300</td> <td>0.759900</td> <td>0.766400</td> <td>0.771700</td> <td>274.466700</td> </tr> <tr> <td>4</td> <td>0.196200</td> <td>0.207821</td> <td>0.782800</td> <td>0.745000</td> <td>0.754900</td> <td>0.756200</td> <td>288.333300</td> </tr> <tr> <td>6</td> <td>0.183900</td> <td>0.203908</td> <td>0.752000</td> <td>0.715000</td> <td>0.726300</td> <td>0.727100</td> <td>309.755600</td> </tr> <tr> <td>6</td> <td>0.174500</td> <td>0.203386</td> <td>0.786100</td> <td>0.743400</td> <td>0.750800</td> <td>0.756200</td> <td>252.422200</td> </tr> <tr> <td>8</td> <td>0.165500</td> <td>0.190161</td> <td>0.771100</td> <td>0.733500</td> <td>0.735600</td> <td>0.740400</td> <td>292.288900</td> </tr> <tr> <td>8</td> <td>0.158300</td> <td>0.192600</td> <td>0.774900</td> <td>0.737300</td> <td>0.743300</td> <td>0.744300</td> <td>285.800000</td> </tr> <tr> <td>9</td> <td>0.152200</td> <td>0.192426</td> <td>0.795200</td> <td>0.758900</td> <td>0.754700</td> <td>0.759000</td> <td>284.266700</td> </tr> <tr> <td>15</td> <td>0.124400</td> <td>0.182381</td> <td>0.787800</td> <td>0.742800</td> <td>0.745100</td> <td>0.746900</td> <td>274.533300</td> </tr> <tr> <td>17</td> <td>0.120300</td> <td>0.183192</td> <td>0.779400</td> <td>0.739000</td> <td>0.734500</td> <td>0.739300</td> <td>289.266700</td> </tr> </table>
divers/ans-scorer-flan-large
divers
2023-05-13T14:48:52Z
6
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-04-24T09:32:14Z
<table> <thead> <tr> <th>Epoch</th> <th>Training Loss</th> <th>Validation Loss</th> <th>Rouge1</th> <th>Rouge2</th> <th>Rougel</th> <th>Rougelsum</th> <th>Gen Len</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0.128000</td> <td>0.069041</td> <td>0.932900</td> <td>0.876200</td> <td>0.924700</td> <td>0.926800</td> <td>16.829500</td> </tr> <tr> <td>1</td> <td>0.073900</td> <td>0.061863</td> <td>0.935600</td> <td>0.881900</td> <td>0.927700</td> <td>0.929800</td> <td>16.827600</td> </tr> <tr> <td>2</td> <td>0.065500</td> <td>0.062592</td> <td>0.932900</td> <td>0.876200</td> <td>0.924800</td> <td>0.927300</td> <td>16.825100</td> </tr> <tr> <td>3</td> <td>0.059700</td> <td>0.058368</td> <td>0.935900</td> <td>0.883800</td> <td>0.928100</td> <td>0.930100</td> <td>16.818100</td> </tr> <tr> <td>4</td> <td>0.055200</td> <td>0.057483</td> <td>0.936500</td> <td>0.887600</td> <td>0.930200</td> <td>0.932200</td> <td>16.825100</td> </tr> <tr> <td>5</td> <td>0.051500</td> <td>0.058953</td> <td>0.937300</td> <td>0.887200</td> <td>0.929800</td> <td>0.931600</td> <td>16.826300</td> </tr> </tbody> </table>
andyssj/entregable2
andyssj
2023-05-13T14:40:29Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-05-13T14:40:26Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Abrumu/output
Abrumu
2023-05-13T14:18:18Z
5
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "controlnet", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-05-12T16:02:42Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - controlnet inference: true --- # controlnet-Abrumu/output These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
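Since the card does not show how to load these weights, the following is a minimal, hedged sketch using the diffusers ControlNet pipeline. The repository id and base model come from the card; the conditioning image URL, prompt, and generation settings are placeholders and should be replaced with inputs that match whatever conditioning this ControlNet was trained on.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights from this repo and attach them to the SD 1.5 base model.
controlnet = ControlNetModel.from_pretrained("Abrumu/output", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Placeholder conditioning image and prompt.
condition = load_image("https://example.com/conditioning.png")
result = pipe("a prompt describing the target image", image=condition, num_inference_steps=30)
result.images[0].save("output.png")
```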
Yahiael1/mymodel_v2_4
Yahiael1
2023-05-13T14:11:08Z
104
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-05-13T13:48:07Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: mymodel_v2_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mymodel_v2_4 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1383 - Rouge1: 0.5107 - Rouge2: 0.1818 - Rougel: 0.4557 - Rougelsum: 0.4753 - Gen Len: 19.4327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 111 | 1.6651 | 1.0836 | 0.9742 | 1.076 | 1.0681 | 19.4 | | No log | 2.0 | 222 | 1.6632 | 0.5545 | 0.3924 | 0.5312 | 0.5302 | 19.5855 | | No log | 3.0 | 333 | 1.7607 | 0.7463 | 0.5905 | 0.7663 | 0.7512 | 19.6982 | | No log | 4.0 | 444 | 1.8583 | 0.8352 | 0.7153 | 0.8546 | 0.8534 | 19.7018 | | 1.4574 | 5.0 | 555 | 1.9357 | 0.659 | 0.6196 | 0.6745 | 0.6962 | 19.3273 | | 1.4574 | 6.0 | 666 | 2.0241 | 0.4785 | 0.4545 | 0.4878 | 0.4997 | 19.6036 | | 1.4574 | 7.0 | 777 | 2.0663 | 0.2327 | 0.1818 | 0.2741 | 0.2741 | 19.2327 | | 1.4574 | 8.0 | 888 | 2.0969 | 0.3755 | 0.2916 | 0.3915 | 0.3956 | 19.4545 | | 1.4574 | 9.0 | 999 | 2.1291 | 0.7743 | 0.5592 | 0.7473 | 0.7881 | 19.3964 | | 0.3529 | 10.0 | 1110 | 2.1383 | 0.5107 | 0.1818 | 0.4557 | 0.4753 | 19.4327 | ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.13.3
intanm/mlm-20230513-indobert-large-p1-002-pt1
intanm
2023-05-13T14:10:09Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-05-13T13:19:03Z
--- license: mit tags: - generated_from_trainer model-index: - name: mlm-20230513-indobert-large-p1-002-pt1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mlm-20230513-indobert-large-p1-002-pt1 This model is a fine-tuned version of [indobenchmark/indobert-large-p1](https://huggingface.co/indobenchmark/indobert-large-p1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1513 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 284 | 3.7844 | | 4.4953 | 2.0 | 568 | 3.0374 | | 4.4953 | 3.0 | 852 | 2.7386 | | 2.9063 | 4.0 | 1136 | 2.5432 | | 2.9063 | 5.0 | 1420 | 2.3463 | | 2.4449 | 6.0 | 1704 | 2.3084 | | 2.4449 | 7.0 | 1988 | 2.2064 | | 2.2361 | 8.0 | 2272 | 2.1498 | | 2.1263 | 9.0 | 2556 | 2.1531 | | 2.1263 | 10.0 | 2840 | 2.1542 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
aalonso-developer/vit-base-patch16-224-in21k-euroSat
aalonso-developer
2023-05-13T14:02:42Z
62
0
transformers
[ "transformers", "tf", "tensorboard", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-13T13:19:23Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: aalonso-developer/vit-base-patch16-224-in21k-euroSat results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aalonso-developer/vit-base-patch16-224-in21k-euroSat This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0212 - Train Accuracy: 0.9992 - Train Top-3-accuracy: 1.0000 - Validation Loss: 0.0613 - Validation Accuracy: 0.9864 - Validation Top-3-accuracy: 0.9998 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3590, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 0.4737 | 0.9429 | 0.9862 | 0.1568 | 0.9788 | 0.9993 | 0 | | 0.0998 | 0.9878 | 0.9996 | 0.1010 | 0.9805 | 0.9993 | 1 | | 0.0503 | 0.9946 | 0.9999 | 0.0720 | 0.9857 | 0.9998 | 2 | | 0.0297 | 0.9978 | 1.0000 | 0.0606 | 0.9881 | 0.9995 | 3 | | 0.0212 | 0.9992 | 1.0000 | 0.0613 | 0.9864 | 0.9998 | 4 | ### Framework versions - Transformers 4.29.1 - TensorFlow 2.11.0 - Datasets 2.12.0 - Tokenizers 0.13.3
bdsqlsz/FaceBeauty
bdsqlsz
2023-05-13T13:50:06Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T13:47:22Z
--- license: creativeml-openrail-m ---
gokulh/ppo-LunarLander-v2
gokulh
2023-05-13T13:49:06Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T13:48:42Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 194.18 +/- 78.91 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
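A minimal loading-and-evaluation sketch for the TODO stub above (hedged: the checkpoint filename and the Gym/Gymnasium import are assumptions — check the repo's file list and your installed stable-baselines3 version):

```python
import gymnasium as gym  # use `import gym` instead for stable-baselines3 < 2.0
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Download the checkpoint; the filename below is hypothetical -- verify it in the repo.
checkpoint = load_from_hub(repo_id="gokulh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded agent for a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```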
knlpscience/xlm-roberta-base-finetuned-panx-de
knlpscience
2023-05-13T13:07:43Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-05-13T13:02:54Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: validation args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8609120891618334 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1400 - F1: 0.8609 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2581 | 1.0 | 525 | 0.1584 | 0.8233 | | 0.1252 | 2.0 | 1050 | 0.1384 | 0.8491 | | 0.0811 | 3.0 | 1575 | 0.1400 | 0.8609 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
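A short inference sketch for this PAN-X.de NER checkpoint (hedged: it assumes the tokenizer files were pushed alongside the fine-tuned weights, which the auto-generated card does not state):

```python
from transformers import pipeline

# German named-entity recognition; aggregation merges word-piece predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="knlpscience/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```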
JustFrederik/m2m_100_418m_ct2_int8
JustFrederik
2023-05-13T13:07:09Z
5
0
transformers
[ "transformers", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-05-13T13:06:12Z
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit --- https://huggingface.co/facebook/m2m100_418M <br /> https://github.com/facebookresearch/fairseq/tree/nllb/examples/m2m_100 ``` ct2-fairseq-converter --data_dir . --model_path 418M_last_checkpoint.pt --fixed_dictionary model_dict.128k.txt --quantization int8 --output_dir converted/m2m_100_418m_ct2_int8 ``` External language dictionary is not provided; use lang-pairs to infer the set of supported languages. The language ordering is not stable which might cause misalignment in pretraining and finetuning. ``` wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt # 418M parameter model wget https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt ```
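A hedged usage sketch for the converted model (the model directory matches the converter command above; the shared SentencePiece model `spm.128k.model` from the fairseq M2M-100 release is assumed to be downloaded separately):

```python
import ctranslate2
import sentencepiece as spm

# Load the CTranslate2 model produced by ct2-fairseq-converter and the shared SentencePiece model.
translator = ctranslate2.Translator("converted/m2m_100_418m_ct2_int8", device="cpu")
sp = spm.SentencePieceProcessor(model_file="spm.128k.model")

# M2M-100 expects a source-language token in front of the source and a target-language prefix.
source = ["__en__"] + sp.encode("Hello, how are you?", out_type=str)
results = translator.translate_batch([source], target_prefix=[["__de__"]])

# Drop the leading target-language token before detokenizing.
print(sp.decode(results[0].hypotheses[0][1:]))
```

The same pattern should apply to the other CTranslate2 conversions listed below (418M/1.2B, float16/int8); only the model directory changes.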
JustFrederik/m2m_100_418m_ct2
JustFrederik
2023-05-13T13:03:58Z
4
0
transformers
[ "transformers", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-05-13T13:01:28Z
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit --- https://huggingface.co/facebook/m2m100_418M <br /> https://github.com/facebookresearch/fairseq/tree/nllb/examples/m2m_100 ``` ct2-fairseq-converter --data_dir . --model_path 418M_last_checkpoint.pt --fixed_dictionary model_dict.128k.txt --output_dir converted/m2m_100_418m_ct2 ``` External language dictionary is not provided; use lang-pairs to infer the set of supported languages. The language ordering is not stable which might cause misalignment in pretraining and finetuning. ``` wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt # 418M parameter model wget https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt ```
JustFrederik/m2m_100_1.2b_ct2_float16
JustFrederik
2023-05-13T13:00:25Z
3
0
transformers
[ "transformers", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-05-13T12:53:19Z
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit --- https://huggingface.co/facebook/m2m100_1.2B <br /> https://github.com/facebookresearch/fairseq/tree/nllb/examples/m2m_100 ``` ct2-fairseq-converter --data_dir . --model_path 1.2B_last_checkpoint.pt --fixed_dictionary model_dict.128k.txt --quantization float16 --output_dir converted/m2m_100_1.2b_ct2_float16 ``` External language dictionary is not provided; use lang-pairs to infer the set of supported languages. The language ordering is not stable which might cause misalignment in pretraining and finetuning. ``` wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt # 1.2B parameter model wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt ```
JustFrederik/m2m_100_1.2b_ct2_int8
JustFrederik
2023-05-13T12:59:21Z
2
0
transformers
[ "transformers", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-05-13T12:57:25Z
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit --- https://huggingface.co/facebook/m2m100_1.2B <br /> https://github.com/facebookresearch/fairseq/tree/nllb/examples/m2m_100 ``` ct2-fairseq-converter --data_dir . --model_path 1.2B_last_checkpoint.pt --fixed_dictionary model_dict.128k.txt --quantization int8 --output_dir converted/m2m_100_1.2b_ct2_int8 ``` External language dictionary is not provided; use lang-pairs to infer the set of supported languages. The language ordering is not stable which might cause misalignment in pretraining and finetuning. ``` wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt # 1.2B parameter model wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt ```
JustFrederik/m2m_100_1.2b_ct2
JustFrederik
2023-05-13T12:52:25Z
2
0
transformers
[ "transformers", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-05-13T12:45:46Z
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit --- https://huggingface.co/facebook/m2m100_1.2B <br /> https://github.com/facebookresearch/fairseq/tree/nllb/examples/m2m_100 ``` ct2-fairseq-converter --data_dir . --model_path 1.2B_last_checkpoint.pt --fixed_dictionary model_dict.128k.txt --output_dir converted/m2m_100_1.2b_ct2 ``` External language dictionary is not provided; use lang-pairs to infer the set of supported languages. The language ordering is not stable which might cause misalignment in pretraining and finetuning. ``` wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt # 1.2B parameter model wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt ```
autobots/pygmalion_6b_roleplay_lora
autobots
2023-05-13T12:45:35Z
0
3
null
[ "region:us" ]
null
2023-05-13T12:37:53Z
Trained in 4-bit on pygmalion-6b as a proof of concept. Uses the GPTeacher roleplay dataset. ``` INFO:Getting model ready... INFO:Prepping for training... INFO:Creating LoRA model... INFO:Starting training... {'loss': 12.5737, 'learning_rate': 0.0002926829268292683, 'epoch': 0.33} {'loss': 8.5515, 'learning_rate': 0.0002560975609756097, 'epoch': 0.67} {'loss': 7.5768, 'learning_rate': 0.0002195121951219512, 'epoch': 1.0} {'loss': 6.9769, 'learning_rate': 0.00018292682926829266, 'epoch': 1.33} {'loss': 6.6842, 'learning_rate': 0.00014634146341463414, 'epoch': 1.66} {'loss': 6.3925, 'learning_rate': 0.0001097560975609756, 'epoch': 2.0} {'loss': 6.041, 'learning_rate': 7.317073170731707e-05, 'epoch': 2.33} {'loss': 5.6818, 'learning_rate': 3.6585365853658535e-05, 'epoch': 2.66} {'loss': 5.4639, 'learning_rate': 0.0, 'epoch': 2.99} {'train_runtime': 960.7748, 'train_samples_per_second': 6.005, 'train_steps_per_second': 0.047, 'train_loss': 7.326934729682074, 'epoch': 2.99} INFO:LoRA training run is completed and saved. INFO:Training complete! ``` I used the electricity, so I might as well post it.
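A hedged loading sketch (assumptions: the repo contains a standard PEFT adapter, and `PygmalionAI/pygmalion-6b` is the intended base checkpoint — the card only says "pygmalion-6b"):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in half precision, then attach the roleplay LoRA on top of it.
base_id = "PygmalionAI/pygmalion-6b"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "autobots/pygmalion_6b_roleplay_lora")

# Sample a short roleplay continuation.
prompt = "You are a knight guarding a castle gate.\nTraveler: May I pass?\nKnight:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```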
Tingwen/ppo-Huggy
Tingwen
2023-05-13T12:39:54Z
12
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-05-13T11:57:05Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Find your model_id: Tingwen/ppo-Huggy 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
paolorechia/wizard-lm-7b-react-medium-tasks-dirty-lora
paolorechia
2023-05-13T12:21:54Z
0
2
null
[ "license:other", "region:us" ]
null
2023-05-13T11:54:42Z
--- license: other --- This is a LLaMA LoRA fine-tuned on top of WizardLM-7B with this dataset: https://huggingface.co/datasets/paolorechia/medium-size-generated-tasks It's meant mostly as a proof of concept to see how fine-tuning may improve the performance of coding agents that rely on the LangChain framework. To use this LoRA, you can use my repo as a starting point: https://github.com/paolorechia/learn-langchain
smile367/task_qa_distilbert
smile367
2023-05-13T11:46:46Z
109
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-05-13T10:45:01Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: task_qa_distilbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # task_qa_distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6780 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.4280 | | 2.763 | 2.0 | 500 | 1.7672 | | 2.763 | 3.0 | 750 | 1.6780 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
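A brief inference sketch (hedged: it assumes the tokenizer was uploaded alongside the fine-tuned weights, which the auto-generated card does not confirm):

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned DistilBERT checkpoint.
qa = pipeline("question-answering", model="smile367/task_qa_distilbert")
result = qa(
    question="What library was used for fine-tuning?",
    context="The model was fine-tuned with the Hugging Face Transformers Trainer for three epochs.",
)
print(result["answer"], result["score"])
```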
leireher/BookGenrePredictionDBERT
leireher
2023-05-13T11:44:50Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "multilabel", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-11T15:58:45Z
--- language: - en metrics: - f1 pipeline_tag: text-classification tags: - multilabel ---
AlekseyKorshuk/roberta-with-topic
AlekseyKorshuk
2023-05-13T11:09:06Z
7
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-13T07:58:23Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-with-topic results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-with-topic This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5283 - Ndcg: 0.4453 - Accuracy: 0.2941 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Ndcg | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:| | 1.5951 | 0.07 | 413 | 1.5693 | 0.4220 | 0.2766 | | 1.5721 | 0.13 | 826 | 1.5537 | 0.4308 | 0.2828 | | 1.5594 | 0.2 | 1239 | 1.5615 | 0.4236 | 0.2757 | | 1.5753 | 0.27 | 1652 | 1.5645 | 0.4272 | 0.2778 | | 1.5778 | 0.33 | 2065 | 1.5859 | 0.3736 | 0.2430 | | 1.5673 | 0.4 | 2478 | 1.5576 | 0.4262 | 0.2812 | | 1.5633 | 0.47 | 2891 | 1.5557 | 0.4294 | 0.2815 | | 1.5606 | 0.53 | 3304 | 1.5459 | 0.4321 | 0.2836 | | 1.5476 | 0.6 | 3717 | 1.5508 | 0.4269 | 0.2810 | | 1.552 | 0.67 | 4130 | 1.5479 | 0.4302 | 0.2831 | | 1.5469 | 0.73 | 4543 | 1.5430 | 0.4345 | 0.2882 | | 1.5538 | 0.8 | 4956 | 1.5410 | 0.4371 | 0.2877 | | 1.557 | 0.87 | 5369 | 1.5420 | 0.4368 | 0.2896 | | 1.5427 | 0.93 | 5782 | 1.5449 | 0.4269 | 0.2814 | | 1.5427 | 1.0 | 6195 | 1.5381 | 0.4380 | 0.2896 | | 1.5469 | 1.07 | 6608 | 1.5381 | 0.4362 | 0.2849 | | 1.5369 | 1.13 | 7021 | 1.5361 | 0.4383 | 0.2895 | | 1.5465 | 1.2 | 7434 | 1.5361 | 0.4415 | 0.2940 | | 1.5433 | 1.27 | 7847 | 1.5342 | 0.4399 | 0.2914 | | 1.5355 | 1.33 | 8260 | 1.5342 | 0.4409 | 0.2937 | | 1.5363 | 1.4 | 8673 | 1.5342 | 0.4414 | 0.2923 | | 1.5372 | 1.47 | 9086 | 1.5312 | 0.4440 | 0.2949 | | 1.5452 | 1.53 | 9499 | 1.5303 | 0.4439 | 0.2937 | | 1.5386 | 1.6 | 9912 | 1.5293 | 0.4434 | 0.2915 | | 1.5314 | 1.67 | 10325 | 1.5303 | 0.4443 | 0.2925 | | 1.5216 | 1.73 | 10738 | 1.5293 | 0.4447 | 0.2930 | | 1.5341 | 1.8 | 11151 | 1.5293 | 0.4450 | 0.2929 | | 1.5315 | 1.87 | 11564 | 1.5283 | 0.4456 | 0.2947 | | 1.5345 | 1.93 | 11977 | 1.5283 | 0.4455 | 0.2950 | | 1.5238 | 2.0 | 12390 | 1.5283 | 0.4453 | 0.2941 | ### Framework versions - Transformers 4.29.1 - Pytorch 2.0.0-rc1 - Datasets 2.12.0 - Tokenizers 0.13.3
hugogeraldes/q-FrozenLake-v1-4x4-noSlippery
hugogeraldes
2023-05-13T10:52:35Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T10:52:32Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="hugogeraldes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
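A self-contained version of the usage snippet above (hedged: `load_from_hub` is a course helper, so this sketch re-implements it with `hf_hub_download`; the `"qtable"` key and the Gymnasium step API are assumptions taken from the course's model format):

```python
import pickle
import numpy as np
import gymnasium as gym  # the snippet above uses `gym`; adjust the import and reset/step API to your install
from huggingface_hub import hf_hub_download

# Download and unpickle the saved model dictionary.
path = hf_hub_download(repo_id="hugogeraldes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Recreate the environment; is_slippery=False matches the "no_slippery" variant named in the card.
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout with the learned Q-table ("qtable" key assumed).
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```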
OKTAN94/293Ulzzangmodelsampe
OKTAN94
2023-05-13T10:45:22Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-13T10:34:57Z
--- license: creativeml-openrail-m ---
Neronuser/Reinforce-helicopter
Neronuser
2023-05-13T10:38:02Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-05-13T10:37:58Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-helicopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 30.50 +/- 25.11 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction