Dataset columns (type and value range):

| Column | Type | Range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-06 00:40:20 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 468 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-06 00:38:53 |
| card | string | length 11 – 1.01M |
hafidikhsan/distilbert-base-uncased-english-cefr-lexical-evaluation-dt-v1
hafidikhsan
2023-07-22T10:45:27Z
105
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-22T10:44:47Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-base-uncased-english-cefr-lexical-evaluation-dt-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-english-cefr-lexical-evaluation-dt-v1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5309 - Accuracy: 0.8716 - F1: 0.8713 - Precision: 0.8714 - Recall: 0.8716 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.6225 | 1.0 | 3403 | 0.6408 | 0.7578 | 0.7538 | 0.7826 | 0.7578 | | 0.3645 | 2.0 | 6806 | 0.4180 | 0.8597 | 0.8573 | 0.8554 | 0.8597 | | 0.2349 | 3.0 | 10209 | 0.4452 | 0.8631 | 0.8621 | 0.8637 | 0.8631 | | 0.1269 | 4.0 | 13612 | 0.5257 | 0.8694 | 0.8690 | 0.8690 | 0.8694 | | 0.0605 | 5.0 | 17015 | 0.6865 | 0.8671 | 0.8668 | 0.8669 | 0.8671 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
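The card above reports evaluation metrics but no usage snippet; a minimal, hedged sketch of loading this checkpoint with the Transformers `pipeline` API (the model ID is taken from the row, and the example sentence is invented for illustration) could look like:

```python
from transformers import pipeline

# Load the fine-tuned CEFR lexical-evaluation classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="hafidikhsan/distilbert-base-uncased-english-cefr-lexical-evaluation-dt-v1",
)

# Example sentence is purely illustrative.
print(classifier("The ubiquitous smartphone has fundamentally altered interpersonal communication."))
```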
Mykolyt/ppo-CartPole-v1
Mykolyt
2023-07-22T10:38:40Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T10:38:35Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -149.09 +/- 85.20 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Mykolyt/ppo-CartPole-v1' 'batch_size': 512 'minibatch_size': 128} ```
klolayekar/flan_t5_base_peft_summary
klolayekar
2023-07-22T10:37:01Z
0
0
peft
[ "peft", "tensorboard", "summarization", "en", "license:apache-2.0", "region:us" ]
summarization
2023-07-22T10:01:37Z
--- license: apache-2.0 language: - en metrics: - bleu library_name: peft pipeline_tag: summarization --- Chat Summarization with Google's Flan-T5-Base: The Chat Summarization model based on Google's Flan-T5-Base is designed to generate concise summaries of chat conversations. It is trained to comprehend and condense dialogue data, making it an ideal choice for summarizing conversations in various applications. This model can be leveraged in chat-based applications, customer support systems, or any scenario where summarizing chat conversations is essential.
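The card describes the adapter but gives no code; below is a sketch of loading it with PEFT, assuming `google/flan-t5-base` as the base checkpoint (the card names Flan-T5-Base but not an exact Hub ID) and a plain "summarize:" prompt prefix, which is also an assumption:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Assumed base checkpoint for the adapter.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Attach the PEFT adapter from this repository.
model = PeftModel.from_pretrained(base, "klolayekar/flan_t5_base_peft_summary")

chat = "Alice: Can we move the meeting to 3pm? Bob: Sure, 3pm works for me."
inputs = tokenizer("summarize: " + chat, return_tensors="pt")  # prompt format is an assumption
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```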
furqank8/bert-fine-tuned-cola
furqank8
2023-07-22T10:28:30Z
62
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-22T07:34:49Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_keras_callback model-index: - name: bert-fine-tuned-cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bert-fine-tuned-cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2953 - Validation Loss: 0.4732 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.4873 | 0.4312 | 0 | | 0.2953 | 0.4732 | 1 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
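A hedged TensorFlow usage sketch for this Keras-trained checkpoint (the assumption is that it classifies grammatical acceptability in CoLA style, as the repo name suggests):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("furqank8/bert-fine-tuned-cola")
model = TFAutoModelForSequenceClassification.from_pretrained("furqank8/bert-fine-tuned-cola")

# Deliberately ungrammatical example, purely for illustration.
inputs = tokenizer("This sentence are wrong.", return_tensors="tf")
logits = model(**inputs).logits
print(tf.argmax(logits, axis=-1).numpy())
```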
snob/TagMyBookmark-KoAlpaca-QLoRA-v1.0
snob
2023-07-22T10:27:06Z
3
0
peft
[ "peft", "region:us" ]
null
2023-07-22T10:26:59Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
Claaas/a2c-PandaReachDense-v2
Claaas
2023-07-22T10:17:06Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T10:14:03Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -3.46 +/- 0.93 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
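One possible way to fill in the card's "TODO: Add your code" stub, sketched rather than taken from the author; the checkpoint filename is an assumption based on the usual SB3 push-to-hub naming:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumption: the zip inside the repo follows the "<algo>-<env>.zip" convention.
checkpoint = load_from_hub(
    repo_id="Claaas/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```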
EllaHong/km5.8_qlora_4b_exp1
EllaHong
2023-07-22T09:51:02Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-22T09:50:57Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
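For reference, the 4-bit quantization settings listed above correspond to a Transformers `BitsAndBytesConfig` like the sketch below; the base model ID is a placeholder, since the card does not name it:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirror the bitsandbytes settings reported in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # hypothetical placeholder; the card does not specify the base model
    quantization_config=bnb_config,
)
```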
street4girls/servessss
street4girls
2023-07-22T09:47:37Z
0
0
null
[ "region:us" ]
null
2023-07-22T09:46:09Z
teilomillet/dqn-SpaceInvadersNoFrameskip-v4
teilomillet
2023-07-22T09:45:43Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T09:45:14Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 317.50 +/- 164.47 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga teilomillet -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga teilomillet -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga teilomillet ``` ## Hyperparameters ```python OrderedDict([('batch_size', 128), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0003), ('learning_starts', 100000), ('n_timesteps', 500000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
David2020/falcon-7b-renminnews-adapters
David2020
2023-07-22T09:45:40Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-22T07:32:59Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0 - PEFT 0.5.0.dev0
Claaas/customppo-LunarLander-v2
Claaas
2023-07-22T09:32:30Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T09:32:22Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -167.18 +/- 114.08 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Claaas/customppo-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
vineetsharma/a2c-PandaReachDense-v2
vineetsharma
2023-07-22T09:24:48Z
1
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T08:20:25Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.69 +/- 0.65 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
quantumaikr/Llama-2-13b-ko-lora
quantumaikr
2023-07-22T09:19:49Z
4
1
peft
[ "peft", "region:us" ]
null
2023-07-22T08:21:10Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
l3cube-pune/hing-mbert-mixed
l3cube-pune
2023-07-22T09:08:59Z
121
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "hi", "en", "codemix", "multilingual", "dataset:L3Cube-HingCorpus", "arxiv:2204.08398", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-25T17:52:23Z
--- language: - hi - en - multilingual license: cc-by-4.0 tags: - hi - en - codemix datasets: - L3Cube-HingCorpus --- ## HingBERT-Mixed HingBERT-Mixed is a Hindi-English code-mixed BERT model trained on roman + devanagari text. It is a base BERT model fine-tuned on mixed script L3Cube-HingCorpus. <br> [dataset link] (https://github.com/l3cube-pune/code-mixed-nlp) More details on the dataset, models, and baseline results can be found in our [paper] (https://arxiv.org/abs/2204.08398) Other models from HingBERT family: <br> <a href="https://huggingface.co/l3cube-pune/hing-bert"> HingBERT </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-mbert"> HingMBERT </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed"> HingBERT-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-mbert-mixed-v2"> HingBERT-Mixed-v2 </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-roberta"> HingRoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-roberta-mixed"> HingRoBERTa-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-gpt"> HingGPT </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-gpt-devanagari"> HingGPT-Devanagari </a> <br> <a href="https://huggingface.co/l3cube-pune/hing-bert-lid"> HingBERT-LID </a> <br> ``` @inproceedings{nayak-joshi-2022-l3cube, title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models", author = "Nayak, Ravindra and Joshi, Raviraj", booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.wildre-1.2", pages = "7--12", } ```
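A minimal fill-mask sketch for this checkpoint (the code-mixed example sentence is invented for illustration):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/hing-mbert-mixed")

# [MASK] is the mask token for BERT-style models.
print(fill_mask("mujhe yeh movie bahut [MASK] lagi"))
```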
tinyorbit/llama2-qlora-finetunined-french
tinyorbit
2023-07-22T09:03:09Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-22T09:02:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
JadenJSJ/HarutyaRVC
JadenJSJ
2023-07-22T08:53:41Z
0
0
null
[ "harutya", "harucha", "春茶", "rvcV2", "audio-to-audio", "en", "ja", "license:openrail", "region:us" ]
audio-to-audio
2023-07-22T08:31:09Z
--- license: openrail language: - en - ja thumbnail: >- https://cdn.discordapp.com/attachments/660770162072485890/1132232063349686303/HarutyaRVC.png tags: - harutya - harucha - 春茶 - rvcV2 pipeline_tag: audio-to-audio --- # Harutya (春茶) RVC V2 Model Harutya/Harucha (春茶) channel link: https://www.youtube.com/@harutya Need the dataset for some reason? Feel free to DM @jsj5 (Old: JSJ_15#8999) at Discord.\ https://discord.com/users/572277409395638274 The dataset was about 40m long (raw 3hrs), it couldn't be short because Harutya\ rarely sings in low pitches and high pitches so the dataset needed to be long. Q: Why are there voice cracks in the higher ends?\ A: Harutya applies effects/dubs her voice on the parts which she sings in high & low pitches/tones\ so I need to cut them out to maintain quality and consistency. Q: Why so breathy and noisy?\ A: I think it is a special characteristic of her voice, and the reason why people usually listen to her covers.\ Maybe try to reduce the noise on the input audio to keep maybe make it better?? Checksums:\ 51bac1a2154863a182d501969990b6f3 harutyaV2_e100_s100.pth\ d275094bcee0e2dda668b363d35257c8 added_IVF3166_Flat_nprobe_1_harutyaV2_v2.index
chanderbalaji/llama2-7b-sharded-qlora-ft-guanaco
chanderbalaji
2023-07-22T08:44:36Z
3
0
peft
[ "peft", "text-generation", "dataset:timdettmers/openassistant-guanaco", "license:apache-2.0", "region:us" ]
text-generation
2023-07-22T07:08:25Z
--- library_name: peft license: apache-2.0 datasets: - timdettmers/openassistant-guanaco pipeline_tag: text-generation --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
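A hedged usage sketch for the adapter: the base Llama-2-7B checkpoint ID below is a placeholder (the card does not name it), and the Guanaco-style prompt format is an assumption based on the training dataset:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-llama2-7b-id")  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained("base-llama2-7b-id")    # hypothetical placeholder
model = PeftModel.from_pretrained(base, "chanderbalaji/llama2-7b-sharded-qlora-ft-guanaco")

# Prompt format assumed from the openassistant-guanaco dataset.
prompt = "### Human: What is parameter-efficient fine-tuning?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```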
MrDdz/mytuned_test_trainer-base-cased1
MrDdz
2023-07-22T08:44:11Z
93
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "base_model:xlnet/xlnet-base-cased", "base_model:finetune:xlnet/xlnet-base-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-22T08:37:01Z
--- license: mit base_model: xlnet-base-cased tags: - generated_from_trainer model-index: - name: mytuned_test_trainer-base-cased1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mytuned_test_trainer-base-cased1 This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.7836 - eval_rmse: 0.7145 - eval_runtime: 5.7466 - eval_samples_per_second: 348.03 - eval_steps_per_second: 43.504 - epoch: 3.27 - step: 1633 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
l3cube-pune/me-sent-roberta
l3cube-pune
2023-07-22T08:40:08Z
113
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "mr", "en", "codemix", "multilingual", "dataset:L3Cube-MeCorpus", "dataset:L3Cube-MeSent", "arxiv:2306.14030", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-04T07:13:15Z
--- language: - mr - en - multilingual license: cc-by-4.0 tags: - mr - en - codemix datasets: - L3Cube-MeCorpus - L3Cube-MeSent --- ## MeSent-RoBERTa MeSent-RoBERTa is a MeRoBERTa model fine-tuned on L3Cube-MeSent, a codemixed Marathi-English sentiment analysis dataset. <br> [dataset link] (https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper] (https://arxiv.org/abs/2306.14030) Other models from the MeBERT family: <br> <a href="https://huggingface.co/l3cube-pune/me-bert"> MeBERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta"> MeRoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed"> MeBERT-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed-v2"> MeBERT-Mixed-v2 </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta-mixed"> MeRoBERTa-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-roberta"> MeLID-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-roberta"> MeHate-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-sent-roberta"> MeSent-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-bert"> MeHate-BERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-bert"> MeLID-BERT </a> <br> Citing: ``` @article{chavan2023my, title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks}, author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj}, journal={arXiv preprint arXiv:2306.14030}, year={2023} } ```
ZHR123/Chatglm2_WK
ZHR123
2023-07-22T08:34:09Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2023-07-09T13:02:38Z
--- license: apache-2.0 --- ## Introduction This model uses [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) as its backbone and was fine-tuned with the parameter-efficient LoRA method on the [WebCPM_WK](https://huggingface.co/datasets/ZHR123/WebCPM_WK) dataset. Compared with the original ChatGLM2-6B model, it retains the strong multi-turn dialogue ability and adds two new capabilities: 1. Given a question and a document, extracting the knowledge in the document that is relevant to the question. 2. Given reference material and a question, answering the question based on that material. Only the LoRA parameters are released in this repository; to use the full model, obtain the parameters of the [Chatglm2](https://huggingface.co/THUDM/chatglm2-6b) base model and merge them with the parameters provided here.
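A sketch of the merge step the card describes, assuming the repository's LoRA weights are in standard PEFT format:

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# trust_remote_code is needed because ChatGLM2 ships custom modeling code.
base = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)

# Attach the LoRA weights and fold them into the base model.
model = PeftModel.from_pretrained(base, "ZHR123/Chatglm2_WK")
merged = model.merge_and_unload()
```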
69NOUR69/BEBO-VIEWS
69NOUR69
2023-07-22T08:32:31Z
0
0
null
[ "arxiv:1910.09700", "doi:10.57967/hf/0914", "region:us" ]
null
2023-07-22T08:21:26Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
l3cube-pune/me-bert-mixed-v2
l3cube-pune
2023-07-22T08:27:43Z
324
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "mr", "en", "codemix", "multilingual", "dataset:L3Cube-MeCorpus", "arxiv:2306.14030", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-28T16:36:07Z
--- language: - mr - en - multilingual license: cc-by-4.0 tags: - mr - en - codemix datasets: - L3Cube-MeCorpus --- ## MeBERT-Mixed MeBERT-Mixed-v2 is a Marathi-English code-mixed BERT model trained on Roman + Devanagari text. It is a MuRIL model fine-tuned on L3Cube-MeCorpus. <br> [dataset link] (https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper] (https://arxiv.org/abs/2306.14030) Other models from MeBERT family: <br> <a href="https://huggingface.co/l3cube-pune/me-bert"> MeBERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta"> MeRoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed"> MeBERT-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed-v2"> MeBERT-Mixed-v2 </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta-mixed"> MeRoBERTa-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-roberta"> MeLID-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-roberta"> MeHate-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-sent-roberta"> MeSent-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-bert"> MeHate-BERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-bert"> MeLID-BERT </a> <br> Citing: ``` @article{chavan2023my, title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks}, author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj}, journal={arXiv preprint arXiv:2306.14030}, year={2023} } ```
l3cube-pune/me-bert-mixed
l3cube-pune
2023-07-22T08:26:18Z
195
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "mr", "en", "codemix", "multilingual", "dataset:L3Cube-MeCorpus", "arxiv:2306.14030", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-04-14T10:26:24Z
--- language: - mr - en - multilingual license: cc-by-4.0 tags: - mr - en - codemix datasets: - L3Cube-MeCorpus --- ## MeBERT-Mixed MeBERT-Mixed is a Marathi-English code-mixed BERT model trained on Roman + Devanagari text. It is a mBERT model fine-tuned on full L3Cube-MeCorpus. <br> [dataset link] (https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper] (https://arxiv.org/abs/2306.14030) Other models from MeBERT family: <br> <a href="https://huggingface.co/l3cube-pune/me-bert"> MeBERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta"> MeRoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed"> MeBERT-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed-v2"> MeBERT-Mixed-v2 </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta-mixed"> MeRoBERTa-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-roberta"> MeLID-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-roberta"> MeHate-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-sent-roberta"> MeSent-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-bert"> MeHate-BERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-bert"> MeLID-BERT </a> <br> Citing: ``` @article{chavan2023my, title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks}, author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj}, journal={arXiv preprint arXiv:2306.14030}, year={2023} } ```
l3cube-pune/me-bert
l3cube-pune
2023-07-22T08:24:54Z
115
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "mr", "en", "codemix", "multilingual", "dataset:L3Cube-MeCorpus", "arxiv:2306.14030", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-04-14T10:23:27Z
--- language: - mr - en - multilingual license: cc-by-4.0 tags: - mr - en - codemix datasets: - L3Cube-MeCorpus --- ## MeBERT MeBERT is a Marathi-English code-mixed BERT model trained on Roman text. It is a base BERT model fine-tuned on L3Cube-MeCorpus. <br> [dataset link] (https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper] (https://arxiv.org/abs/2306.14030) Other models from MeBERT family: <br> <a href="https://huggingface.co/l3cube-pune/me-bert"> MeBERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta"> MeRoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed"> MeBERT-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-bert-mixed-v2"> MeBERT-Mixed-v2 </a> <br> <a href="https://huggingface.co/l3cube-pune/me-roberta-mixed"> MeRoBERTa-Mixed </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-roberta"> MeLID-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-roberta"> MeHate-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-sent-roberta"> MeSent-RoBERTa </a> <br> <a href="https://huggingface.co/l3cube-pune/me-hate-bert"> MeHate-BERT </a> <br> <a href="https://huggingface.co/l3cube-pune/me-lid-bert"> MeLID-BERT </a> <br> Citing: ``` @article{chavan2023my, title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks}, author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj}, journal={arXiv preprint arXiv:2306.14030}, year={2023} } ```
KimTarou/botu
KimTarou
2023-07-22T08:22:14Z
0
0
null
[ "region:us" ]
null
2023-07-22T08:01:24Z
040 クレイジーぶっかけ 体にぶっかけ多いバージョン(BR) 050 黒人女性化スライダー 065 おっぱいスライダー 067 ちんちんサイズスライダー
jjyaoao/speecht5_finetuned_voxpopuli_nl
jjyaoao
2023-07-22T08:16:56Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:common_voice_13_0", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-22T08:00:52Z
--- license: mit tags: - generated_from_trainer datasets: - common_voice_13_0 model-index: - name: speecht5_finetuned_voxpopuli_nl results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_nl This model is a fine-tuned version of [arham061/speecht5_finetuned_voxpopuli_nl](https://huggingface.co/arham061/speecht5_finetuned_voxpopuli_nl) on the common_voice_13_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 3000 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5058 | 7.74 | 1000 | 0.5431 | | 0.4938 | 15.49 | 2000 | 0.5487 | | 0.4909 | 23.23 | 3000 | 0.5508 | ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
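A hedged inference sketch for this SpeechT5 TTS checkpoint; the random speaker embedding is a placeholder only (real use would pass an x-vector from a speaker-embedding dataset), and the Dutch test sentence is illustrative:

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "jjyaoao/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder x-vector, not a real speaker
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```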
teilomillet/Taxi-v3
teilomillet
2023-07-22T08:05:02Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T07:45:15Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="teilomillet/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Aspik101/Nous-Hermes-13b-pl-lora_GGML
Aspik101
2023-07-22T08:04:09Z
0
1
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-07-22T07:46:37Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
pooh/llama2-qlora-finetunined-french
pooh
2023-07-22T07:58:49Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-22T07:58:24Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
vineetsharma/a2c-AntBulletEnv-v0
vineetsharma
2023-07-22T07:47:29Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T07:46:54Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1480.41 +/- 128.67 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AIYIYA/my_1
AIYIYA
2023-07-22T07:47:28Z
62
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-chinese", "base_model:finetune:google-bert/bert-base-chinese", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-22T07:24:25Z
--- base_model: bert-base-chinese tags: - generated_from_keras_callback model-index: - name: AIYIYA/my_1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # AIYIYA/my_1 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1600 - Validation Loss: 1.4880 - Train Accuracy: 0.7195 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 3.3536 | 3.0356 | 0.2195 | 0 | | 2.8571 | 2.6364 | 0.3902 | 1 | | 2.4461 | 2.2839 | 0.4634 | 2 | | 2.0491 | 2.0340 | 0.5122 | 3 | | 1.7890 | 1.7980 | 0.6463 | 4 | | 1.5356 | 1.6520 | 0.6951 | 5 | | 1.3215 | 1.5640 | 0.7195 | 6 | | 1.1600 | 1.4880 | 0.7195 | 7 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
josephrich/my_awesome_model_721_2
josephrich
2023-07-22T07:27:32Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-22T04:00:33Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: my_awesome_model_721_2 results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.93228 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model_721_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.5942 - Accuracy: 0.9323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4604 | 1.0 | 12500 | 0.6389 | 0.8761 | | 0.2442 | 2.0 | 25000 | 0.4233 | 0.9264 | | 0.1495 | 3.0 | 37500 | 0.4755 | 0.9303 | | 0.0516 | 4.0 | 50000 | 0.5942 | 0.9323 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.0 - Datasets 2.13.1 - Tokenizers 0.13.3
Ryukijano/Mujoco_rl_halfcheetah_Decision_Trasformer
Ryukijano
2023-07-22T07:27:15Z
62
0
transformers
[ "transformers", "pytorch", "decision_transformer", "Generated_From_Trainer", "reinforcement-learning", "Mujoco", "dataset:decision_transformer_gym_replay", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-07-19T15:13:27Z
--- base_model: '' tags: - Generated_From_Trainer - reinforcement-learning - Mujoco datasets: - decision_transformer_gym_replay model-index: - name: Mujoco_rl_halfcheetah_Decision_Trasformer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mujoco_rl_halfcheetah_Decision_Trasformer This model is a fine-tuned version of [](https://huggingface.co/) on the decision_transformer_gym_replay dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 250 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
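A minimal loading sketch for this Decision Transformer checkpoint; building the state/action/return-to-go tensors needed for actual HalfCheetah rollouts is omitted here:

```python
from transformers import DecisionTransformerModel

# Load the fine-tuned Decision Transformer weights from the Hub.
model = DecisionTransformerModel.from_pretrained(
    "Ryukijano/Mujoco_rl_halfcheetah_Decision_Trasformer"
)
model.eval()
```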
qwerty8409/Medical_dataset
qwerty8409
2023-07-22T07:04:27Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-22T07:00:20Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
digiplay/CyberRealistic_Classic_v1.5
digiplay
2023-07-22T06:54:39Z
357
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-21T05:10:08Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- https://civitai.com/models/71185/cyberrealistic-classic Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/f9806dce-3845-4166-b9d1-0202f3033bc9/width=1424/V3.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7e7ed284-d116-45bc-9400-54d8e9a7eb89/width=1424/V1.59.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/43a1dd71-0b90-428b-ac6a-578e646156fc/width=1424/V1.52.jpeg)
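A short Diffusers inference sketch for this checkpoint (fp16 weights and a CUDA device are assumptions about the runtime; the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/CyberRealistic_Classic_v1.5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a photo of a lighthouse at sunset, 35mm, photorealistic").images[0]
image.save("lighthouse.png")
```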
nnpy/blip-image-captioning
nnpy
2023-07-22T06:37:04Z
125
6
transformers
[ "transformers", "pytorch", "safetensors", "blip", "image-text-to-text", "image-to-text", "dataset:MMInstruction/M3IT", "endpoints_compatible", "region:us" ]
image-to-text
2023-06-18T13:45:39Z
--- pipeline_tag: image-to-text datasets: - MMInstruction/M3IT --- ## Usage: ``` from transformers import BlipProcessor, BlipForConditionalGeneration import torch from PIL import Image processor = BlipProcessor.from_pretrained("prasanna2003/blip-image-captioning") if processor.tokenizer.eos_token is None: processor.tokenizer.eos_token = '<|eos|>' model = BlipForConditionalGeneration.from_pretrained("prasanna2003/blip-image-captioning") image = Image.open('file_name.jpg').convert('RGB') prompt = """Instruction: Generate a single line caption of the Image. output: """ inputs = processor(image, prompt, return_tensors="pt") output = model.generate(**inputs, max_length=100) print(processor.tokenizer.decode(output[0])) ```
AndrewL088/SpaceInvadersNoFrameskip-v4_20230722
AndrewL088
2023-07-22T06:31:47Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T06:31:18Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 29.00 +/- 64.30 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AndrewL088 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AndrewL088 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AndrewL088 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.025), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 10000000.0), ('learning_starts', 100000), ('n_timesteps', 110000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
DipanAI/TesAKantaiBERT
DipanAI
2023-07-22T06:30:53Z
113
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-07-15T15:28:23Z
--- tags: - generated_from_trainer model-index: - name: TesAKantaiBERT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TesAKantaiBERT This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
NasimB/guten-rarity-neg-log-rarity-end-19p1k
NasimB
2023-07-22T06:23:58Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-22T04:00:43Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten-rarity-neg-log-rarity-end-19p1k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten-rarity-neg-log-rarity-end-19p1k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1078 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3472 | 0.29 | 500 | 5.3359 | | 5.0242 | 0.59 | 1000 | 4.9159 | | 4.7018 | 0.88 | 1500 | 4.6868 | | 4.4382 | 1.17 | 2000 | 4.5458 | | 4.2888 | 1.47 | 2500 | 4.4338 | | 4.1941 | 1.76 | 3000 | 4.3265 | | 4.0652 | 2.05 | 3500 | 4.2631 | | 3.8933 | 2.34 | 4000 | 4.2118 | | 3.8664 | 2.64 | 4500 | 4.1589 | | 3.8275 | 2.93 | 5000 | 4.1077 | | 3.6287 | 3.22 | 5500 | 4.1006 | | 3.5847 | 3.52 | 6000 | 4.0707 | | 3.5697 | 3.81 | 6500 | 4.0389 | | 3.4614 | 4.1 | 7000 | 4.0369 | | 3.3179 | 4.4 | 7500 | 4.0323 | | 3.307 | 4.69 | 8000 | 4.0175 | | 3.3039 | 4.98 | 8500 | 4.0058 | | 3.1413 | 5.28 | 9000 | 4.0177 | | 3.132 | 5.57 | 9500 | 4.0172 | | 3.1349 | 5.86 | 10000 | 4.0158 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
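A minimal generation sketch for this fine-tuned GPT-2 checkpoint (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="NasimB/guten-rarity-neg-log-rarity-end-19p1k")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```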
tridungduong16/xgen-7b-8k-base-orca
tridungduong16
2023-07-22T06:02:44Z
10
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-20T10:52:06Z
--- license: apache-2.0 --- # XGen-7B-8K-Base Official research release for the family of **XGen** models (`7B`) by Salesforce AI Research: *Title*: [Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length](https://blog.salesforceairesearch.com/xgen/) *Authors*: [Erik Nijkamp](https://eriknijkamp.com)\*, Tian Xie\*, [Hiroaki Hayashi](https://hiroakih.me/)\*, [Bo Pang](https://scholar.google.com/citations?user=s9fNEVEAAAAJ&hl=en)\*, Congying Xia\*, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryscinski, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, [Chien-Sheng Wu](https://jasonwu0731.github.io/), Silvio Savarese, [Yingbo Zhou](https://scholar.google.com/citations?user=H_6RQ7oAAAAJ&hl=en), [Shafiq Rayhan Joty](https://raihanjoty.github.io/), [Caiming Xiong](http://cmxiong.com/). (* indicates equal contribution) Correspondence to: [Shafiq Rayhan Joty](mailto:[email protected]), [Caiming Xiong](mailto:[email protected]) ## Models ### Base models * [XGen-7B-4K-Base](https://huggingface.co/Salesforce/xgen-7b-4k-base): XGen-7B model pre-trained under 4K sequence length. * License: Apache-2.0 * [XGen-7B-8K-Base](https://huggingface.co/Salesforce/xgen-7b-8k-base): XGen-7B model pre-trained under 8K sequence length. * License: Apache-2.0 ### Instruction-finetuned models Supervised finetuned model on public domain instructional data. Released for ***research purpose*** only. * [XGen-7B-8K-Inst](https://huggingface.co/Salesforce/xgen-7b-8k-inst) ## How to run The training data for the models are tokenized with OpenAI Tiktoken library. To use this model, install the package via `pip`: ```sh pip install tiktoken ``` The models can be used as auto-regressive samplers as follows: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tridungduong16/xgen-7b-8k-base-orca", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("tridungduong16/xgen-7b-8k-base-orca", torch_dtype=torch.bfloat16) inputs = tokenizer("The world is", return_tensors="pt") sample = model.generate(**inputs, max_length=128) print(tokenizer.decode(sample[0])) ``` ## Citation ```bibtex @misc{XGen, title={Long Sequence Modeling with XGen: A 7B LLM Trained on 8K Input Sequence Length}, author={Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryscinski, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Rayhan Joty, Caiming Xiong}, howpublished={Salesforce AI Research Blog}, year={2023}, url={https://blog.salesforceairesearch.com/xgen} } ```
4bit/Nous-Hermes-Llama2-13b-GPTQ
4bit
2023-07-22T05:32:28Z
11
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "self-instruct", "distillation", "synthetic instruction", "en", "license:llama2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-22T05:26:48Z
--- license: llama2 language: - en tags: - llama-2 - self-instruct - distillation - synthetic instruction --- # Model Card: Nous-Hermes-Llama2-13b Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI. ## Model Description Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. This Hermes model uses the exact same dataset as Hermes on Llama-1. This is to ensure consistency between the old Hermes and new, for anyone who wanted to keep Hermes as similar to the old one, just more capable. This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x a100 80GB DGX machine. ## Example Outputs: ![Example4](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example5.png "Example 4") ![Example1](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/Example1.png "Example 1") ![Example2](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example2.png "Example 2") ![Example3](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example3.png "Example 3") ## Model Training The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below ## Collaborators The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI. Special mention goes to @winglian for assisting in some of the training issues. Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. Among the contributors of datasets: - GPTeacher was made available by Teknium - Wizard LM by nlpxucan - Nous Research Instruct Dataset was provided by Karan4D and HueminArt. - GPT4-LLM and Unnatural Instructions were provided by Microsoft - Airoboros dataset by jondurbin - Camel-AI's domain expert datasets are from Camel-AI - CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab. 
## Prompt Format The model follows the Alpaca prompt format: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` or ``` ### Instruction: <prompt> ### Input: <additional context> ### Response: <leave a newline blank for model to respond> ``` ## Benchmark Results AGI-Eval ``` | Task |Version| Metric |Value | |Stderr| |agieval_aqua_rat | 0|acc |0.2362|± |0.0267| | | |acc_norm|0.2480|± |0.0272| |agieval_logiqa_en | 0|acc |0.3425|± |0.0186| | | |acc_norm|0.3472|± |0.0187| |agieval_lsat_ar | 0|acc |0.2522|± |0.0287| | | |acc_norm|0.2087|± |0.0269| |agieval_lsat_lr | 0|acc |0.3510|± |0.0212| | | |acc_norm|0.3627|± |0.0213| |agieval_lsat_rc | 0|acc |0.4647|± |0.0305| | | |acc_norm|0.4424|± |0.0303| |agieval_sat_en | 0|acc |0.6602|± |0.0331| | | |acc_norm|0.6165|± |0.0340| |agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346| | | |acc_norm|0.4272|± |0.0345| |agieval_sat_math | 0|acc |0.2909|± |0.0307| | | |acc_norm|0.2727|± |0.0301| ``` GPT-4All Benchmark Set ``` | Task |Version| Metric |Value | |Stderr| |arc_challenge| 0|acc |0.5102|± |0.0146| | | |acc_norm|0.5213|± |0.0146| |arc_easy | 0|acc |0.7959|± |0.0083| | | |acc_norm|0.7567|± |0.0088| |boolq | 1|acc |0.8394|± |0.0064| |hellaswag | 0|acc |0.6164|± |0.0049| | | |acc_norm|0.8009|± |0.0040| |openbookqa | 0|acc |0.3580|± |0.0215| | | |acc_norm|0.4620|± |0.0223| |piqa | 0|acc |0.7992|± |0.0093| | | |acc_norm|0.8069|± |0.0092| |winogrande | 0|acc |0.7127|± |0.0127| ``` BigBench Reasoning Test ``` | Task |Version| Metric |Value | |Stderr| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362| |bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192| |bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111| |bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123| |bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287| ``` These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores: - GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1 - 0.3657 on BigBench, up from 0.328 on hermes-llama1 - 0.372 on AGIEval, up from 0.354 on Hermes-llama1 These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list, supplanting Hermes 1 for the new top position. 
## Resources for Applied Use Cases: For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot ## Future Plans We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward. ## Model Usage The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
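For completeness, here is a minimal transformers sketch of querying the model with the Alpaca-style prompt format documented above. It targets the original fp16 `NousResearch/Nous-Hermes-Llama2-13b` checkpoint rather than this GPTQ conversion (which needs a GPTQ-aware loader), and the sampling settings are illustrative assumptions, not author recommendations:

```python
# Minimal sketch: query the model with the Alpaca-style prompt format described above.
# Sampling parameters are illustrative assumptions; the fp16 repo is used instead of the GPTQ files.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "NousResearch/Nous-Hermes-Llama2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction:\nExplain what a large language model is in two sentences.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```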
Vsukiyaki/Shungiku-Mix
Vsukiyaki
2023-07-22T05:11:31Z
0
23
null
[ "stable-diffusion", "text-to-image", "ja", "en", "license:other", "region:us" ]
text-to-image
2023-06-03T16:25:04Z
--- license: other language: - ja - en tags: - stable-diffusion - text-to-image --- # Shungiku-Mix <img src="https://huggingface.co/Vsukiyaki/Shungiku-Mix/resolve/main/imgs/header.jpg" style="width: 640px;"> ## 概要 / Overview - **Shungiku-Mix**は、アニメ風の画風に特化したマージモデルです。 / **Shungiku-Mix** is a merge model that specializes in an anime-like painting style. - 幻想的な空や光の表現が得意です。 / This model excels in the expression of fantastic skies and light. - VAEはお好きなものをお使いください。VAEが無くても鮮やかな色合いで出力されますが、clearvaeを使用することを推奨しています。 / You can use whatever VAE you like. The output will be vividly tinted without VAE, but we recommend using clearvae. - clearvaeを含んだモデルも提供しています。 / I also offer models that include clearvae. => **Shungiku-Mix_v1-better-vae-fp16.safetensors** <hr> ## 更新 / UPDATE NOTE - 2023/07/22:ライセンスを変更しました。 / License changed. <hr> ## 推奨設定 / Recommended Settings <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> Steps: 20 ~ 60 Sampler: DPM++ SDE Karras CFG scale: 7.5 Denoising strength: 0.55 Hires steps: 20 Hires upscaler: Latent Clip skip: 2 Negative embeddings: EasyNegative, verybadimagenegative </pre> **Negative prompt**: <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad,(inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, </pre> <hr> ## 例 / Examples <img src="https://huggingface.co/Vsukiyaki/Shungiku-Mix/resolve/main/imgs/sample1.png" style="width: 512px;"> <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> ((solo:1.2)),cute girl,(harbor),(blue sky:1.2),looking at viewer,dramatic,fantastic atmosphere,magnificent view,cumulonimbus,(cowboy shot:1.2),scenery,Mediterranean Buildings,silver hair Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad,(inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, Steps: 60 Sampler: DPM++ SDE Karras CFG scale: 7.5 Seed: 1896063174 Size: 768x768 Denoising strength: 0.58 Clip skip: 2 Hires upscale: 2 Hires steps: 20 Hires upscaler: Latent </pre> <br> <img src="https://huggingface.co/Vsukiyaki/Shungiku-Mix/resolve/main/imgs/sample2.png" style="width: 640px;"> <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> ((solo:1.2)),cute little (1girl:1.3) walking,landscape,beautiful sky,village,head tilt,bloom effect,fantastic atmosphere,magnificent view,cowboy shot,pale-blonde hair,blue eyes,long twintails,blush,light smile,white dress,wind,(petals) Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad,(inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, Steps: 60 Sampler: DPM++ SDE Karras CFG scale: 7.5 Seed: 400031884 Size: 848x600 Denoising strength: 0.55 Clip skip: 2 Hires upscale: 2.5 Hires steps: 20 Hires upscaler: Latent </pre> <hr> ## ライセンス / License <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base text-bold" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML 
OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> <hr> Twiter: [@Vsukiyaki_AIArt](https://twitter.com/Vsukiyaki_AIArt) <a href="https://twitter.com/Vsukiyaki_AIArt" class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md" style="background-color: #1da1f2"> <svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24"> <path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" /> </svg> </a>
Vasanth/llama2_finetuned_chatbot
Vasanth
2023-07-22T04:53:59Z
0
0
null
[ "tensorboard", "generated_from_trainer", "region:us" ]
null
2023-07-22T04:35:28Z
--- tags: - generated_from_trainer model-index: - name: llama2_finetuned_chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2_finetuned_chatbot This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
YarramsettiNaresh/CartPole-v1
YarramsettiNaresh
2023-07-22T04:53:56Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T04:53:47Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
neurae/bert-dnd-intents
neurae
2023-07-22T04:36:32Z
113
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "en", "dataset:neurae/dnd_style_intents", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-16T06:37:46Z
--- datasets: - neurae/dnd_style_intents language: - en pipeline_tag: text-classification license: apache-2.0 metrics: - accuracy - f1 --- This is BERT base fine-tuned on the dnd_style_intents dataset with a tuned learning rate, learning-rate scheduler, and weight decay. | parameter | value | |---------------|----------| | learning rate | 1.3e-4 | | lr scheduler | constant | | weight decay | 7e-2 | The model achieves the following metrics on the test split of the dataset: | metric | value | |----------|-------| | accuracy | 0.978 | | Macro F1 | 0.977 | | Micro F1 | 0.978 |
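A minimal sketch of running the classifier with the transformers pipeline API; the example utterance is invented, and the returned label names come from the model's label mapping:

```python
# Minimal sketch: classify a player utterance with the fine-tuned intent model.
# The example sentence is made up; labels depend on the model's config.
from transformers import pipeline

classifier = pipeline("text-classification", model="neurae/bert-dnd-intents")
print(classifier("I draw my sword and attack the goblin."))
# -> [{'label': ..., 'score': ...}]
```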
Yaxin1992/llama2-7b-5400
Yaxin1992
2023-07-22T04:35:58Z
0
0
null
[ "tensorboard", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:finetune:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2023-07-21T18:13:49Z
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer model-index: - name: llama2-7b-5400 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama2-7b-5400 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 6 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 5400 ### Training results ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
EllaHong/km5.8_qlora_4b_v5
EllaHong
2023-07-22T03:58:06Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-22T03:57:57Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
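For reference, a minimal sketch of how the quantization settings listed above would typically be expressed as a transformers `BitsAndBytesConfig` when reloading the base model; the base model id is not stated in this card, so the one below is a placeholder:

```python
# Minimal sketch: the bitsandbytes settings above expressed as a BitsAndBytesConfig.
# "base-model-id" is a placeholder -- this card does not state which base model was used.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```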
echerny/open_llama_7b
echerny
2023-07-22T03:48:08Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-22T03:48:07Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0
NasimB/guten-norm-rarity-neg-log-rarity-end-19p5k
NasimB
2023-07-22T03:35:31Z
11
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-22T01:12:12Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten-norm-rarity-neg-log-rarity-end-19p5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten-norm-rarity-neg-log-rarity-end-19p5k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.337 | 0.29 | 500 | 5.3391 | | 5.0313 | 0.58 | 1000 | 4.9321 | | 4.7099 | 0.88 | 1500 | 4.6910 | | 4.4438 | 1.17 | 2000 | 4.5565 | | 4.3034 | 1.46 | 2500 | 4.4402 | | 4.1985 | 1.75 | 3000 | 4.3359 | | 4.0854 | 2.05 | 3500 | 4.2658 | | 3.899 | 2.34 | 4000 | 4.2197 | | 3.8754 | 2.63 | 4500 | 4.1634 | | 3.8316 | 2.92 | 5000 | 4.1128 | | 3.6489 | 3.21 | 5500 | 4.1043 | | 3.5936 | 3.51 | 6000 | 4.0788 | | 3.5707 | 3.8 | 6500 | 4.0499 | | 3.4804 | 4.09 | 7000 | 4.0419 | | 3.3223 | 4.38 | 7500 | 4.0333 | | 3.3179 | 4.68 | 8000 | 4.0236 | | 3.308 | 4.97 | 8500 | 4.0107 | | 3.157 | 5.26 | 9000 | 4.0223 | | 3.1406 | 5.55 | 9500 | 4.0212 | | 3.1361 | 5.84 | 10000 | 4.0201 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
ittailup/lallama-13b-chat
ittailup
2023-07-22T03:20:47Z
1
0
peft
[ "peft", "pytorch", "llama", "region:us" ]
null
2023-07-21T19:10:35Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-10
UNIST-Eunchan
2023-07-22T03:05:31Z
93
0
transformers
[ "transformers", "pytorch", "pegasus_x", "text2text-generation", "generated_from_trainer", "dataset:govreport-summarization", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-20T02:20:44Z
--- tags: - generated_from_trainer datasets: - govreport-summarization model-index: - name: Pegasus-x-base-govreport-12288-1024-numepoch-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Pegasus-x-base-govreport-12288-1024-numepoch-10 This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on the govreport-summarization dataset. It achieves the following results on the evaluation set: - Loss: 1.6234 ## Model description More information needed ## Evaluation Score **'ROUGE'**: { 'rouge1': 0.5012, 'rouge2': 0.2205, 'rougeL': 0.2552, 'rougeLsum': 0.2554 } **'BERT_SCORE'** {'f1': 0.859, 'precision': 0.8619, 'recall': 0.8563 } ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1149 | 0.37 | 100 | 1.9237 | | 1.9545 | 0.73 | 200 | 1.8380 | | 1.8835 | 1.1 | 300 | 1.7574 | | 1.862 | 1.46 | 400 | 1.7305 | | 1.8536 | 1.83 | 500 | 1.7100 | | 1.8062 | 2.19 | 600 | 1.6944 | | 1.8161 | 2.56 | 700 | 1.6882 | | 1.7611 | 2.92 | 800 | 1.6803 | | 1.7878 | 3.29 | 900 | 1.6671 | | 1.7299 | 3.65 | 1000 | 1.6599 | | 1.7636 | 4.02 | 1100 | 1.6558 | | 1.7262 | 4.38 | 1200 | 1.6547 | | 1.715 | 4.75 | 1300 | 1.6437 | | 1.7178 | 5.12 | 1400 | 1.6445 | | 1.7163 | 5.48 | 1500 | 1.6386 | | 1.7367 | 5.85 | 1600 | 1.6364 | | 1.7114 | 6.21 | 1700 | 1.6365 | | 1.6452 | 6.58 | 1800 | 1.6309 | | 1.7251 | 6.94 | 1900 | 1.6301 | | 1.6726 | 7.31 | 2000 | 1.6305 | | 1.7104 | 7.67 | 2100 | 1.6285 | | 1.6739 | 8.04 | 2200 | 1.6252 | | 1.7082 | 8.4 | 2300 | 1.6246 | | 1.6888 | 8.77 | 2400 | 1.6244 | | 1.6609 | 9.13 | 2500 | 1.6256 | | 1.6707 | 9.5 | 2600 | 1.6241 | | 1.669 | 9.86 | 2700 | 1.6234 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
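A minimal usage sketch for long-document summarization with this checkpoint; the generation settings below are illustrative assumptions, not values taken from the training setup:

```python
# Minimal sketch: summarize a long report with the fine-tuned Pegasus-X checkpoint.
# max_length / beam settings are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

report = "..."  # long government report text goes here
inputs = tokenizer(report, return_tensors="pt", truncation=True, max_length=12288)
summary_ids = model.generate(**inputs, max_new_tokens=1024, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```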
Falcinspire/Reinforce-MLP-v1-Cartpole-v1
Falcinspire
2023-07-22T02:22:54Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-22T00:42:23Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-MLP-v1-Cartpole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 491.10 +/- 26.70 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
LarryAIDraw/YelanV4-09
LarryAIDraw
2023-07-22T02:16:49Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-22T02:15:12Z
--- license: creativeml-openrail-m --- https://civitai.com/models/61470/yelan-lora-genshin-impact
LarryAIDraw/niloutest
LarryAIDraw
2023-07-22T01:50:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-22T01:49:28Z
--- license: creativeml-openrail-m --- https://civitai.com/models/101969/nilou-genshin-impact
LarryAIDraw/Genshin_Impact-Nilou_V2_nilou__genshin_impact_-000012
LarryAIDraw
2023-07-22T01:49:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-22T01:46:49Z
--- license: creativeml-openrail-m --- https://civitai.com/models/5367/tsumasaky-nilou-genshin-impact-lora
minhanhtuan/llama2-qlora-finetunined-french
minhanhtuan
2023-07-22T01:25:58Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-22T01:25:51Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
Mel-Iza0/RedPajama-ZeroShot-20K-new_prompt_classe_bias
Mel-Iza0
2023-07-22T01:12:05Z
2
0
peft
[ "peft", "pytorch", "gpt_neox", "region:us" ]
null
2023-07-21T21:11:26Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
Bainbridge/vilt-b32-mlm-mami
Bainbridge
2023-07-22T01:03:05Z
38
0
transformers
[ "transformers", "pytorch", "tensorboard", "vilt", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-07-22T00:22:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: vilt-b32-mlm-mami results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vilt-b32-mlm-mami This model is a fine-tuned version of [dandelin/vilt-b32-mlm](https://huggingface.co/dandelin/vilt-b32-mlm) on the MAMI dataset. It achieves the following results on the evaluation set: - Loss: 0.5796 - F1: 0.7899 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.6898 | 0.48 | 100 | 0.6631 | 0.6076 | | 0.5824 | 0.96 | 200 | 0.5055 | 0.7545 | | 0.4306 | 1.44 | 300 | 0.4586 | 0.7861 | | 0.4207 | 1.91 | 400 | 0.4439 | 0.7927 | | 0.3055 | 2.39 | 500 | 0.4912 | 0.7949 | | 0.2582 | 2.87 | 600 | 0.4921 | 0.7873 | | 0.1875 | 3.35 | 700 | 0.5796 | 0.7899 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
NasimB/cbt-norm-rarity-neg-log-rarity
NasimB
2023-07-22T00:46:15Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-21T22:20:45Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: cbt-norm-rarity-neg-log-rarity results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cbt-norm-rarity-neg-log-rarity This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3494 | 0.29 | 500 | 5.3385 | | 5.0263 | 0.58 | 1000 | 4.9258 | | 4.7061 | 0.87 | 1500 | 4.6888 | | 4.4468 | 1.16 | 2000 | 4.5463 | | 4.2956 | 1.46 | 2500 | 4.4260 | | 4.1947 | 1.75 | 3000 | 4.3302 | | 4.0756 | 2.04 | 3500 | 4.2520 | | 3.8921 | 2.33 | 4000 | 4.2106 | | 3.8655 | 2.62 | 4500 | 4.1572 | | 3.8345 | 2.91 | 5000 | 4.1064 | | 3.6432 | 3.2 | 5500 | 4.1013 | | 3.581 | 3.49 | 6000 | 4.0704 | | 3.569 | 3.79 | 6500 | 4.0362 | | 3.4919 | 4.08 | 7000 | 4.0338 | | 3.3226 | 4.37 | 7500 | 4.0289 | | 3.3106 | 4.66 | 8000 | 4.0166 | | 3.297 | 4.95 | 8500 | 4.0046 | | 3.1568 | 5.24 | 9000 | 4.0152 | | 3.1358 | 5.53 | 9500 | 4.0145 | | 3.1313 | 5.82 | 10000 | 4.0135 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
NetUserGet/Pathfinder
NetUserGet
2023-07-22T00:40:08Z
0
0
null
[ "music", "en", "license:openrail", "region:us" ]
null
2023-07-13T04:28:43Z
--- license: openrail language: - en tags: - music ---
kasoarcat/swin-base-patch4-window7-224-finetuned-lora-food101
kasoarcat
2023-07-22T00:23:44Z
2
0
peft
[ "peft", "pytorch", "tensorboard", "swin", "region:us" ]
null
2023-07-22T00:02:18Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
TheBloke/Vicuna-13B-v1.3-German-GGML
TheBloke
2023-07-22T00:06:50Z
9
4
transformers
[ "transformers", "llama", "text-generation", "de", "en", "arxiv:2302.13971", "arxiv:2306.05685", "license:other", "region:us" ]
text-generation
2023-07-21T23:48:07Z
--- inference: false language: - de - en license: other model_type: llama pipeline_tag: text-generation --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Jan Philipp Harries' Vicuna 13B v1.3 German GGML These files are GGML format model files for [Jan Philipp Harries' Vicuna 13B v1.3 German](https://huggingface.co/jphme/vicuna-13b-v1.3-ger). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend. * [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend. * [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server. ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vicuna-13B-v1.3-German-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Vicuna-13B-v1.3-German-GGML) * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jphme/vicuna-13b-v1.3-ger) ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation. 
## Explanation of the new k-quant methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | vicuna-13b-v1.3-german.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | vicuna-13b-v1.3-german.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | vicuna-13b-v1.3-german.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | vicuna-13b-v1.3-german.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | vicuna-13b-v1.3-german.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. | | vicuna-13b-v1.3-german.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | vicuna-13b-v1.3-german.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | vicuna-13b-v1.3-german.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | vicuna-13b-v1.3-german.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | vicuna-13b-v1.3-german.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | vicuna-13b-v1.3-german.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | vicuna-13b-v1.3-german.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. 
Uses GGML_TYPE_Q5_K for all tensors | | vicuna-13b-v1.3-german.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization | | vicuna-13b-v1.3-german.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m vicuna-13b-v1.3-german.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:" ``` Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. 
Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Jan Philipp Harries' Vicuna 13B v1.3 German # Vicuna 13b v1.3 German vicuna-13b-v1.3-ger is a variant of [LMSYS](https://huggingface.co/lmsys)´s [Vicuna 13b v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) model, finetuned on an additional dataset in German language. The original model has been trained on explain tuned datasets, created using instructions and input from WizardLM, Alpaca & Dolly-V2 datasets and applying Orca Research Paper dataset construction approaches. This model is optimized for German text, providing proficiency in understanding, generating, and interacting with German language content. However the model is not yet fully optimized for German language, as it has been trained on a small, experimental dataset and has limited capabilities due to the small parameter count. Some of the fineunting data is also targeted towards factual retrieval (only answer questions from information in the context and refuse to hallucinate) and the model should perform better for these tasks than original Vicuna. I am working on improving the model´s capabilities and will update the model if there is sufficient interest. A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/vicuna-13b-v1.3-ger-GGML). ## Prompt Template ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hello!</s> USER: How are you? ASSISTANT: I am good.</s> ``` ## Results I did only evaluate the output on a small, handcrafted sample on test prompts in German, confirming that the model's ability to understand and generate German text is above the base model in many situations. ## Problems There might be inconsistencies in multi-turn chat applications, as there was a small problem with the <eos> tokens during preparation of the finetuning dataset. Please report any problems so I can fix this for the next version. --------------------------- # Original Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. 
## How to Get Started with the Model - Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. - APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
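For the GGML files listed above, `llama-cpp-python` (one of the compatible libraries mentioned earlier) offers a Python alternative to the `llama.cpp` command line. A minimal sketch, assuming a llama-cpp-python build that still reads the ggmlv3 format (pre-GGUF) and a locally downloaded q4_0 file:

```python
# Minimal sketch: run one of the GGML files above with llama-cpp-python.
# Assumes a llama-cpp-python build that still supports ggmlv3 files and that the q4_0
# file has been downloaded to the path shown; n_threads mirrors the "-t 10" CLI example.
from llama_cpp import Llama

llm = Llama(model_path="vicuna-13b-v1.3-german.ggmlv3.q4_0.bin", n_ctx=2048, n_threads=10)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Schreibe eine kurze Geschichte über Lamas. ASSISTANT:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```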
Villagerindo/tts-bluearchive
Villagerindo
2023-07-21T23:28:07Z
0
1
null
[ "license:apache-2.0", "region:us" ]
null
2023-07-21T14:59:23Z
--- title: Vits Models emoji: 🏃 colorFrom: pink colorTo: indigo sdk: gradio sdk_version: 3.17.0 app_file: app.py pinned: false license: apache-2.0 --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
EllaHong/km5.8_qlora_4b_v4
EllaHong
2023-07-21T23:27:03Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-21T23:26:58Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.5.0.dev0
DarkAirforce/Reinforce-PixelCopter
DarkAirforce
2023-07-21T23:19:27Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-21T23:19:24Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 32.60 +/- 20.66 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
asapp/sew-d-tiny-100k
asapp
2023-07-21T23:05:03Z
2,248
2
transformers
[ "transformers", "pytorch", "safetensors", "sew-d", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2109.06870", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # SEW-D-tiny [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew#model-checkpoints . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
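A minimal feature-extraction sketch for this checkpoint is shown below. It assumes the repository ships a preprocessor config that `AutoFeatureExtractor` can resolve (otherwise construct a `Wav2Vec2FeatureExtractor` by hand), and the random waveform is only a stand-in for real 16kHz speech; for ASR fine-tuning use `SEWDForCTC` as noted above.

```python
# Minimal sketch: extract hidden states from a 16kHz waveform with SEW-D.
# The random waveform stands in for real speech sampled at 16kHz.
import torch
from transformers import AutoFeatureExtractor, SEWDModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-tiny-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k")

waveform = torch.randn(16000)  # one second of (fake) 16kHz audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```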
Emperor-WS/q-Taxi-v3
Emperor-WS
2023-07-21T22:49:20Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-21T22:49:18Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Emperor-WS/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
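As a continuation of the usage snippet above, a greedy rollout might look like the sketch below. The `"qtable"` key follows the deep-rl-course convention and is an assumption about this pickle, and the loop uses the older gym reset/step API; adapt it if the environment returns gymnasium-style tuples.

```python
# Minimal sketch continuing the snippet above: greedy rollout with the saved Q-table.
# "model" is the dict loaded by load_from_hub; the "qtable" key is an assumption here.
# Uses the pre-gymnasium gym API (reset returns the state, step returns a 4-tuple).
import gym
import numpy as np

env = gym.make(model["env_id"])
state = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```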
ashercn97/OpenOrcaUpload
ashercn97
2023-07-21T22:28:43Z
7
0
peft
[ "peft", "text-generation", "dataset:ashercn97/OpenOrcaPleaseWork", "region:us" ]
text-generation
2023-07-21T14:43:58Z
--- library_name: peft pipeline_tag: text-generation datasets: - ashercn97/OpenOrcaPleaseWork --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
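A minimal sketch of loading this adapter on top of its base model with PEFT; the base model id is resolved from the adapter config, and the 8-bit loading flag mirrors the quantization config listed above:

```python
# Minimal sketch: load the adapter on top of its base model.
# load_in_8bit mirrors the quantization config listed above; device_map needs accelerate.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "ashercn97/OpenOrcaUpload"
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```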
Samir001/ResumeSummary-t5-Wang-Arora
Samir001
2023-07-21T22:13:26Z
110
1
transformers
[ "transformers", "pytorch", "longt5", "text2text-generation", "summarization", "en", "dataset:Samir001/Resume_Summary", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-07-19T01:04:02Z
--- license: other datasets: - Samir001/Resume_Summary language: - en metrics: - rouge pipeline_tag: summarization ---
qfrodicio/roberta-finetuned-gesture-prediction-5-classes
qfrodicio
2023-07-21T21:57:34Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "roberta", "token-classification", "generated_from_trainer", "dataset:qfrodicio/gesture-prediction-5-classes", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-03-07T23:01:36Z
--- license: mit tags: - generated_from_trainer datasets: qfrodicio/gesture-prediction-5-classes metrics: - accuracy - precision - recall - f1 model-index: - name: roberta-finetuned-gesture-prediction-5-classes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-gesture-prediction-5-classes This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4764 - Accuracy: 0.8729 - Precision: 0.8731 - Recall: 0.8729 - F1: 0.8725 It achieves the following results on the evaluation set: - Loss: 0.4842 - Accuracy: 0.8628 - Precision: 0.8629 - Recall: 0.8628 - F1: 0.8619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data The model has been trained with the qfrodicio/gesture-prediction-5-classes dataset ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.4556 | 1.0 | 71 | 0.9405 | 0.6561 | 0.6129 | 0.6561 | 0.5981 | | 0.7207 | 2.0 | 142 | 0.5276 | 0.8442 | 0.8463 | 0.8442 | 0.8406 | | 0.4005 | 3.0 | 213 | 0.4997 | 0.8662 | 0.8719 | 0.8662 | 0.8640 | | 0.2417 | 4.0 | 284 | 0.4764 | 0.8729 | 0.8731 | 0.8729 | 0.8725 | | 0.1757 | 5.0 | 355 | 0.5135 | 0.8812 | 0.8827 | 0.8812 | 0.8810 | | 0.1398 | 6.0 | 426 | 0.5266 | 0.8710 | 0.8710 | 0.8710 | 0.8704 | | 0.0937 | 7.0 | 497 | 0.5438 | 0.8799 | 0.8801 | 0.8799 | 0.8792 | | 0.07 | 8.0 | 568 | 0.5759 | 0.8769 | 0.8770 | 0.8769 | 0.8766 | | 0.0552 | 9.0 | 639 | 0.6035 | 0.8745 | 0.8741 | 0.8745 | 0.8738 | | 0.0478 | 10.0 | 710 | 0.5974 | 0.8778 | 0.8775 | 0.8778 | 0.8771 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
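A minimal sketch of tagging a sentence with this model via the transformers pipeline; the example sentence is invented and `aggregation_strategy` is an illustrative choice:

```python
# Minimal sketch: predict gesture classes for the tokens of a sentence.
# The example sentence is made up; aggregation_strategy is an illustrative setting.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="qfrodicio/roberta-finetuned-gesture-prediction-5-classes",
    aggregation_strategy="simple",
)
print(tagger("I am very happy to welcome you all to this talk."))
```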
Oburaco/llama2-qlora-finetunined-ptbr
Oburaco
2023-07-21T21:43:25Z
1
1
peft
[ "peft", "region:us" ]
null
2023-07-21T21:43:16Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
akshitapps/rampant-vampire-2.0
akshitapps
2023-07-21T21:30:13Z
31
0
diffusers
[ "diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-21T18:32:56Z
--- license: creativeml-openrail-m --- <b>Please read this!</b><br> For version 2.0 it is recommended to use with VAE (to improve generation quality and get rid of blue artifacts): https://huggingface.co/stabilityai/sd-vae-ft-mse-original<br> This model is available on <a href="https://www.mage.space/">Mage.Space</a>, <a href="https://sinkin.ai/">Sinkin.ai</a>, <a href="https://getimg.ai/">GetImg.ai</a> and (<a href="https://randomseed.co/">RandomSeed.co</a> - NSFW content) <hr/> <b>I use this template to get good generation results: Prompt:</b> RAW photo, *subject*, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Example:</b> RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, long haircut, pale skin, slim body, background is city ruins, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 <b>Negative Prompt:</b> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck<br> <b>OR</b><br> (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation <b>Euler A or DPM++ 2M Karras with 25 steps<br> CFG Scale 3,5 - 7<br> Hires. fix with Latent upscaler<br> 0 Hires steps and Denoising strength 0.25-0.45<br> Upscale by 1.1-2.0</b>
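A minimal `diffusers` sketch following the card's recommendations (external VAE, DPM++ 2M Karras-style sampler, ~25 steps, CFG around 7); note that `stabilityai/sd-vae-ft-mse` is my assumption for the diffusers-format VAE — the card links the original-checkpoint variant:

```python
import torch
from diffusers import AutoencoderKL, DPMSolverMultistepScheduler, StableDiffusionPipeline

# The card links sd-vae-ft-mse-original; the diffusers-format repo below is an assumed equivalent.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "akshitapps/rampant-vampire-2.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M Karras, as suggested in the card (Euler A is the other recommended option).
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

prompt = (
    "RAW photo, a close up portrait photo of 26 y.o woman in wastelander clothes, "
    "(high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3"
)
negative = (
    "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, "
    "drawing, anime:1.4), text, worst quality, low quality"
)
image = pipe(prompt, negative_prompt=negative, num_inference_steps=25, guidance_scale=7.0).images[0]
image.save("sample.png")
```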
Aspik101/guanaco-7B-HF-pl-lora_GGML
Aspik101
2023-07-21T21:27:02Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-07-21T21:19:26Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
Pedrampd/NLP-HW5-NerTaggerModel
Pedrampd
2023-07-21T21:25:17Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-21T20:10:05Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: NLP-HW5-NerTaggerModel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLP-HW5-NerTaggerModel This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0218 - Accuracy: 0.9947 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1891 | 1.0 | 878 | 0.0342 | 0.9909 | | 0.0377 | 2.0 | 1756 | 0.0218 | 0.9947 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Aspik101/guanaco-7B-HF-pl-lora_adapter_model
Aspik101
2023-07-21T21:15:58Z
0
0
null
[ "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "region:us" ]
text-generation
2023-07-21T21:15:57Z
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
Pedrampd/NLP-HW5-PosTaggerModel
Pedrampd
2023-07-21T21:14:29Z
121
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-21T21:00:29Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: NLP-HW5-PosTaggerModel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLP-HW5-PosTaggerModel This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1278 - Accuracy: 0.9659 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7026 | 1.0 | 878 | 0.1925 | 0.9493 | | 0.1976 | 2.0 | 1756 | 0.1446 | 0.9610 | | 0.157 | 3.0 | 2634 | 0.1278 | 0.9659 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
digiplay/CounterMix_v1
digiplay
2023-07-21T21:08:10Z
281
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-25T17:48:10Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/70455?modelVersionId=75113 Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/3f49622f-6a41-4b99-b4d6-90f4d3e4abe3/width=1120/00003-2313874300.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/bd5a42ef-ae1d-4532-8c24-a8543b538bb6/width=1120/00131-2130441226.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6e9faa21-a271-4c5d-a835-274651eb4f46/width=1688/00006-3871142016.jpeg) Sample image I made : (You can use: close-up , realistic:2 to enhance image ) ![81c943b7-eb0f-49d4-a4cc-d9e5cb4fc851.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/6FKj_lNJS91WLeV80RFMJ.jpeg) ![02bc5c27-bd03-45cc-a73c-424f4b66f211.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/KCKnoyzTEXseMrQB_B2bh.jpeg)
Mel-Iza0/RedPajama-ZeroShot-20K-new_prompt_classe_nenhuma
Mel-Iza0
2023-07-21T21:07:19Z
2
0
peft
[ "peft", "pytorch", "gpt_neox", "region:us" ]
null
2023-07-21T18:40:27Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0
gokuls/hbertv2-wt-frz-48-Massive-intent
gokuls
2023-07-21T21:02:16Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz", "base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T20:53:10Z
--- base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz tags: - generated_from_trainer datasets: - massive metrics: - accuracy model-index: - name: hbertv2-wt-frz-48-Massive-intent results: - task: name: Text Classification type: text-classification dataset: name: massive type: massive config: en-US split: validation args: en-US metrics: - name: Accuracy type: accuracy value: 0.8701426463354648 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv2-wt-frz-48-Massive-intent This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.8549 - Accuracy: 0.8701 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6771 | 1.0 | 180 | 0.8299 | 0.7850 | | 0.7544 | 2.0 | 360 | 0.6817 | 0.8185 | | 0.5364 | 3.0 | 540 | 0.6402 | 0.8362 | | 0.4001 | 4.0 | 720 | 0.6371 | 0.8367 | | 0.2985 | 5.0 | 900 | 0.6864 | 0.8367 | | 0.2297 | 6.0 | 1080 | 0.6357 | 0.8485 | | 0.1633 | 7.0 | 1260 | 0.7224 | 0.8411 | | 0.1304 | 8.0 | 1440 | 0.7212 | 0.8593 | | 0.0859 | 9.0 | 1620 | 0.7789 | 0.8515 | | 0.0632 | 10.0 | 1800 | 0.8223 | 0.8588 | | 0.0447 | 11.0 | 1980 | 0.8011 | 0.8628 | | 0.0288 | 12.0 | 2160 | 0.8139 | 0.8692 | | 0.0188 | 13.0 | 2340 | 0.8859 | 0.8662 | | 0.0115 | 14.0 | 2520 | 0.8549 | 0.8701 | | 0.0067 | 15.0 | 2700 | 0.8622 | 0.8677 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
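This card and the similar `gokuls` MASSIVE cards that follow evaluate on the `en-US` validation split; a hedged sketch of loading that split for re-evaluation — the Hub id `AmazonScience/massive` is my assumption, since the cards only name the dataset `massive`:

```python
from datasets import load_dataset

# "AmazonScience/massive" is assumed; the card metadata only says "massive" with config "en-US".
massive = load_dataset("AmazonScience/massive", "en-US")
print(massive["validation"][0]["utt"], massive["validation"][0]["intent"])
```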
gokuls/hbertv2-emotion-48-emb-comp-gelu
gokuls
2023-07-21T20:50:46Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu", "base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T20:42:04Z
--- base_model: gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: hbertv2-emotion-48-emb-comp-gelu results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.8125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv2-emotion-48-emb-comp-gelu This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_emb_compress_48_gelu) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.6500 - Accuracy: 0.8125 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.583 | 1.0 | 250 | 1.5406 | 0.3875 | | 1.3735 | 2.0 | 500 | 1.2460 | 0.5365 | | 1.1696 | 3.0 | 750 | 1.1673 | 0.556 | | 1.0567 | 4.0 | 1000 | 1.0862 | 0.574 | | 0.8667 | 5.0 | 1250 | 0.8843 | 0.686 | | 0.6994 | 6.0 | 1500 | 0.8536 | 0.698 | | 0.5608 | 7.0 | 1750 | 0.7322 | 0.773 | | 0.4448 | 8.0 | 2000 | 0.6712 | 0.8045 | | 0.3793 | 9.0 | 2250 | 0.6298 | 0.8095 | | 0.335 | 10.0 | 2500 | 0.6500 | 0.8125 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
gokuls/hbertv1-mini-wt-frz-48-Massive-intent-emb-comp
gokuls
2023-07-21T20:28:38Z
54
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz", "base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T20:24:26Z
--- base_model: gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz tags: - generated_from_trainer datasets: - massive metrics: - accuracy model-index: - name: hbertv1-mini-wt-frz-48-Massive-intent-emb-comp results: - task: name: Text Classification type: text-classification dataset: name: massive type: massive config: en-US split: validation args: en-US metrics: - name: Accuracy type: accuracy value: 0.8288243974422036 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-mini-wt-frz-48-Massive-intent-emb-comp This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.7199 - Accuracy: 0.8288 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8181 | 1.0 | 180 | 1.7693 | 0.6085 | | 1.3856 | 2.0 | 360 | 1.0888 | 0.7211 | | 0.9216 | 3.0 | 540 | 0.8674 | 0.7782 | | 0.6977 | 4.0 | 720 | 0.7678 | 0.8028 | | 0.5417 | 5.0 | 900 | 0.7335 | 0.8106 | | 0.4402 | 6.0 | 1080 | 0.7076 | 0.8190 | | 0.3562 | 7.0 | 1260 | 0.6918 | 0.8244 | | 0.2937 | 8.0 | 1440 | 0.6998 | 0.8210 | | 0.2331 | 9.0 | 1620 | 0.7244 | 0.8205 | | 0.1925 | 10.0 | 1800 | 0.7199 | 0.8288 | | 0.1589 | 11.0 | 1980 | 0.7338 | 0.8278 | | 0.1321 | 12.0 | 2160 | 0.7561 | 0.8259 | | 0.1093 | 13.0 | 2340 | 0.7498 | 0.8278 | | 0.0937 | 14.0 | 2520 | 0.7579 | 0.8278 | | 0.0852 | 15.0 | 2700 | 0.7542 | 0.8288 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
gokuls/hbertv1-mini-wt-48-Massive-intent-emb-comp
gokuls
2023-07-21T20:13:55Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp", "base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T20:09:37Z
--- base_model: gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp tags: - generated_from_trainer datasets: - massive metrics: - accuracy model-index: - name: hbertv1-mini-wt-48-Massive-intent-emb-comp results: - task: name: Text Classification type: text-classification dataset: name: massive type: massive config: en-US split: validation args: en-US metrics: - name: Accuracy type: accuracy value: 0.8411214953271028 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-mini-wt-48-Massive-intent-emb-comp This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.7077 - Accuracy: 0.8411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8713 | 1.0 | 180 | 1.8255 | 0.5903 | | 1.4406 | 2.0 | 360 | 1.1089 | 0.7177 | | 0.9491 | 3.0 | 540 | 0.8839 | 0.7727 | | 0.7165 | 4.0 | 720 | 0.7622 | 0.8072 | | 0.5574 | 5.0 | 900 | 0.7180 | 0.8121 | | 0.4491 | 6.0 | 1080 | 0.7020 | 0.8224 | | 0.3617 | 7.0 | 1260 | 0.6915 | 0.8244 | | 0.291 | 8.0 | 1440 | 0.6727 | 0.8352 | | 0.2355 | 9.0 | 1620 | 0.6822 | 0.8362 | | 0.1915 | 10.0 | 1800 | 0.6960 | 0.8293 | | 0.1569 | 11.0 | 1980 | 0.7021 | 0.8367 | | 0.1296 | 12.0 | 2160 | 0.7077 | 0.8411 | | 0.1087 | 13.0 | 2340 | 0.7080 | 0.8406 | | 0.0931 | 14.0 | 2520 | 0.7152 | 0.8411 | | 0.0839 | 15.0 | 2700 | 0.7203 | 0.8401 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
gokuls/hbertv1-tiny-wt-48-Massive-intent-emb-comp
gokuls
2023-07-21T20:06:02Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:massive", "base_model:gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp", "base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T20:02:45Z
--- base_model: gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp tags: - generated_from_trainer datasets: - massive metrics: - accuracy model-index: - name: hbertv1-tiny-wt-48-Massive-intent-emb-comp results: - task: name: Text Classification type: text-classification dataset: name: massive type: massive config: en-US split: validation args: en-US metrics: - name: Accuracy type: accuracy value: 0.7899655681259223 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-tiny-wt-48-Massive-intent-emb-comp This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.8545 - Accuracy: 0.7900 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.6847 | 1.0 | 180 | 3.2207 | 0.2710 | | 2.7795 | 2.0 | 360 | 2.3154 | 0.4471 | | 2.0459 | 3.0 | 540 | 1.7680 | 0.5627 | | 1.5874 | 4.0 | 720 | 1.4363 | 0.6734 | | 1.2902 | 5.0 | 900 | 1.2306 | 0.7127 | | 1.0905 | 6.0 | 1080 | 1.1068 | 0.7373 | | 0.9468 | 7.0 | 1260 | 1.0113 | 0.7545 | | 0.844 | 8.0 | 1440 | 0.9661 | 0.7580 | | 0.7684 | 9.0 | 1620 | 0.9333 | 0.7649 | | 0.7086 | 10.0 | 1800 | 0.9018 | 0.7772 | | 0.6629 | 11.0 | 1980 | 0.8807 | 0.7831 | | 0.6244 | 12.0 | 2160 | 0.8747 | 0.7796 | | 0.5965 | 13.0 | 2340 | 0.8591 | 0.7875 | | 0.5731 | 14.0 | 2520 | 0.8634 | 0.7875 | | 0.5633 | 15.0 | 2700 | 0.8545 | 0.7900 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
Kristijan/gpt2_wt103_12-layer
Kristijan
2023-07-21T20:03:33Z
6
0
pytorch
[ "pytorch", "gpt2", "language-model", "transformer", "wikitext-103", "en", "arxiv:2210.13569", "model-index", "region:us" ]
null
2023-03-30T17:20:25Z
--- language: - en library_name: pytorch tags: - language-model - gpt2 - transformer - wikitext-103 model-index: - name: gpt2_wt103-40m_12-layer results: - task: type: language-modeling dataset: type: wikitext name: Wikitext-103 metrics: - type: perplexity value: 40.6 --- # Model description paper: [Characterizing Verbatim Short-Term Memory in Neural Language Models](https://arxiv.org/abs/2210.13569) This is a gpt2-small-like decoder-only transformer model trained on the [wikitext-103 dataset](https://paperswithcode.com/dataset/wikitext-103). # Usage You can download and load the model as follows: ```python from transformers import GPT2LMHeadModel model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103_12-layer") ``` Alternatively, if you've downloaded the checkpoint files in this repository, you could also do: ```python from transformers import GPT2LMHeadModel model = GPT2LMHeadModel.from_pretrained(path_to_folder_with_checkpoint_files) ``` ## BPE Tokenizer You should first pretokenize your text using the [MosesTokenizer](https://pypi.org/project/mosestokenizer/): ```python from mosestokenizer import MosesTokenizer with MosesTokenizer('en') as pretokenize: pretokenized_text = " ".join(pretokenize(text_string)) ``` Then, to BPE tokenize your text for this model, you should use the [tokenizer trained on Wikitext-103](https://huggingface.co/Kristijan/wikitext-103_tokenizer_v2): ```python from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer_v2") tokenized_text = tokenizer.tokenize(pretokenized_text) ``` # Intended uses This checkpoint is intended for research purposes, for example, by researchers interested in studying the behavior of transformer language models trained on smaller datasets.
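Given the card's research focus and the reported Wikitext-103 perplexity, here is a short scoring sketch — a minimal example of computing a sentence's perplexity with this checkpoint, not code from the paper:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103_12-layer")
# Tokenizer id copied from the card's own code snippet (its prose link spells it with an underscore).
tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer_v2")

# The string is assumed to be already Moses-pretokenized, as the card requires.
text = "the quick brown fox jumps over the lazy dog"
ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    loss = model(ids, labels=ids).loss  # average token-level negative log-likelihood
print(f"perplexity: {torch.exp(loss).item():.2f}")
```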
swaubhik/LoRA-simple
swaubhik
2023-07-21T20:02:44Z
5
0
peft
[ "peft", "product", "LoRA", "region:us" ]
null
2023-07-21T18:58:02Z
--- library_name: peft tags: - peft - product - LoRA --- ## Training procedure The following training arguments were used during training: - per_device_train_batch_size: 4 - gradient_accumulation_steps: 4 - warmup_steps: 100 - max_steps: 100 - learning_rate: 1e-3 - fp16: True - logging_steps: 1 - output_dir: outputs ### Framework versions - PEFT 0.5.0.dev0
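A sketch of how those values plug into `transformers.TrainingArguments` — an illustration of the listed settings, not the author's actual training script:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in the card; everything else is left at defaults.
training_args = TrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    warmup_steps=100,
    max_steps=100,
    learning_rate=1e-3,
    fp16=True,
    logging_steps=1,
)
```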
prompiu/mmtl
prompiu
2023-07-21T20:00:43Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-20T03:30:50Z
--- license: creativeml-openrail-m ---
Tasaloris13/finetuned-college-50-new
Tasaloris13
2023-07-21T19:55:29Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-19T21:32:52Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0.dev0
josh-salako/ai_generated_image_detector
josh-salako
2023-07-21T19:27:04Z
0
0
keras
[ "keras", "tf-keras", "dataset:competitions/aiornot", "region:us" ]
null
2023-03-13T19:22:12Z
--- library_name: keras datasets: - competitions/aiornot metrics: - accuracy --- ## Model description A model that detects AI-generated images ## Intended uses & limitations Intended for use cases where real images are required rather than AI-generated ones. However, the model cannot reliably distinguish an AI-generated image when it closely resembles a real image. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 0.0010000000474974513 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 | ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
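The card gives no loading example; a minimal sketch, assuming the repository was pushed with the standard Keras-on-the-Hub utilities (the expected image preprocessing is not documented, so inspect the model before feeding data):

```python
from huggingface_hub import from_pretrained_keras

# Assumes the repo follows the standard Keras-on-the-Hub layout (push_to_hub_keras / save_pretrained_keras).
model = from_pretrained_keras("josh-salako/ai_generated_image_detector")
model.summary()

# The card does not document input size or scaling, so check the expected shape first.
print(model.input_shape)
```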
Naruke/taxi-v3
Naruke
2023-07-21T19:12:38Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-21T19:12:35Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Naruke/taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
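The usage snippet above assumes `gym` is imported and that `load_from_hub` is available; that helper is defined in the Deep RL course notebooks rather than in a published package, so here is one possible implementation, offered only as a sketch:

```python
import pickle

import gymnasium as gym  # the original course notebooks used the classic `gym` package
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-table dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Naruke/taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```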
PiXeL99/llama7b-lora-telecom-50steps-5epochs
PiXeL99
2023-07-21T18:57:33Z
0
1
peft
[ "peft", "region:us" ]
null
2023-07-21T18:57:28Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.5.0.dev0
ailabturkiye/semicenkroportaj
ailabturkiye
2023-07-21T18:57:19Z
0
0
null
[ "region:us" ]
null
2023-07-21T18:54:42Z
[![Our Discord Server](https://img.shields.io/badge/Discord.gg%2F-AiLab-ailab )](discord.gg/ailab) ![Static Badge](https://img.shields.io/badge/AI%20LAB%20Hugging%20Face%20Organization-sa?style=plastic&labelColor=blue&color=blue) ![Static Badge](https://img.shields.io/badge/Yap%C4%B1mc%C4%B1%20Bilgisi%20Verilmeden%20Payla%C5%9F%C4%B1lmas%C4%B1%20Yasakt%C4%B1r!-s?style=plastic&labelColor=orange&color=red) # Semicenk (Singer) - RVC V2 300 Epoch **This voice model was built from interview excerpts of the singer Semicenk; it does not represent his singing voice. Trained with RVC V2 | a 6-minute dataset | 300 epochs.** _The dataset and training were made by me._ __Sharing this model outside the [Ai Lab Discord](discord.gg/ailab) server without permission is strictly forbidden; the model is under the openrail license.__ ## Credits **Please give credits when sharing a cover made with this model on any platform.** - Discord: hydragee - YouTube: CoverLai (https://www.youtube.com/@coverlai) ![Static Badge](https://img.shields.io/badge/Yap%C4%B1mc%C4%B1%20Bilgisi%20Verilmeden%20Payla%C5%9F%C4%B1lmas%C4%B1%20Yasakt%C4%B1r!-s?style=plastic&labelColor=orange&color=red) [![Our Discord Server](https://img.shields.io/badge/Discord.gg%2F-AiLab-ailab )](discord.gg/ailab) ![Static Badge](https://img.shields.io/badge/AI%20LAB%20Hugging%20Face%20Organization-sa?style=plastic&labelColor=blue&color=blue)
gokuls/hbertv1-mini-wt-frz-48-emotion-emb-comp
gokuls
2023-07-21T18:56:15Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz", "base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T18:52:44Z
--- base_model: gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: hbertv1-mini-wt-frz-48-emotion-emb-comp results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.883 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-mini-wt-frz-48-emotion-emb-comp This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_mini_emb_comp_frz) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3850 - Accuracy: 0.883 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0654 | 1.0 | 250 | 0.5938 | 0.8075 | | 0.4743 | 2.0 | 500 | 0.4150 | 0.8575 | | 0.324 | 3.0 | 750 | 0.3850 | 0.883 | | 0.2558 | 4.0 | 1000 | 0.3847 | 0.8755 | | 0.2103 | 5.0 | 1250 | 0.3978 | 0.87 | | 0.1763 | 6.0 | 1500 | 0.3857 | 0.874 | | 0.1454 | 7.0 | 1750 | 0.3880 | 0.879 | | 0.1205 | 8.0 | 2000 | 0.4153 | 0.88 | | 0.0995 | 9.0 | 2250 | 0.4228 | 0.8765 | | 0.0828 | 10.0 | 2500 | 0.4313 | 0.878 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
jaygdesai/Reinforce-Jay-cartpole
jaygdesai
2023-07-21T18:53:27Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-21T18:12:34Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Jay-cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 482.50 +/- 52.50 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dionatandiego11/llama2-qlora-finetunined-french
dionatandiego11
2023-07-21T18:51:04Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-21T18:43:01Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0.dev0
gokuls/hbertv1-emotion-48-emb-comp-gelu
gokuls
2023-07-21T18:50:51Z
48
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu", "base_model:finetune:gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T18:42:08Z
--- base_model: gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: hbertv1-emotion-48-emb-comp-gelu results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.712 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-emotion-48-emb-comp-gelu This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_emb_compress_48_gelu) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.8229 - Accuracy: 0.712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6061 | 1.0 | 250 | 1.5996 | 0.275 | | 1.5562 | 2.0 | 500 | 1.6098 | 0.3825 | | 1.3818 | 3.0 | 750 | 1.4045 | 0.483 | | 1.2359 | 4.0 | 1000 | 1.2408 | 0.552 | | 1.1273 | 5.0 | 1250 | 1.1605 | 0.5615 | | 1.0649 | 6.0 | 1500 | 1.1790 | 0.568 | | 1.007 | 7.0 | 1750 | 1.0494 | 0.575 | | 0.9101 | 8.0 | 2000 | 0.9741 | 0.63 | | 0.78 | 9.0 | 2250 | 0.8593 | 0.6915 | | 0.682 | 10.0 | 2500 | 0.8229 | 0.712 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
ittailup/lallama-2-13b-chat-v2
ittailup
2023-07-21T18:47:32Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-21T18:30:25Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
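A hedged sketch of applying this adapter with the 4-bit settings listed above; the base checkpoint is not stated in the card, so the Llama-2-13b-chat id below is only an assumption inferred from the repository name:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the 4-bit nf4 / double-quant / bfloat16 flags listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The base model id is an assumption, not stated in the card.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ittailup/lallama-2-13b-chat-v2")
```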
NasimB/guten-rarity-neg-log-rarity-no-cut
NasimB
2023-07-21T18:40:43Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-21T15:16:05Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten-rarity-neg-log-rarity-no-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten-rarity-neg-log-rarity-no-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.1048 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.3421 | 0.29 | 500 | 5.3363 | | 5.0357 | 0.58 | 1000 | 4.9250 | | 4.7084 | 0.87 | 1500 | 4.6857 | | 4.4492 | 1.16 | 2000 | 4.5455 | | 4.2984 | 1.46 | 2500 | 4.4301 | | 4.1972 | 1.75 | 3000 | 4.3258 | | 4.0832 | 2.04 | 3500 | 4.2503 | | 3.8934 | 2.33 | 4000 | 4.2116 | | 3.8607 | 2.62 | 4500 | 4.1533 | | 3.8323 | 2.91 | 5000 | 4.1090 | | 3.6419 | 3.2 | 5500 | 4.0989 | | 3.5834 | 3.49 | 6000 | 4.0699 | | 3.5762 | 3.79 | 6500 | 4.0398 | | 3.4864 | 4.08 | 7000 | 4.0350 | | 3.3174 | 4.37 | 7500 | 4.0295 | | 3.3153 | 4.66 | 8000 | 4.0165 | | 3.304 | 4.95 | 8500 | 4.0047 | | 3.1667 | 5.24 | 9000 | 4.0159 | | 3.1375 | 5.53 | 9500 | 4.0149 | | 3.1343 | 5.82 | 10000 | 4.0139 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
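The card above has no usage section; a minimal generation sketch, assuming the fine-tuned checkpoint behaves like a standard GPT-2 causal LM (the prompt is invented):

```python
from transformers import pipeline

# Assumes the fine-tuned GPT-2 checkpoint works with the standard text-generation pipeline.
generator = pipeline("text-generation", model="NasimB/guten-rarity-neg-log-rarity-no-cut")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```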
gokuls/hbertv1-tiny-wt-48-emotion-emb-comp
gokuls
2023-07-21T18:35:03Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "hybridbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp", "base_model:finetune:gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-21T18:32:26Z
--- base_model: gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp tags: - generated_from_trainer datasets: - emotion metrics: - accuracy model-index: - name: hbertv1-tiny-wt-48-emotion-emb-comp results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.8885 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hbertv1-tiny-wt-48-emotion-emb-comp This model is a fine-tuned version of [gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp](https://huggingface.co/gokuls/model_v1_complete_training_wt_init_48_tiny_emb_comp) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.3216 - Accuracy: 0.8885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 33 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3139 | 1.0 | 250 | 0.8529 | 0.727 | | 0.6339 | 2.0 | 500 | 0.4745 | 0.848 | | 0.4011 | 3.0 | 750 | 0.3605 | 0.875 | | 0.2998 | 4.0 | 1000 | 0.3326 | 0.885 | | 0.25 | 5.0 | 1250 | 0.3346 | 0.8815 | | 0.2177 | 6.0 | 1500 | 0.3216 | 0.8885 | | 0.1928 | 7.0 | 1750 | 0.3214 | 0.8885 | | 0.1747 | 8.0 | 2000 | 0.3178 | 0.8875 | | 0.1581 | 9.0 | 2250 | 0.3291 | 0.885 | | 0.1404 | 10.0 | 2500 | 0.3260 | 0.887 | ### Framework versions - Transformers 4.31.0 - Pytorch 1.14.0a0+410ce96 - Datasets 2.13.1 - Tokenizers 0.13.3
jaimevera1107/all-MiniLM-L6-v2-similarity-es
jaimevera1107
2023-07-21T18:26:31Z
4970
3
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "es", "dataset:jaimevera1107/similarity-sentences-spanish", "license:mit", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-21T17:15:03Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers license: mit datasets: - jaimevera1107/similarity-sentences-spanish language: - es library_name: sentence-transformers --- # All-MiniLM-L6-v2 Fine Tuned - Sentence Transformers - Embedding Model (Spanish-Español) This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Esta es una frase para ser comparada", "Esta es otra oración"] model = SentenceTransformer('jaimevera1107/roberta-similarity-es') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Esta es una frase para ser comparada", "Esta es otra oración"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('jaimevera1107/roberta-similarity-es') model = AutoModel.from_pretrained('jaimevera1107/roberta-similarity-es') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results | Model | R squared | Spearman Correlation | |----------------------------|--------------|-------------------------| | Roberta Fine tuned | 70.67 % | 80.1 % | ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 767 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` The data used was the one in the [Similarity Sentences Spanish Dataset](https://huggingface.co/datasets/jaimevera1107/similarity-sentences-spanish) **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 383, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
tyzp-INC/bench1-paraphrase-multilingual-MiniLM-L12-v2
tyzp-INC
2023-07-21T18:25:57Z
5
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-21T18:25:32Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # tyzp-INC/bench1-paraphrase-multilingual-MiniLM-L12-v2 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("tyzp-INC/bench1-paraphrase-multilingual-MiniLM-L12-v2") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```