| Column | dtype | Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-27 12:29:05 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 500 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-27 12:27:55 |
| card | string | length 11 to 1.01M |
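The records below follow this column layout, one model per record: modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card. As a minimal sketch of how a dump with this schema could be loaded and queried with the `datasets` library; the repo id used here is a placeholder, not the actual source of this listing:

```python
# Minimal sketch, assuming the listing comes from a Hub dataset with the schema above.
# "your-org/hub-model-metadata" is a placeholder repo id, not the real source.
from datasets import load_dataset

ds = load_dataset("your-org/hub-model-metadata", split="train")

# Mirror the columns shown above: keep one pipeline tag and rank by downloads.
text_gen = ds.filter(lambda row: row["pipeline_tag"] == "text-generation")
top = sorted(text_gen, key=lambda row: row["downloads"], reverse=True)[:5]
for row in top:
    print(row["modelId"], row["downloads"], row["likes"])
```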
ShekDass/donut-base-smartenroll-v1
ShekDass
2023-07-16T12:58:34Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-07-16T12:13:51Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-smartenroll-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-smartenroll-v1 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
WALIDALI/lyrieldiff
WALIDALI
2023-07-16T12:55:45Z
2
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-16T12:50:56Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### LyrielDiff Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
asedmammad/Vicuna-7B-vanilla-1.1-GGML
asedmammad
2023-07-16T12:50:47Z
0
1
null
[ "llama", "vicuna", "text-generation-inference", "region:us" ]
null
2023-07-16T09:47:34Z
--- inference: false tags: - llama - vicuna - text-generation-inference --- # Ejafa's Vicuna Vanilla 1.1 7B GGML These files are GGML format model files for [Ejafa's Vicuna Vanilla 1.1 7B](https://huggingface.co/Ejafa/vicuna_7B_vanilla_1.1). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 8 -ngl 32 -m vicuna_7B_vanilla_1.1.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "prompt goes here" ``` Change `-t 8` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. ## Compatibility I have uploaded both the original llama.cpp quant methods (`q4_0, q4_1, q5_0, q5_1, q8_0`) as well as the new k-quant methods (`q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`). Please refer to [llama.cpp](https://github.com/ggerganov/llama.cpp) and [TheBloke](https://huggingface.co/TheBloke)'s GGML models for further explanation. ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Thanks Thanks to [TheBloke](https://huggingface.co/TheBloke) for inspiration and providing almost all of the readme here! Thanks to [Ejafa](https://huggingface.co/Ejafa) for providing checkpoints of the model. Thanks to [Georgi Gerganov](https://github.com/ggerganov) and all of the awesome people in the AI community.
larry-jiang/RL
larry-jiang
2023-07-16T12:48:55Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T12:47:54Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.32 +/- 20.65 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
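The stable-baselines3 cards in this listing (such as larry-jiang/RL above) leave the usage snippet as a TODO. A hedged sketch of how such a checkpoint is typically pulled from the Hub and evaluated follows; the filename `ppo-LunarLander-v2.zip` is an assumption, since the listing does not show the repo's files:

```python
# Hedged completion of the "Usage (with Stable-baselines3)" TODO in the card above.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; the filename is an assumption about the repo layout.
checkpoint = load_from_hub(
    repo_id="larry-jiang/RL",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the loaded policy.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```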
headflame02/AchaxV5
headflame02
2023-07-16T12:37:19Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-16T12:37:16Z
--- license: creativeml-openrail-m ---
vuvuongvi/vivu_marketingAI_fourthbrain
vuvuongvi
2023-07-16T12:29:23Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-16T12:28:39Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
Rihong/ppo-LunarLander-v2
Rihong
2023-07-16T12:20:44Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T12:19:16Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 272.93 +/- 18.31 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ALM-AHME/swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20
ALM-AHME
2023-07-16T12:15:04Z
199
1
transformers
[ "transformers", "pytorch", "tensorboard", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-16T09:38:10Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: Splitted-Resized split: train args: Splitted-Resized metrics: - name: Accuracy type: accuracy value: 0.9943422913719944 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-large-patch4-window12to16-192to256-22kto1k-ft-finetuned-BreastCancer-BreakHis-AH-60-20-20 This model is a fine-tuned version of [microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft](https://huggingface.co/microsoft/swinv2-large-patch4-window12to16-192to256-22kto1k-ft) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0229 - Accuracy: 0.9943 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.5 - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2053 | 1.0 | 199 | 0.1227 | 0.9496 | | 0.1302 | 2.0 | 398 | 0.0665 | 0.9736 | | 0.0784 | 3.0 | 597 | 0.0600 | 0.9778 | | 0.1181 | 4.0 | 796 | 0.0449 | 0.9849 | | 0.208 | 5.0 | 995 | 0.0393 | 0.9887 | | 0.0057 | 6.0 | 1194 | 0.0229 | 0.9943 | | 0.0017 | 7.0 | 1393 | 0.0263 | 0.9939 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
joserodr68/Reinforce-cartpole
joserodr68
2023-07-16T12:12:28Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T12:11:19Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
sjdata/speecht5_finetuned_single_speaker_en_test_librivox
sjdata
2023-07-16T12:09:19Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "speecht5", "text-to-audio", "generated_from_trainer", "en", "dataset:speecht5_finetuned_single_speaker_en_test_librivox", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2023-07-13T12:31:39Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - speecht5_finetuned_single_speaker_en_test_librivox model-index: - name: SpeechT5 Single Speaker test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 Single Speaker test This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the single_speaker_en_test_librivox dataset. It achieves the following results on the evaluation set: - Loss: 0.4215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4809 | 1.78 | 1000 | 0.4215 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
indiaLLMs/dolly-llama-3b
indiaLLMs
2023-07-16T11:42:56Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-16T11:42:19Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
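The PEFT cards in this listing only record the `bitsandbytes` settings used during training. As a hedged sketch, the 4-bit config listed for indiaLLMs/dolly-llama-3b maps onto `transformers.BitsAndBytesConfig` roughly as follows; the base checkpoint id is a placeholder because the card does not name it:

```python
# Hedged sketch: recreating the 4-bit bitsandbytes config listed above and
# attaching the PEFT adapter. The base model id below is a placeholder; the
# card does not state which base checkpoint the adapter was trained on.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = "base-model-id-goes-here"  # placeholder, not stated in the card
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter weights from the repo in the listing above.
model = PeftModel.from_pretrained(model, "indiaLLMs/dolly-llama-3b")
```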
dhinman/ppo-Huggy
dhinman
2023-07-16T11:25:59Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-16T11:25:07Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: dhinman/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
NasimB/all-base-rarity-all-children-rarity-all-iorder-est-5p5k-mostf
NasimB
2023-07-16T11:21:16Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-16T09:32:37Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: all-base-rarity-all-children-rarity-all-iorder-est-5p5k-mostf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-base-rarity-all-children-rarity-all-iorder-est-5p5k-mostf This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.7525 | 0.31 | 500 | 5.6457 | | 5.4141 | 0.63 | 1000 | 5.2112 | | 5.0523 | 0.94 | 1500 | 4.9580 | | 4.7674 | 1.25 | 2000 | 4.8174 | | 4.6213 | 1.56 | 2500 | 4.6915 | | 4.5132 | 1.88 | 3000 | 4.5796 | | 4.3109 | 2.19 | 3500 | 4.5205 | | 4.2115 | 2.5 | 4000 | 4.4590 | | 4.1668 | 2.82 | 4500 | 4.3952 | | 4.0277 | 3.13 | 5000 | 4.3712 | | 3.8841 | 3.44 | 5500 | 4.3431 | | 3.8738 | 3.75 | 6000 | 4.3064 | | 3.7942 | 4.07 | 6500 | 4.2923 | | 3.5972 | 4.38 | 7000 | 4.2869 | | 3.5903 | 4.69 | 7500 | 4.2730 | | 3.5681 | 5.01 | 8000 | 4.2585 | | 3.3989 | 5.32 | 8500 | 4.2700 | | 3.3939 | 5.63 | 9000 | 4.2694 | | 3.3913 | 5.94 | 9500 | 4.2686 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
chrishoertnagl/dolly-v2-3b-chris
chrishoertnagl
2023-07-16T11:20:19Z
4
0
peft
[ "peft", "region:us" ]
null
2023-07-15T10:45:38Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1
hafidikhsan
2023-07-16T11:14:40Z
103
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-16T11:12:03Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-english-pronunciation-evaluation-lr-v1 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4017 - Accuracy: 0.25 - F1: 0.1 - Precision: 0.0625 - Recall: 0.25 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:---------:|:------:| | 1.3826 | 1.0 | 500 | 1.4017 | 0.25 | 0.1 | 0.0625 | 0.25 | | 1.4074 | 2.0 | 1000 | 1.3922 | 0.25 | 0.1 | 0.0625 | 0.25 | | 1.3984 | 3.0 | 1500 | 1.3868 | 0.25 | 0.1 | 0.0625 | 0.25 | | 1.387 | 4.0 | 2000 | 1.3863 | 0.25 | 0.1 | 0.0625 | 0.25 | | 1.3861 | 5.0 | 2500 | 1.3863 | 0.25 | 0.1 | 0.0625 | 0.25 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
gfs0508/AIron-Trans-PT2EN
gfs0508
2023-07-16T11:10:39Z
0
1
keras
[ "keras", "translation", "pt", "en", "license:mit", "region:us" ]
translation
2023-07-16T11:02:01Z
--- license: mit language: - pt - en library_name: keras pipeline_tag: translation --- # AIron-Trans-PT2EN ## License - MIT ## Overview AIron-Trans-PT2EN is a Portuguese to English translation model developed using the Keras library. ## Description AIron-Trans-PT2EN is a translation model that allows you to translate phrases and texts from Portuguese to English. It has been trained using the Long Short-Term Memory (LSTM) neural network architecture and implemented using the Keras library. ## Features - Translation from Portuguese to English - Model trained using the Keras library - LSTM architecture for better contextual understanding - Text preprocessing for improved translation quality ## Usage You can use this translation model in your own projects by following the instructions below: 1. Install the necessary dependencies (Keras, TensorFlow, etc.). 2. Load the trained model using the `load_model()` function from Keras. 3. Preprocess input sentences using the same preprocessing steps used during training. 4. Call the `translate_sentence()` function to get the translation of the input sentence. Code example: ```python from tensorflow import keras # Load the model model = keras.models.load_model('path/to/model.h5') # Preprocess the input sentence preprocessed_sentence = preprocess_sentence('Olá, como vai?') # Translate the sentence translated_sentence = translate_sentence(preprocessed_sentence, model) print(translated_sentence) ``` ## Contribution If you encounter any issues, have ideas for improvements, or would like to contribute to this project, feel free to open an issue or submit a pull request. We welcome contributions! ## Acknowledgments We would like to thank all contributors who helped develop and improve this translation model.
sagarsdesai/ppo-Huggy
sagarsdesai
2023-07-16T11:10:05Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-07-16T11:09:59Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: sagarsdesai/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
vlkn/falcon_instruct_6
vlkn
2023-07-16T10:57:41Z
0
0
null
[ "tensorboard", "generated_from_trainer", "region:us" ]
null
2023-07-16T10:50:11Z
--- tags: - generated_from_trainer model-index: - name: falcon_instruct_6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon_instruct_6 This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 30 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
naot97/bloom1b1-zalo-test
naot97
2023-07-16T10:57:03Z
0
0
peft
[ "peft", "endpoints_compatible", "region:us" ]
null
2023-07-13T14:27:53Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
caqlayan/falcon-7b-prompt
caqlayan
2023-07-16T09:51:16Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-16T09:31:48Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
neurae/electra-dnd-intents
neurae
2023-07-16T09:39:47Z
104
0
transformers
[ "transformers", "pytorch", "electra", "text-classification", "en", "dataset:neurae/dnd_style_intents", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-16T12:44:45Z
--- datasets: - neurae/dnd_style_intents language: - en pipeline_tag: text-classification license: apache-2.0 metrics: - accuracy - f1 --- This is ELECTRA base fine-tuned on the dnd-style-intents dataset with a tuned learning rate, learning-rate scheduler, and weight decay. | parameter | value | |---------------|----------------------| | learning rate | 6.6e-5 | | lr scheduler | cosine with restarts | | weight decay | 0 | The model achieves the following metrics on the test data from the dataset: | metric | value | |----------|-------| | accuracy | 0.978 | | Macro F1 | 0.976 | | Micro F1 | 0.978 |
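The intent-classification cards in this listing do not include a usage snippet. A minimal, hedged sketch with the standard `transformers` pipeline and the model id from the row above; the example sentence and the label names it returns are assumptions, not taken from the card:

```python
# Hedged usage sketch for neurae/electra-dnd-intents (not part of the original card).
from transformers import pipeline

classifier = pipeline("text-classification", model="neurae/electra-dnd-intents")

# Example D&D-style player utterance; label names depend on the checkpoint's config.
print(classifier("I sneak behind the goblin and try to pick its pocket."))
```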
crcdng/q-Taxi-v3-r2
crcdng
2023-07-16T09:38:41Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-15T23:41:09Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3-r2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="crcdng/q-Taxi-v3-r2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
neurae/roberta-dnd-intents
neurae
2023-07-16T09:33:44Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "dataset:neurae/dnd_style_intents", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-16T09:18:00Z
--- datasets: - neurae/dnd_style_intents language: - en pipeline_tag: text-classification license: apache-2.0 metrics: - accuracy - f1 --- This is RoBERTa base fine-tuned on the dnd-style-intents dataset with a tuned learning rate, learning-rate scheduler, and weight decay. | parameter | value | |---------------|----------| | learning rate | 5e-5 | | lr scheduler | linear | | weight decay | 0 | The model achieves the following metrics on the test data from the dataset: | metric | value | |----------|-------| | accuracy | 0.985 | | Macro F1 | 0.985 | | Micro F1 | 0.985 |
NasimB/guten-rarity-all-end-19k-ctx-512-finegrained-eval
NasimB
2023-07-16T09:07:24Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-16T07:08:53Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten-rarity-all-end-19k-ctx-512-finegrained-eval results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten-rarity-all-end-19k-ctx-512-finegrained-eval This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.2215 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.9362 | 0.24 | 100 | 7.3168 | | 6.5524 | 0.48 | 200 | 6.1279 | | 5.9236 | 0.71 | 300 | 5.7874 | | 5.6556 | 0.95 | 400 | 5.5952 | | 5.4733 | 1.19 | 500 | 5.4416 | | 5.2958 | 1.43 | 600 | 5.2824 | | 5.1307 | 1.66 | 700 | 5.1223 | | 4.9829 | 1.9 | 800 | 4.9860 | | 4.8024 | 2.14 | 900 | 4.8963 | | 4.6927 | 2.38 | 1000 | 4.7992 | | 4.6095 | 2.61 | 1100 | 4.6988 | | 4.516 | 2.85 | 1200 | 4.6015 | | 4.3713 | 3.09 | 1300 | 4.5147 | | 4.2277 | 3.33 | 1400 | 4.4417 | | 4.1862 | 3.56 | 1500 | 4.3820 | | 4.1371 | 3.8 | 1600 | 4.3342 | | 4.059 | 4.04 | 1700 | 4.2893 | | 3.8884 | 4.28 | 1800 | 4.2612 | | 3.8665 | 4.51 | 1900 | 4.2299 | | 3.8437 | 4.75 | 2000 | 4.1981 | | 3.815 | 4.99 | 2100 | 4.1766 | | 3.6574 | 5.23 | 2200 | 4.1724 | | 3.6435 | 5.46 | 2300 | 4.1629 | | 3.6348 | 5.7 | 2400 | 4.1584 | | 3.6424 | 5.94 | 2500 | 4.1557 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
rakaaa/tree-lora
rakaaa
2023-07-16T09:01:55Z
1
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-07-16T07:30:55Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - rakaaa/tree-lora These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
SADAF-IMAMU/train
SADAF-IMAMU
2023-07-16T08:54:59Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-05-25T09:54:23Z
--- tags: - generated_from_trainer metrics: - precision - recall - accuracy model-index: - name: train results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9948 - Macro F1: 0.7856 - Precision: 0.7820 - Recall: 0.7956 - Kappa: 0.6940 - Accuracy: 0.7956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 128 - seed: 25 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Macro F1 | Precision | Recall | Kappa | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 101 | 1.1562 | 0.6031 | 0.5561 | 0.7044 | 0.4967 | 0.7044 | | No log | 2.0 | 203 | 0.9119 | 0.7151 | 0.7107 | 0.7672 | 0.6236 | 0.7672 | | No log | 3.0 | 304 | 0.8493 | 0.7280 | 0.7139 | 0.7734 | 0.6381 | 0.7734 | | No log | 4.0 | 406 | 0.8087 | 0.7455 | 0.7632 | 0.7648 | 0.6421 | 0.7648 | | 0.9431 | 5.0 | 507 | 0.7735 | 0.7779 | 0.7741 | 0.7931 | 0.6858 | 0.7931 | | 0.9431 | 6.0 | 609 | 0.8201 | 0.7753 | 0.7735 | 0.7869 | 0.6797 | 0.7869 | | 0.9431 | 7.0 | 710 | 0.8564 | 0.7886 | 0.7883 | 0.8017 | 0.7004 | 0.8017 | | 0.9431 | 8.0 | 812 | 0.8712 | 0.7799 | 0.7754 | 0.7894 | 0.6854 | 0.7894 | | 0.9431 | 9.0 | 913 | 0.9142 | 0.7775 | 0.7751 | 0.7869 | 0.6811 | 0.7869 | | 0.2851 | 10.0 | 1015 | 0.9007 | 0.7820 | 0.7764 | 0.7943 | 0.6913 | 0.7943 | | 0.2851 | 11.0 | 1116 | 0.9425 | 0.7859 | 0.7825 | 0.7956 | 0.6940 | 0.7956 | | 0.2851 | 12.0 | 1218 | 0.9798 | 0.7815 | 0.7797 | 0.7906 | 0.6869 | 0.7906 | | 0.2851 | 13.0 | 1319 | 0.9895 | 0.7895 | 0.7860 | 0.7993 | 0.7003 | 0.7993 | | 0.2851 | 14.0 | 1421 | 0.9872 | 0.7854 | 0.7813 | 0.7943 | 0.6935 | 0.7943 | | 0.1273 | 14.93 | 1515 | 0.9948 | 0.7856 | 0.7820 | 0.7956 | 0.6940 | 0.7956 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Tokenizers 0.13.3
BaleChen/REINFORCE-pixelcopter-test
BaleChen
2023-07-16T08:51:54Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T07:58:26Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: REINFORCE-pixelcopter-test results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 35.80 +/- 26.25 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
NasimB/cbt-log-rarity-no-cut
NasimB
2023-07-16T08:49:47Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-16T06:49:25Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: cbt-log-rarity-no-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cbt-log-rarity-no-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3118 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6934 | 0.29 | 500 | 5.6310 | | 5.3312 | 0.58 | 1000 | 5.1981 | | 4.9883 | 0.87 | 1500 | 4.9500 | | 4.7132 | 1.16 | 2000 | 4.7969 | | 4.5508 | 1.46 | 2500 | 4.6748 | | 4.4477 | 1.75 | 3000 | 4.5709 | | 4.3214 | 2.04 | 3500 | 4.4910 | | 4.1241 | 2.33 | 4000 | 4.4484 | | 4.0945 | 2.62 | 4500 | 4.3895 | | 4.0594 | 2.91 | 5000 | 4.3351 | | 3.859 | 3.2 | 5500 | 4.3306 | | 3.7902 | 3.49 | 6000 | 4.3011 | | 3.7783 | 3.79 | 6500 | 4.2646 | | 3.6948 | 4.08 | 7000 | 4.2644 | | 3.5134 | 4.37 | 7500 | 4.2584 | | 3.5019 | 4.66 | 8000 | 4.2430 | | 3.4878 | 4.95 | 8500 | 4.2312 | | 3.3395 | 5.24 | 9000 | 4.2428 | | 3.313 | 5.53 | 9500 | 4.2422 | | 3.3111 | 5.82 | 10000 | 4.2412 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
manmyung/a2c-PandaReachDense-v2
manmyung
2023-07-16T08:43:12Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T08:40:02Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -0.88 +/- 0.21 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
openlm-research/open_llama_3b_v2_easylm
openlm-research
2023-07-16T08:32:50Z
0
4
null
[ "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2302.13971", "license:apache-2.0", "region:us" ]
null
2023-07-16T00:40:05Z
--- license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T --- # OpenLLaMA: An Open Reproduction of LLaMA **TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations. In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 models are better than the old v1 models, which were trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details. ## Weights Release, License and Usage We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license. ### Loading the Weights with Hugging Face Transformers Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage. ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM ## v2 models model_path = 'openlm-research/open_llama_3b_v2' # model_path = 'openlm-research/open_llama_7b_v2' ## v1 models # model_path = 'openlm-research/open_llama_3b' # model_path = 'openlm-research/open_llama_7b' # model_path = 'openlm-research/open_llama_13b' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) prompt = 'Q: What is the largest animal?\nA:' input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=32 ) print(tokenizer.decode(generation_output[0])) ``` For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama). ### Evaluating with LM-Eval-Harness The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results.
This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below: ```python tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained( pretrained if tokenizer is None else tokenizer, revision=revision + ("/" + subfolder if subfolder is not None else ""), use_fast=False ) ``` ### Loading the Weights with EasyLM For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. ## Dataset and Training The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA. We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX-based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model. ## Evaluation We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/). The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B | | ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- | | anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 | | anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 | | anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 | | arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 | | arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 | | arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 | | arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 | | boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 | | hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 | | hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 | | openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 | | openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 | | piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 | | piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 | | record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 | | record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 | | rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 | | truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 | | truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 | | wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 | | winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 | | Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 | We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set. ## Contact We would love to get feedback from the community. If you have any questions, please open an issue or contact us. OpenLLaMA is developed by: [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research. *Equal Contribution ## Acknowledgment We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to specially thank Jonathan Caton from TPU Research Cloud for helping us organizing compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimizing our training throughput. We’d also want to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback. The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for the coordinating the logistics and providing engineering support. 
## Reference If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX: ``` @software{openlm2023openllama, author = {Geng, Xinyang and Liu, Hao}, title = {OpenLLaMA: An Open Reproduction of LLaMA}, month = May, year = 2023, url = {https://github.com/openlm-research/open_llama} } ``` ``` @software{together2023redpajama, author = {Together Computer}, title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset}, month = April, year = 2023, url = {https://github.com/togethercomputer/RedPajama-Data} } ``` ``` @article{touvron2023llama, title={Llama: Open and efficient foundation language models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
wuru330/results
wuru330
2023-07-16T08:27:12Z
22
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-09T16:24:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8129 - Accuracy: 0.5969 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9759 | 1.0 | 37 | 0.9392 | 0.5408 | | 0.8313 | 2.0 | 74 | 0.8845 | 0.6122 | | 0.8032 | 3.0 | 111 | 0.8459 | 0.6122 | | 0.7375 | 4.0 | 148 | 0.8693 | 0.5782 | | 0.635 | 5.0 | 185 | 0.8724 | 0.6344 | | 0.578 | 6.0 | 222 | 0.9932 | 0.5629 | | 0.3875 | 7.0 | 259 | 1.0738 | 0.5952 | | 0.3544 | 8.0 | 296 | 1.1359 | 0.6156 | | 0.407 | 9.0 | 333 | 1.3020 | 0.5493 | | 0.2329 | 10.0 | 370 | 1.2567 | 0.6020 | | 0.2305 | 11.0 | 407 | 1.3148 | 0.6156 | | 0.2098 | 12.0 | 444 | 1.2928 | 0.6241 | | 0.1595 | 13.0 | 481 | 1.5325 | 0.5629 | | 0.1515 | 14.0 | 518 | 1.4402 | 0.6156 | | 0.1429 | 15.0 | 555 | 1.4456 | 0.6276 | | 0.1812 | 16.0 | 592 | 1.5088 | 0.5663 | | 0.1169 | 17.0 | 629 | 1.6266 | 0.5850 | | 0.1375 | 18.0 | 666 | 1.5252 | 0.6173 | | 0.0907 | 19.0 | 703 | 1.6055 | 0.6088 | | 0.1003 | 20.0 | 740 | 1.5785 | 0.6003 | | 0.0756 | 21.0 | 777 | 1.6485 | 0.5850 | | 0.0641 | 22.0 | 814 | 1.6257 | 0.6190 | | 0.0387 | 23.0 | 851 | 1.6758 | 0.6105 | | 0.0341 | 24.0 | 888 | 1.7239 | 0.6088 | | 0.0227 | 25.0 | 925 | 1.7956 | 0.6020 | | 0.0247 | 26.0 | 962 | 1.7542 | 0.6037 | | 0.014 | 27.0 | 999 | 1.7693 | 0.6139 | | 0.0152 | 28.0 | 1036 | 1.8133 | 0.5969 | | 0.0125 | 29.0 | 1073 | 1.8082 | 0.6037 | | 0.0116 | 30.0 | 1110 | 1.8129 | 0.5969 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
peterwilli/photon
peterwilli
2023-07-16T08:26:36Z
48
0
diffusers
[ "diffusers", "art", "en", "license:openrail", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-16T08:24:07Z
--- license: openrail language: - en tags: - art --- Realistic SD1.5 model, ported from SafeTensors to Diffusers. Original is here: https://civitai.com/models/84728/photon
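The card above names the checkpoint but gives no loading code. A hedged sketch of loading a Diffusers-format SD1.5 checkpoint such as this one; the prompt, dtype, and CUDA device are assumptions:

```python
# Hedged sketch (not from the card): loading a Diffusers-format SD1.5 checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "peterwilli/photon", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

image = pipe("a photo of a lighthouse at sunset, realistic").images[0]
image.save("photon_sample.png")
```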
watcharakorn/whisper-small-th-v2
watcharakorn
2023-07-16T08:24:22Z
77
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "th-asr-leaderboard", "generated_from_trainer", "th", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-16T08:21:55Z
--- language: - th license: apache-2.0 base_model: openai/whisper-small tags: - th-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small th - mix dataset v.2 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: th split: test args: 'config: th, split: test' metrics: - name: Wer type: wer value: 0.37791454289122656 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small th - mix dataset v.2 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2980 - Wer: 0.3779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3654 | 0.26 | 1000 | 0.2980 | 0.3779 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
digiplay/polla_mix_2.4D
digiplay
2023-07-16T08:23:45Z
334
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-16T06:56:58Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/110130?modelVersionId=118734 Simple image I made thru Huggingface's API : ![827c852a-171d-4875-a5d1-f226ca9e82ae.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/XBES3_gljSSXufKpgJNoE.jpeg) prompt : > pink spider with pink heart symbol ***Original Author's DEMO images :*** ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d3bcc3e4-5612-4947-9f9c-c76e5347b67d/width=1024/00055-114656479-1boy,%20armor,%20arthur_pendragon_(fate),%20blonde_hair,%20commentary_request,%20fate_prototype,%20fate_(series),%20green_eyes,%20hood,%20male_foc.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6aa33f59-4e0d-48a3-84e9-ef9c2aa1ba4d/width=1024/00125-1371705934-1boy,%20bad_id,%20bad_pixiv_id,%20bandage_on_face,%20bandages,%20black_hair,%20blue_eyes,%20chain,%20hat,%20jojo_no_kimyou_na_bouken,%20kujo_jotaro,.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/05ded902-bd43-4ad5-8aaa-53ca6b2b1f7d/width=1024/00052-114656476-1girl,%20black_hair,%20blush,%20closed_eyes,%20closed_mouth,%20commentary_request,%20drawn_wings,%20from_side,%20haikyuu!!,%20happy_birthday,%20jack.jpeg)
gpcarl123/resnet18_mnist
gpcarl123
2023-07-16T08:16:35Z
0
0
timm
[ "timm", "en", "dataset:mnist", "model-index", "region:us" ]
null
2023-07-16T07:48:41Z
--- language: - en library_name: timm datasets: - mnist metrics: - accuracy model-index: - name: resnet18_mnist results: - task: type: image-classification dataset: name: MNIST type: mnist metrics: - type: accuracy value: 0.9936 --- # Usage ```python import torch import timm import torchvision from torch.utils import data from torchvision import datasets, transforms MNIST_PATH = './datasets/mnist' net = timm.create_model("resnet18", pretrained=False, num_classes=10) net.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False) net.load_state_dict( torch.hub.load_state_dict_from_url( "https://huggingface.co/gpcarl123/resnet18_mnist/resolve/main/resnet18_mnist.pth", map_location="cpu", file_name="resnet18_mnist.pth", ) ) preprocessor = torchvision.transforms.Normalize((0.1307,), (0.3081,)) transform = transforms.Compose([transforms.ToTensor()]) test_set = datasets.MNIST(root=MNIST_PATH, train=False, download=True, transform=transform) test_loader = data.DataLoader(test_set, batch_size=5, shuffle=False, num_workers=2) for images, target in test_loader: print(net(preprocessor(images))) print(target) break ```
imgeaslikok/flan-t5-definition-en-large-taboo-for-llms-deft
imgeaslikok
2023-07-16T08:04:58Z
160
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-15T11:31:14Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-definition-en-large-taboo-for-llms-deft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-definition-en-large-taboo-for-llms-deft This model is a fine-tuned version of [ltg/flan-t5-definition-en-large](https://huggingface.co/ltg/flan-t5-definition-en-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0332 - Rouge1: 33.5241 - Rouge2: 16.8064 - Rougel: 30.2969 - Rougelsum: 30.2909 - Gen Len: 16.5819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.6185 | 0.62 | 100 | 2.1816 | 33.3077 | 15.1203 | 28.9167 | 28.8557 | 17.7666 | | 2.3457 | 1.24 | 200 | 2.0990 | 33.2477 | 16.1885 | 29.5227 | 29.4474 | 16.7143 | | 2.1751 | 1.85 | 300 | 2.0604 | 33.5161 | 16.4732 | 30.0261 | 30.0036 | 16.3031 | | 2.0749 | 2.47 | 400 | 2.0392 | 33.1594 | 16.8128 | 30.0222 | 30.0057 | 16.5401 | | 2.035 | 3.09 | 500 | 2.0332 | 33.5241 | 16.8064 | 30.2969 | 30.2909 | 16.5819 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
digiplay/polla_mix_2.5D
digiplay
2023-07-16T07:56:07Z
50
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-16T06:57:17Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/110130?modelVersionId=118741 Sample image I made thru Huggingface's API : ![012d0578-2ea4-4041-986c-411c3ebd2460.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/pyWHxPdufQnASl8CNW5qT.jpeg) Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/486ce102-be6c-4ea6-9a2c-3790a8e034b7/00010-1892571106-1girl,%20black_gloves,%20blonde_hair,%20blue_eyes,%20breasts,%20dress,%20fang,%20gloves,%20hair_ribbon,%20hat,%20holding,%20long_hair,%20open_mouth,%20pan.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9ad286ca-8c62-4d45-980a-c0993ed05259/00297-1401259144-1boy,%20bag,%20baseball_cap,%20black_background,%20black_bag,%20black_pants,%20blue_jacket,%20brown_eyes,%20brown_hair,%20bubble,%20commentary_reque.jpeg)
manmyung/a2c-AntBulletEnv-v0
manmyung
2023-07-16T07:45:04Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T07:43:55Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1954.34 +/- 180.80 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
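A minimal loading sketch to fill in the TODO above (not from the original card; the checkpoint filename is an assumption, so check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the .zip filename is assumed, not confirmed by the card.
checkpoint = load_from_hub(
    repo_id="manmyung/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)  # running it also needs a PyBullet AntBulletEnv-v0 environment
```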
DanGalt/openai-finetuned-minds14
DanGalt
2023-07-16T07:39:51Z
85
0
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:PolyAI/minds14", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-16T07:39:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - PolyAI/minds14 metrics: - wer model-index: - name: openai-finetuned-minds14 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: PolyAI/minds14 type: PolyAI/minds14 config: en-US split: train args: en-US metrics: - name: Wer type: wer value: 0.31463990554899646 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai-finetuned-minds14 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset. It achieves the following results on the evaluation set: - Loss: 0.6316 - Wer Ortho: 0.3122 - Wer: 0.3146 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:| | 0.0003 | 17.86 | 500 | 0.6316 | 0.3122 | 0.3146 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
HemrajS/LORA
HemrajS
2023-07-16T07:34:19Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-16T07:34:18Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
abusiddik/falcon-7b-qlora-chat-support-bot-faq
abusiddik
2023-07-16T07:31:36Z
5
0
peft
[ "peft", "region:us" ]
null
2023-07-16T07:26:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: True - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
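A minimal loading sketch, not part of the original card, assuming the adapter config points at a Falcon-7B base as the repo name suggests:

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "abusiddik/falcon-7b-qlora-chat-support-bot-faq"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model named in the adapter config (assumed to be a Falcon-7B checkpoint).
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the QLoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
```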
sgarg/falcon-7b-qlora-fiqa-finbot-v2
sgarg
2023-07-16T07:21:22Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-16T05:48:35Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0.dev0
harithapliyal/distilbert-base-uncased-finetuned-squad
harithapliyal
2023-07-16T07:12:30Z
61
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-16T05:09:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: harithapliyal/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # harithapliyal/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9661 - Train End Logits Accuracy: 0.7320 - Train Start Logits Accuracy: 0.6921 - Validation Loss: 1.1291 - Validation End Logits Accuracy: 0.6971 - Validation Start Logits Accuracy: 0.6623 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.4910 | 0.6123 | 0.5711 | 1.1731 | 0.6869 | 0.6507 | 0 | | 0.9661 | 0.7320 | 0.6921 | 1.1291 | 0.6971 | 0.6623 | 1 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-70-15-15
hafidikhsan
2023-07-16T07:06:13Z
103
0
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-07-16T07:05:15Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-70-15-15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-70-15-15 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0328 - Accuracy: 0.7827 - F1: 0.7812 - Precision: 0.7819 - Recall: 0.7827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.7945 | 1.0 | 438 | 0.8709 | 0.5573 | 0.5239 | 0.5859 | 0.5573 | | 0.8011 | 2.0 | 876 | 0.7536 | 0.6533 | 0.6340 | 0.6534 | 0.6533 | | 0.6528 | 3.0 | 1314 | 0.6918 | 0.732 | 0.7295 | 0.7313 | 0.732 | | 0.3574 | 4.0 | 1752 | 0.8670 | 0.7573 | 0.7540 | 0.7552 | 0.7573 | | 0.0999 | 5.0 | 2190 | 1.0217 | 0.7627 | 0.7585 | 0.7619 | 0.7627 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
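A minimal inference sketch (not from the original card); the class labels, expected audio format, and file name below are assumptions to verify against the repo:

```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="hafidikhsan/wav2vec2-large-xlsr-53-english-pronunciation-evaluation-dp-70-15-15",
)
# "utterance.wav" is a placeholder path; label names come from the model's config.
print(classifier("utterance.wav"))
```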
digiplay/hellofantasytime_v1.22
digiplay
2023-07-16T07:00:33Z
391
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-13T09:19:29Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/108289?modelVersionId=116540 Sample image I made thru Huggingface's API : ![e8d72193-e8ab-4288-b287-43c69c0a286a.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/jmB3JX9BimTzS9WQSgYmE.jpeg) Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/3aeca71a-d571-4a64-bed5-dd26c7aa006d/width=768/379582-3823481811-((Best%20quality)),%20((masterpiece)),%20(detailed_1.4),%203D,%20an%20image%20of%20a%20beautiful%20cyberpunk%20female%20with%20all%20black%20armour,HDR%20(High.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1ac3d062-0c4a-4927-9cd7-7d52b15cdfae/width=768/379581-60658536-landscape,%20manchupicchu%20at%20dawn,%20epic,%20fog,%20temple%20stone,%20ornament,%20ornate,%20details,%20forest,%20dim%20light,%20mountain,%20advntr.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c16e0834-2b5b-4344-af38-d2bacf8f8a68/width=768/379582-532568519-no%20humans,%20landscape,%20oil%20on%20matte%20canvas,%20sharp%20details,%20the%20expanse%20scifi%20spacescape%20ceres%20colony,%20intricate,%20highly%20detailed,.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a98c9c6f-0f64-47db-b26b-55851b73a716/width=768/379581-3406315035-no%20humans,(best%20quality,%20masterpiece),%20green%20dinosaur,%20(two%20hands_1.2),(two%20legs_1.4),(one%20tail_1.2),standing,solo,%20sharp%20teeth,.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/3d7f258b-70d6-4f53-89c1-812190bd2421/width=768/379581-1415737129-(masterpiece,%20best%20quality),%20black%20girl,%20curly%20hair,%20barista.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9431b6e0-096b-480b-9fa1-4554724aaf47/width=768/379581-563112602-Boutique,best%20quality,Gold%20jewelry,(slip%20out%20feet),Fairy%20skin,(Fidelity%20_1.2),Standing,Super%20Detailed,realistic,High%20quality,Mov.jpeg)
laserchalk/kangaroo-training-part-10
laserchalk
2023-07-16T06:53:40Z
6
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-16T06:39:24Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### kangaroo-training-part-10 Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
nolanaatama/crsfnhllvnrvcv250pchszmbllth
nolanaatama
2023-07-16T06:48:04Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-16T06:41:26Z
--- license: creativeml-openrail-m ---
amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_v3
amirabdullah19852020
2023-07-16T06:27:08Z
59
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-07-16T06:26:44Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_v3") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_v3") model = AutoModelForCausalLMWithValueHead.from_pretrained("amirabdullah19852020/pythia_70m_ppo_imdb_sentiment_v3") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
Sucial/so-vits-svc4.1-sanwu
Sucial
2023-07-16T05:59:47Z
4
3
transformers
[ "transformers", "so-vits-svc", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
2023-07-16T05:57:25Z
--- license: cc-by-sa-4.0 tags: - so-vits-svc --- # so-vits-svc4.1-sanwu ## Official project: https://github.com/svc-develop-team/so-vits-svc ## How to use? 1. install requirements 2. download the pretrained model [checkpoint_best_legacy_500.pt](https://ibm.box.com/s/z1wgl1stco8ffooyatzdwsqn2psd9lrr) and put it into `./pretrain` 3. put `sanwu_100800.pth`, `feature_and_index.pkl`, `kmeans_10000.pt` into `./logs/44k` 4. put `config.json` into `./config` 5. enjoy! ## The following is quoted from the official documentation ## Inference Use [inference_main.py](inference_main.py) ```shell # Example python inference_main.py -m "logs/44k/G_30400.pth" -c "configs/config.json" -n "君の知らない物語-src.wav" -t 0 -s "nen" ``` Required arguments: + `-m` | `--model_path`: path to the model + `-c` | `--config_path`: path to the config file + `-n` | `--clean_names`: list of wav file names, placed in the raw folder + `-t` | `--trans`: pitch shift, positive or negative (in semitones) + `-s` | `--spk_list`: name of the target speaker for synthesis + `-cl` | `--clip`: forced audio slicing; the default 0 means automatic slicing; unit: seconds Optional arguments: see the next section for details on some of these + `-lg` | `--linear_gradient`: crossfade length between two audio slices; adjust this value if forced slicing makes the vocals discontinuous, otherwise keep the default 0; unit: seconds + `-f0p` | `--f0_predictor`: F0 predictor to use, one of crepe, pm, dio, harvest; default pm (note: crepe applies a mean filter to the original F0) + `-a` | `--auto_predict_f0`: automatically predict pitch for speech conversion; do not enable this when converting singing, or it will go badly out of tune + `-cm` | `--cluster_model_path`: path to the cluster model or feature-retrieval index; if neither was trained, any value will do + `-cr` | `--cluster_infer_ratio`: ratio of the clustering or feature-retrieval scheme, range 0-1; leave it at 0 if no cluster model or feature-retrieval index was trained + `-eh` | `--enhance`: whether to use the NSF_HIFIGAN enhancer; it can improve audio quality for models trained on little data, but hurts well-trained models; off by default + `-shd` | `--shallow_diffusion`: whether to use shallow diffusion, which can fix some electronic-sounding artifacts; off by default; when enabled, the NSF_HIFIGAN enhancer is disabled + `-usm` | `--use_spk_mix`: whether to use speaker fusion / dynamic voice mixing + `-lea` | `--loudness_envelope_adjustment`: how much of the input source's loudness envelope replaces the output's; the closer to 1, the more the output loudness envelope is used + `-fr` | `--feature_retrieval`: whether to use feature retrieval; if enabled, the cluster model is disabled, and the cm and cr arguments become the feature-retrieval index path and mixing ratio Shallow-diffusion settings: + `-dm` | `--diffusion_model_path`: diffusion model path + `-dc` | `--diffusion_config_path`: diffusion model config path + `-ks` | `--k_step`: number of diffusion steps; larger values get closer to the diffusion model's result; default 100 + `-od` | `--only_diffusion`: pure diffusion mode; does not load the sovits model and runs inference with the diffusion model only + `-se` | `--second_encoding`: second encoding; re-encodes the source audio before shallow diffusion; a hit-or-miss option that sometimes helps and sometimes hurts ### Note If you run inference with the `whisper-ppg` speech encoder, set `--clip` to 25 and `-lg` to 1, otherwise inference will not work properly. ## 🤔 Optional features If you are already happy with the results above, or do not follow what comes next, you can ignore the rest of this section; it does not affect normal model use (these options have a fairly small impact; they may help a little on some specific data, but in most cases the difference is hardly noticeable). ### Automatic F0 prediction Training a 4.0 model also trains an F0 predictor, so automatic pitch prediction can be enabled for speech conversion; if the result is poor you can still set the pitch manually. Do not enable this when converting singing!!! It will go badly out of tune!! + Just set auto_predict_f0 to true in inference_main ### Clustering-based timbre-leakage control Overview: the clustering scheme can reduce timbre leakage so the trained model sounds more like the target voice (though the effect is not especially obvious), but clustering alone degrades articulation (the output becomes slurred, which is very noticeable). This model uses a fusion approach that linearly controls the ratio between the clustering and non-clustering schemes, so you can manually tune the balance between "sounds like the target voice" and "clear articulation" and find a good compromise. None of the earlier steps need to change to use clustering; you only need to train an additional cluster model. The benefit is limited, but so is the training cost. + Training: + train on a machine with a decent CPU; in my experience, on a 6-core Tencent Cloud CPU it takes about 4 minutes per speaker + run `python cluster/train_cluster.py`; the model output will be at `logs/44k/kmeans_10000.pt` + the cluster model can now also be trained on GPU by running `python cluster/train_cluster.py --gpu` + Inference: + set `cluster_model_path` in `inference_main.py` + set `cluster_infer_ratio` in `inference_main.py`: `0` means no clustering at all, `1` means clustering only; `0.5` is usually fine ### Feature retrieval Overview: like clustering, it reduces timbre leakage, with slightly better articulation than clustering, but it slows down inference. It uses the same fusion approach, so the ratio between feature retrieval and no feature retrieval can be controlled linearly. + Training: after generating the hubert and f0 features, run: ```shell python train_index.py -c configs/config.json ``` The model output will be at `logs/44k/feature_and_index.pkl` + Inference: + first pass `--feature_retrieval`; the clustering scheme then switches automatically to feature retrieval + set `cluster_model_path` in `inference_main.py` to the model output file + set `cluster_infer_ratio` in `inference_main.py`: `0` means no feature retrieval at all, `1` means feature retrieval only; `0.5` is usually fine ### Static voice mixing **See the static voice fusion tool under Tools / Experimental features in `webUI.py`.** Overview: this feature can merge several voice models into one (a convex or linear combination of the models' parameters), creating a voice that does not exist in reality. **Notes:** 1. This feature only supports single-speaker models 2. If you insist on using multi-speaker models, make sure they all have the same number of speakers, so that voices under the same SpeakerID can be mixed 3. Make sure the model field in the config.json of every model to be mixed is identical 4. The resulting mixed model can use the config.json of any of the source models, but cluster models cannot be used with it 5. When uploading models in batch, it is best to put them in one folder and select them all at once 6. The recommended mixing-ratio range is 0-100; other values also work, but in linear-combination mode the result is unpredictable 7. After mixing, the file is saved in the project root directory as output.pth 8. Convex-combination mode applies Softmax to the mixing ratios so they sum to 1, while linear-combination mode does not ### Dynamic voice mixing **See the description of dynamic voice mixing in `spkmix.py`** Speaker mixing track format: speaker ID : \[\[start time 1, end time 1, start value 1, end value 1], [start time 2, end time 2, start value 2, end value 2]] Each start time must equal the previous segment's end time; the first start time must be 0 and the last end time must be 1 (times range from 0 to 1) All speakers must be listed; for unused speakers just write \[\[0., 1., 0., 0.]] The fusion values can be anything; within each time segment the value changes linearly from the start value to the end value, and the implementation automatically keeps the linear combination summing to 1 (the convex-combination condition), so it is safe to use Pass the `--use_spk_mix` flag at inference time to enable dynamic voice mixing ## 📚 Some legal references #### Anyone in any country or region, and any organization or individual, using this project must comply with the following laws #### Civil Code of the People's Republic of China ##### Article 1019 No organization or individual may infringe another person's portrait rights by defaming or defacing their image, or by forging it by means of information technology. Without the consent of the portrait-rights holder, no one may produce, use, or publish their portrait, except as otherwise provided by law. Without the consent of the portrait-rights holder, the rights holder of a portrait work may not use or publish the portrait by publishing, reproducing, distributing, renting, or exhibiting it, or by other means. The protection of a natural person's voice is governed by reference to the relevant provisions on portrait-rights protection. ##### Article 1024 [Right to reputation] Civil subjects enjoy the right to reputation. No organization or individual may infringe another person's right to reputation by insult, defamation, or other means. ##### Article 1027 [Works infringing the right to reputation] Where a published literary or artistic work describes real people and events, or a specific person, and contains insulting or defamatory content that infringes another person's right to reputation, the injured party has the right to demand that the author bear civil liability in accordance with the law. Where a published literary or artistic work does not describe a specific person, and merely contains plot elements resembling that person's circumstances, the author does not bear civil liability. #### [Constitution of the People's Republic of China](http://www.gov.cn/guoqing/2018-03/22/content_5276318.htm) #### [Criminal Law of the People's Republic of China](http://gongbao.court.gov.cn/Details/f8e30d0689b23f57bfc782d21035c3.html?sw=中华人民共和国刑法) #### [Civil Code of the People's Republic of China](http://gongbao.court.gov.cn/Details/51eb6750b8361f79be8f90d09bc202.html)
lovelyxs/rl_course_vizdoom_health_gathering_supreme
lovelyxs
2023-07-16T05:56:49Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T05:56:44Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 13.28 +/- 4.85 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r lovelyxs/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
diogopaes10/007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k
diogopaes10
2023-07-16T05:23:43Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-07-16T04:56:59Z
--- license: mit tags: - generated_from_trainer metrics: - f1 - accuracy - precision - recall model-index: - name: 007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 007-microsoft-deberta-v3-base-finetuned-yahoo-80_20k This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8060 - F1: 0.7514 - Accuracy: 0.7552 - Precision: 0.7512 - Recall: 0.7552 - System Ram Used: 4.1778 - System Ram Total: 83.4807 - Gpu Ram Allocated: 2.0903 - Gpu Ram Cached: 34.3125 - Gpu Ram Total: 39.5640 - Gpu Utilization: 44 - Disk Space Used: 36.0258 - Disk Space Total: 78.1898 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall | System Ram Used | System Ram Total | Gpu Ram Allocated | Gpu Ram Cached | Gpu Ram Total | Gpu Utilization | Disk Space Used | Disk Space Total | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|:------:|:---------------:|:----------------:|:-----------------:|:--------------:|:-------------:|:---------------:|:---------------:|:----------------:| | 1.3512 | 0.15 | 375 | 0.9418 | 0.7160 | 0.7189 | 0.7210 | 0.7189 | 3.9586 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 24.9904 | 78.1898 | | 0.9581 | 0.3 | 750 | 0.8981 | 0.7232 | 0.7298 | 0.7301 | 0.7298 | 3.9108 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9906 | 78.1898 | | 0.9184 | 0.45 | 1125 | 0.8941 | 0.7248 | 0.7316 | 0.7301 | 0.7316 | 3.8717 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9910 | 78.1898 | | 0.8716 | 0.6 | 1500 | 0.8481 | 0.7368 | 0.7391 | 0.7414 | 0.7391 | 3.9030 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 24.9913 | 78.1898 | | 0.8564 | 0.75 | 1875 | 0.8394 | 0.7379 | 0.7440 | 0.7423 | 0.7440 | 3.8964 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 24.9915 | 78.1898 | | 0.8359 | 0.9 | 2250 | 0.8371 | 0.7347 | 0.7403 | 0.7417 | 0.7403 | 3.8917 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 48 | 24.9917 | 78.1898 | | 0.7896 | 1.05 | 2625 | 0.8277 | 0.7369 | 0.7435 | 0.7461 | 0.7435 | 4.1488 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 29.8274 | 78.1898 | | 0.7368 | 1.2 | 3000 | 0.8204 | 0.7426 | 0.7473 | 0.7468 | 0.7473 | 4.1447 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 45 | 29.8276 | 78.1898 | | 0.72 | 1.35 | 3375 | 0.8199 | 0.7455 | 0.7486 | 0.7467 | 0.7486 | 3.9562 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 29.8279 | 78.1898 | | 0.7333 | 1.5 | 3750 | 0.7991 | 0.7488 | 0.7524 | 0.7496 | 0.7524 | 3.9475 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 45 | 29.8282 | 78.1898 | | 0.7116 | 1.65 | 4125 | 0.8149 | 0.7470 | 0.7499 | 0.7497 | 0.7499 | 3.9456 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 29.8285 | 78.1898 | | 0.7177 | 1.8 | 4500 | 0.7880 | 0.7523 | 0.7558 | 0.7529 | 0.7558 | 3.9296 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 29.8287 | 78.1898 
| | 0.7151 | 1.95 | 4875 | 0.7949 | 0.7509 | 0.7540 | 0.7507 | 0.7540 | 3.9427 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 41 | 29.8294 | 78.1898 | | 0.657 | 2.1 | 5250 | 0.8097 | 0.7500 | 0.7537 | 0.7506 | 0.7537 | 4.1520 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 43 | 33.9634 | 78.1898 | | 0.6218 | 2.25 | 5625 | 0.8049 | 0.7485 | 0.7528 | 0.7484 | 0.7528 | 4.1390 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 44 | 33.9635 | 78.1898 | | 0.6185 | 2.4 | 6000 | 0.8093 | 0.7511 | 0.7543 | 0.7513 | 0.7543 | 3.9715 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 33.9637 | 78.1898 | | 0.6271 | 2.55 | 6375 | 0.8019 | 0.7517 | 0.7550 | 0.7521 | 0.7550 | 3.9697 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 33.9638 | 78.1898 | | 0.6103 | 2.7 | 6750 | 0.8026 | 0.7519 | 0.7554 | 0.7523 | 0.7554 | 3.9622 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 46 | 33.9639 | 78.1898 | | 0.6111 | 2.85 | 7125 | 0.8056 | 0.7507 | 0.7546 | 0.7511 | 0.7546 | 3.9783 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 41 | 33.9640 | 78.1898 | | 0.6015 | 3.0 | 7500 | 0.8060 | 0.7514 | 0.7552 | 0.7512 | 0.7552 | 3.9702 | 83.4807 | 2.0903 | 34.3125 | 39.5640 | 42 | 33.9642 | 78.1898 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
kojitakahiro/webui
kojitakahiro
2023-07-16T05:21:17Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-05-12T07:09:31Z
--- license: creativeml-openrail-m ---
BaleChen/dqn-SpaceInvadersNoFrameskip-v4-test
BaleChen
2023-07-16T05:13:06Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T05:12:23Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 545.00 +/- 104.33 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BaleChen -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga BaleChen -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga BaleChen ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
NobodyExistsOnTheInternet/nous7badaptor
NobodyExistsOnTheInternet
2023-07-16T05:06:00Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-16T04:54:15Z
--- library_name: peft --- Use teknium's 7B model. I accidentally trained on Vicuna 1.1 and not Alpaca (the original model).
weekcircle/wav2vec2-large-mms-1b-korean-colab
weekcircle
2023-07-16T04:57:38Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_13_0", "base_model:weekcircle/wav2vec2-large-mms-1b-korean-colab", "base_model:finetune:weekcircle/wav2vec2-large-mms-1b-korean-colab", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-11T12:39:54Z
--- license: cc-by-nc-4.0 base_model: weekcircle/wav2vec2-large-mms-1b-korean-colab tags: - generated_from_trainer datasets: - common_voice_13_0 metrics: - wer model-index: - name: wav2vec2-large-mms-1b-korean-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_13_0 type: common_voice_13_0 config: ko split: test args: ko metrics: - name: Wer type: wer value: 0.9959718026183283 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-mms-1b-korean-colab This model is a fine-tuned version of [weekcircle/wav2vec2-large-mms-1b-korean-colab](https://huggingface.co/weekcircle/wav2vec2-large-mms-1b-korean-colab) on the common_voice_13_0 dataset. It achieves the following results on the evaluation set: - Loss: 8.8258 - Wer: 0.9960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.4313 | 2.63 | 100 | 7.9123 | 0.9839 | | 2.3616 | 5.26 | 200 | 7.9118 | 0.9930 | | 1.859 | 7.89 | 300 | 7.9977 | 0.9909 | | 1.4135 | 10.53 | 400 | 8.3395 | 1.0040 | | 1.1407 | 13.16 | 500 | 8.5900 | 0.9940 | | 0.9639 | 15.79 | 600 | 8.6300 | 0.9950 | | 0.7991 | 18.42 | 700 | 8.8258 | 0.9960 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Ahwaztime/Ahwazt
Ahwaztime
2023-07-16T04:43:19Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2023-07-16T04:43:19Z
--- license: bigscience-openrail-m ---
LeoLyu/finetuning-sentiment-model-3000-samples
LeoLyu
2023-07-16T04:39:09Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-04T01:18:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: test args: plain_text metrics: - name: Accuracy type: accuracy value: 0.88 - name: F1 type: f1 value: 0.880794701986755 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2903 - Accuracy: 0.88 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
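A short usage sketch, not part of the original card; the mapping from raw labels to positive/negative sentiment is an assumption to verify against the model config:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="LeoLyu/finetuning-sentiment-model-3000-samples",
)
# Returns a label (e.g. LABEL_0/LABEL_1 unless the config names them) and a confidence score.
print(classifier("This movie was surprisingly good."))
```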
NasimB/children-rarity-all-guten-log-rarity-all
NasimB
2023-07-16T04:21:14Z
9
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-16T02:19:49Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: children-rarity-all-guten-log-rarity-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # children-rarity-all-guten-log-rarity-all This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.7036 | 0.29 | 500 | 5.6365 | | 5.348 | 0.58 | 1000 | 5.2064 | | 4.99 | 0.87 | 1500 | 4.9589 | | 4.7208 | 1.16 | 2000 | 4.8071 | | 4.5602 | 1.46 | 2500 | 4.6761 | | 4.4513 | 1.75 | 3000 | 4.5690 | | 4.3332 | 2.04 | 3500 | 4.4907 | | 4.1308 | 2.33 | 4000 | 4.4479 | | 4.1002 | 2.62 | 4500 | 4.3912 | | 4.0711 | 2.91 | 5000 | 4.3370 | | 3.8621 | 3.2 | 5500 | 4.3334 | | 3.803 | 3.49 | 6000 | 4.3002 | | 3.7865 | 3.79 | 6500 | 4.2683 | | 3.6992 | 4.08 | 7000 | 4.2633 | | 3.5158 | 4.37 | 7500 | 4.2591 | | 3.5163 | 4.66 | 8000 | 4.2433 | | 3.501 | 4.95 | 8500 | 4.2300 | | 3.3525 | 5.24 | 9000 | 4.2437 | | 3.3213 | 5.53 | 9500 | 4.2424 | | 3.3235 | 5.82 | 10000 | 4.2416 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
laserchalk/kangaroo-training-part-7
laserchalk
2023-07-16T04:15:03Z
2
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-16T04:04:01Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Kangaroo-training-part-7 Dreambooth model trained by laserchalk with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5
Evan-Lin
2023-07-16T03:40:20Z
48
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-07-15T19:51:18Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text2text-generation", model="Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForSeq2SeqLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5") model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("Evan-Lin/Bart-RL-many-keywordmax-entailment-attractive-reward5") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
KonekoSushi/Ado
KonekoSushi
2023-07-16T03:36:21Z
0
2
null
[ "rvc", "rvc2", "japanese artist", "artist ", "ja", "en", "region:us" ]
null
2023-07-15T23:01:30Z
--- language: - ja - en tags: - rvc - rvc2 - japanese artist - 'artist ' ---
OptimalScale/robin-33b-v2-delta
OptimalScale
2023-07-16T03:14:37Z
1,548
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.12420", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-28T06:02:53Z
--- inference: false --- # Robin Model Card ## Model Details Robin is a series of models finetuned from LLaMA on several high-quality datasets. - **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/OptimalScale/LMFlow/ - **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1 - **Paper:** https://arxiv.org/abs/2306.12420 - **Demo:** https://lmflow.com/ ## Uses Robin is primarily intended for research on large language models and chatbots, catering to users specializing in natural language processing, machine learning, and artificial intelligence research. ## How to Get Started with the Model We provide four kinds of demos including: - Online Service: If you don't want to run any code and just want to try our models, we deploy our instruction-tuned LLaMA for you to try. - Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab. - Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab. - Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources. Please refer to https://github.com/OptimalScale/LMFlow#demos ## Training Details Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz). The new training split is created by merging the following datasets: - ShareGPT: randomly sampled 50K English and 10K Chinese examples from ShareGPT. - GPT-4-LLM: 52K English examples from GPT-4-LLM. - BELLE: randomly sampled 80K Chinese examples from BELLE. See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf). ## Evaluation Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418). See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf). ## Citation If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420): ``` @misc{lmflow, author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang}, title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://optimalscale.github.io/LMFlow/}}, } ```
OptimalScale/robin-13b-v2-delta
OptimalScale
2023-07-16T03:14:08Z
1,546
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.12420", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-28T05:55:54Z
--- inference: false --- # Robin Model Card ## Model Details Robin is a series of models finetuned from LLaMA on several high-quality datasets. - **Developed by:** [LMFlow](https://github.com/OptimalScale/LMFlow/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/OptimalScale/LMFlow/ - **Blog:** https://medium.com/@hkust.ml/robin-v2-launches-achieves-unparalleled-performance-on-openllm-4f6886e822c1 - **Paper:** https://arxiv.org/abs/2306.12420 - **Demo:** https://lmflow.com/ ## Uses Robin is primarily intended for research on large language models and chatbots, catering to users specializing in natural language processing, machine learning, and artificial intelligence research. ## How to Get Started with the Model We provide four kinds of demos including: - Online Service: If you don't want to run any code and just want to try our models, we deploy our instruction-tuned LLaMA for you to try. - Colab Chatbot (shell): An interactive shell-based chatbot for you to easily deploy a chatbot on colab. - Colab Chatbot (web): An interactive web-based chatbot for you to easily deploy your own chatbot on colab. - Local Deploy: We also provide a way for you to deploy your model/chatbot locally, which means you can deploy a much larger model than with the previous three methods if you have enough resources. Please refer to https://github.com/OptimalScale/LMFlow#demos ## Training Details Expanding upon the initial idea of self-instruct techniques, we incorporated several different data sources and built a new dataset called [LMFlow Dataset](http://lmflow.org:5000/lmflow_data.tar.gz). The new training split is created by merging the following datasets: - ShareGPT: randomly sampled 50K English and 10K Chinese examples from ShareGPT. - GPT-4-LLM: 52K English examples from GPT-4-LLM. - BELLE: randomly sampled 80K Chinese examples from BELLE. See more details in the "Instruction Tuning" section in our [paper](https://arxiv.org/pdf/2306.12420.pdf). ## Evaluation Robin is evaluated with [LMFlow Benchmark](https://blog.gopenai.com/lmflow-benchmark-an-automatic-evaluation-framework-for-open-source-llms-ef5c6f142418). See more details in this [paper](https://arxiv.org/pdf/2306.12420.pdf). ## Citation If you find this repository useful, please consider giving ⭐ and citing our [paper](https://arxiv.org/abs/2306.12420): ``` @misc{lmflow, author = {Shizhe Diao and Rui Pan and Hanze Dong and KaShun Shum and Jipeng Zhang and Wei Xiong and Tong Zhang}, title = {LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://optimalscale.github.io/LMFlow/}}, } ```
manmyung/ppo-PyramidsTraining
manmyung
2023-07-16T02:53:53Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-07-16T02:53:51Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: manmyung/ppo-PyramidsTraining 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Pamela153/ppo-LunarLander-v2
Pamela153
2023-07-16T02:47:00Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T02:44:30Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 251.70 +/- 12.72 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
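A minimal loading sketch to fill in the TODO above (not from the original card); the checkpoint filename is an assumption:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The .zip filename is assumed; check the repo's file list for the actual name.
checkpoint = load_from_hub(repo_id="Pamela153/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```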
PeterBrendan/pbjsGPT2v2
PeterBrendan
2023-07-16T02:32:02Z
144
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-12T15:07:20Z
--- license: mit widget: - text: bidderTimeout - text: Usebidcache - text: bidderSequence - text: customPriceBucket --- ## Model: GPT-2 ### Model name: pbjsGPT2v2 ### Model description: This fine-tuned version of the GPT-2 model was trained on a subset of 1100+ publisher domains' Prebid config files. Its focus is on sophisticated Prebid publishers. The model provides insights into how these publishers configure their Prebid settings. By inputting a Prebid config setting, such as ***bidderTimeout***, the model generates sample Prebid configuration settings based on the collected data. It aims to assist publishers in understanding different configurations used by sophisticated publishers. ### Intended uses: This model is intended to assist publishers in understanding and exploring how other publishers configure their Prebid settings. It serves as a reference for gaining insights into common configurations, best practices, and different approaches used by top publishers across various domains. ### Limitations: The generated Prebid configuration settings are based on the data from the training set and may not cover all possible configurations or reflect the specific requirements of a particular domain. Publishers should carefully review and adapt the generated configurations to their specific needs and business rules. ### How to use: To use this model, provide a Prebid config setting, such as ***bidderSequence***. The model will generate a sample Prebid configuration related to that input based on the collected data. ### Training data: This model was trained on a subset of 1100+ publisher domains Prebid config files. The dataset was collected from a variety of publishers and represents a wide range of Prebid settings used in the industry. ### Training procedure: The model was fine-tuned using the GPT-2 base model with the aforementioned dataset. ### Evaluation results: The evaluation of this model focuses on its ability to generate coherent and valid Prebid configuration settings based on the provided Prebid config setting. Human evaluators reviewed the generated configurations for relevance and accuracy. ### Safety and bias considerations: The model is trained on data from actual Prebid config files and aims to provide accurate insights into publishers' configurations. However, it's important to note that biases may exist in the original data itself, as the training data is based on real-world configurations. Users should review and validate the generated configurations to ensure they align with their specific requirements and guidelines. Users are encouraged to exercise caution and use their expertise in interpreting and adapting the generated Prebid configurations for their own use. The model should be seen as a helpful tool to gain inspiration and understanding of common Prebid settings but not as a substitute for thorough testing and manual review of the final configurations.
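A short generation sketch, not part of the original card, using one of the widget prompts above; the sampling settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="PeterBrendan/pbjsGPT2v2")
# Prompt with a Prebid config setting, as described above.
result = generator("bidderTimeout", max_new_tokens=64, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```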
monideep2255/spell_correction_M04_V3
monideep2255
2023-07-16T02:10:18Z
3
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-16T00:59:14Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: spell_correction_M04_V3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spell_correction_M04_V3 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 269 | 0.2687 | | 1.8467 | 2.0 | 538 | 0.0361 | | 1.8467 | 3.0 | 807 | 0.0241 | | 0.0357 | 4.0 | 1076 | 0.0198 | | 0.0357 | 5.0 | 1345 | 0.0199 | | 0.0159 | 6.0 | 1614 | 0.0175 | | 0.0159 | 7.0 | 1883 | 0.0179 | | 0.0077 | 8.0 | 2152 | 0.0189 | | 0.0077 | 9.0 | 2421 | 0.0183 | | 0.006 | 10.0 | 2690 | 0.0183 | | 0.006 | 11.0 | 2959 | 0.0191 | | 0.0044 | 12.0 | 3228 | 0.0186 | | 0.0044 | 13.0 | 3497 | 0.0192 | | 0.0033 | 14.0 | 3766 | 0.0189 | | 0.0024 | 15.0 | 4035 | 0.0173 | | 0.0024 | 16.0 | 4304 | 0.0171 | | 0.0026 | 17.0 | 4573 | 0.0183 | | 0.0026 | 18.0 | 4842 | 0.0181 | | 0.0021 | 19.0 | 5111 | 0.0177 | | 0.0021 | 20.0 | 5380 | 0.0174 | | 0.0015 | 21.0 | 5649 | 0.0173 | | 0.0015 | 22.0 | 5918 | 0.0174 | | 0.0016 | 23.0 | 6187 | 0.0178 | | 0.0016 | 24.0 | 6456 | 0.0180 | | 0.0018 | 25.0 | 6725 | 0.0175 | | 0.0018 | 26.0 | 6994 | 0.0171 | | 0.0017 | 27.0 | 7263 | 0.0175 | | 0.0014 | 28.0 | 7532 | 0.0177 | | 0.0014 | 29.0 | 7801 | 0.0178 | | 0.0013 | 30.0 | 8070 | 0.0178 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.12.1+cu102 - Datasets 2.13.1 - Tokenizers 0.13.3
WasuratS/whisper-small-da
WasuratS
2023-07-16T02:07:39Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "da", "dataset:mozilla-foundation/common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-15T15:11:37Z
--- language: - da license: apache-2.0 tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Whisper Small Da - WasuratS results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: da split: test args: da metrics: - name: Wer type: wer value: 23.39882224190943 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Da - WasuratS This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Danish (da) split of the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.6393 - Wer Ortho: 29.0926 - Wer: 23.3988 ## Model description [openai/whisper-small](https://huggingface.co/openai/whisper-small) ## Training and evaluation data [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.218 | 1.61 | 500 | 0.4724 | 30.2496 | 24.7069 | | 0.0628 | 3.22 | 1000 | 0.4825 | 28.8946 | 23.3154 | | 0.0289 | 4.82 | 1500 | 0.5311 | 29.3376 | 23.4666 | | 0.0078 | 6.43 | 2000 | 0.5740 | 29.4627 | 23.6542 | | 0.0032 | 8.04 | 2500 | 0.6070 | 29.0613 | 23.2790 | | 0.0025 | 9.65 | 3000 | 0.6274 | 29.1187 | 23.4770 | | 0.0012 | 11.25 | 3500 | 0.6335 | 29.0978 | 23.3623 | | 0.0011 | 12.86 | 4000 | 0.6393 | 29.0926 | 23.3988 | ### Framework versions - Transformers 4.29.2 - Pytorch 1.13.1+cu117 - Datasets 2.12.0 - Tokenizers 0.13.3
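A minimal transcription sketch for the whisper-small-da card above, using the `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder and the chunk length is an illustrative choice for longer recordings.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="WasuratS/whisper-small-da")

# "danish_sample.wav" is a placeholder path to a Danish speech recording.
result = asr("danish_sample.wav", chunk_length_s=30)
print(result["text"])
```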
LarryAIDraw/anyloracleanlinearmix_v10
LarryAIDraw
2023-07-16T02:02:42Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-16T01:41:22Z
--- license: creativeml-openrail-m --- https://civitai.com/models/107677/anyloracleanlinearmix-clearvae
mitra-mir/setfit_model_Calgary_epochs2_Jul_15_2023
mitra-mir
2023-07-16T02:00:04Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2023-07-16T01:59:53Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 115 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 230, "warmup_steps": 23, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Hex820000/anime_v10
Hex820000
2023-07-16T01:57:47Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-16T01:46:09Z
--- license: creativeml-openrail-m ---
NasimB/guten_rarity_all_cut_19k_shuffled
NasimB
2023-07-16T01:54:07Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-15T23:59:13Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten_rarity_all_cut_19k_shuffled results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten_rarity_all_cut_19k_shuffled This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3157 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6912 | 0.29 | 500 | 5.6363 | | 5.3342 | 0.59 | 1000 | 5.1999 | | 4.9978 | 0.88 | 1500 | 4.9467 | | 4.7092 | 1.17 | 2000 | 4.7986 | | 4.5524 | 1.47 | 2500 | 4.6740 | | 4.4477 | 1.76 | 3000 | 4.5737 | | 4.3238 | 2.05 | 3500 | 4.4934 | | 4.1271 | 2.35 | 4000 | 4.4404 | | 4.1 | 2.64 | 4500 | 4.3886 | | 4.0602 | 2.93 | 5000 | 4.3370 | | 3.8454 | 3.23 | 5500 | 4.3333 | | 3.8039 | 3.52 | 6000 | 4.3005 | | 3.7844 | 3.81 | 6500 | 4.2628 | | 3.6706 | 4.11 | 7000 | 4.2667 | | 3.5198 | 4.4 | 7500 | 4.2607 | | 3.5089 | 4.69 | 8000 | 4.2466 | | 3.4958 | 4.99 | 8500 | 4.2321 | | 3.3358 | 5.28 | 9000 | 4.2473 | | 3.3204 | 5.57 | 9500 | 4.2460 | | 3.3125 | 5.87 | 10000 | 4.2451 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
yzzhong/RL_q_tax_v3
yzzhong
2023-07-16T01:19:33Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-16T01:06:21Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: RL_q_tax_v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="yzzhong/RL_q_tax_v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AbdelSiam/nart-100k-7b-GPTQ
AbdelSiam
2023-07-16T00:41:25Z
7
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-16T00:29:43Z
--- license: cc-by-nc-nd-4.0 ---
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s108_v3_prefix200_manual
KingKazma
2023-07-16T00:15:55Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-16T00:15:54Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
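The PEFT cards in this batch only list the library version, so here is a hedged loading sketch for the adapter above; it assumes a seq2seq base model (the repo name suggests t5-small fine-tuned for XSum summarization) and reads the actual base checkpoint from the adapter config rather than hard-coding it.

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s108_v3_prefix200_manual"
config = PeftConfig.from_pretrained(adapter_id)

# Resolve the base checkpoint from the adapter config instead of guessing it.
base_model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative summarization call; the "summarize:" prefix assumes a T5-style base.
inputs = tokenizer("summarize: <article text here>", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```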
NooberZ/adcmain
NooberZ
2023-07-16T00:05:56Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2023-07-15T23:48:48Z
--- license: bigcode-openrail-m ---
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s55555_v3_prefix200_manual
KingKazma
2023-07-15T23:50:46Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-15T23:50:45Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
GliderMixesYT/RichardAshcroft1997
GliderMixesYT
2023-07-15T23:38:11Z
0
0
null
[ "region:us" ]
null
2023-07-15T23:28:18Z
Voice model for Verve frontman Richard Ashcroft, for his vocal range from 1996-2000.
Liduvina/LLM_A1
Liduvina
2023-07-15T23:36:45Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-15T23:36:39Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
NasimB/cbt-guten-log-rarity-all-no-cut
NasimB
2023-07-15T23:32:37Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-15T21:37:01Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: cbt-guten-log-rarity-all-no-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cbt-guten-log-rarity-all-no-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3166 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6947 | 0.29 | 500 | 5.6397 | | 5.3475 | 0.58 | 1000 | 5.2031 | | 4.991 | 0.87 | 1500 | 4.9524 | | 4.7228 | 1.17 | 2000 | 4.8034 | | 4.563 | 1.46 | 2500 | 4.6832 | | 4.446 | 1.75 | 3000 | 4.5709 | | 4.3323 | 2.04 | 3500 | 4.4920 | | 4.1314 | 2.33 | 4000 | 4.4447 | | 4.1022 | 2.62 | 4500 | 4.3948 | | 4.059 | 2.91 | 5000 | 4.3383 | | 3.8712 | 3.21 | 5500 | 4.3368 | | 3.8024 | 3.5 | 6000 | 4.3008 | | 3.7855 | 3.79 | 6500 | 4.2702 | | 3.6976 | 4.08 | 7000 | 4.2655 | | 3.5207 | 4.37 | 7500 | 4.2612 | | 3.5156 | 4.66 | 8000 | 4.2501 | | 3.5001 | 4.95 | 8500 | 4.2351 | | 3.357 | 5.24 | 9000 | 4.2478 | | 3.3255 | 5.54 | 9500 | 4.2467 | | 3.3217 | 5.83 | 10000 | 4.2455 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
Jonathaniu/alpaca-breast-cancer-13b-mix_data
Jonathaniu
2023-07-15T23:30:49Z
2
0
peft
[ "peft", "region:us" ]
null
2023-07-15T23:30:29Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False ### Framework versions - PEFT 0.4.0.dev0
KingKazma/xsum_t5-small_prefix_tuning_500_10_3000_8_e-1_s6789_v3_manual
KingKazma
2023-07-15T23:19:59Z
1
0
peft
[ "peft", "region:us" ]
null
2023-07-15T23:19:56Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
NasimB/cbt-log-rarity-all-no-cut
NasimB
2023-07-15T23:15:14Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-15T21:20:04Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: cbt-log-rarity-all-no-cut results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cbt-log-rarity-all-no-cut This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3130 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6895 | 0.29 | 500 | 5.6304 | | 5.3369 | 0.58 | 1000 | 5.2048 | | 4.9919 | 0.87 | 1500 | 4.9517 | | 4.7188 | 1.16 | 2000 | 4.8039 | | 4.5541 | 1.46 | 2500 | 4.6726 | | 4.4401 | 1.75 | 3000 | 4.5700 | | 4.333 | 2.04 | 3500 | 4.4973 | | 4.122 | 2.33 | 4000 | 4.4425 | | 4.0972 | 2.62 | 4500 | 4.3886 | | 4.0567 | 2.91 | 5000 | 4.3345 | | 3.8616 | 3.2 | 5500 | 4.3307 | | 3.7938 | 3.49 | 6000 | 4.2967 | | 3.7866 | 3.79 | 6500 | 4.2664 | | 3.6955 | 4.08 | 7000 | 4.2620 | | 3.5098 | 4.37 | 7500 | 4.2572 | | 3.5009 | 4.66 | 8000 | 4.2436 | | 3.4957 | 4.95 | 8500 | 4.2324 | | 3.3439 | 5.24 | 9000 | 4.2435 | | 3.3139 | 5.53 | 9500 | 4.2430 | | 3.3107 | 5.82 | 10000 | 4.2420 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
seny1004/wav2vec2-large-mms-1b-korean-colab
seny1004
2023-07-15T22:55:48Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_13_0", "base_model:facebook/mms-1b-l1107", "base_model:finetune:facebook/mms-1b-l1107", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-07-14T06:47:50Z
--- license: cc-by-nc-4.0 base_model: facebook/mms-1b-l1107 tags: - generated_from_trainer datasets: - common_voice_13_0 metrics: - wer model-index: - name: wav2vec2-large-mms-1b-korean-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_13_0 type: common_voice_13_0 config: ko split: test args: ko metrics: - name: Wer type: wer value: 0.9929506545820745 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-mms-1b-korean-colab This model is a fine-tuned version of [facebook/mms-1b-l1107](https://huggingface.co/facebook/mms-1b-l1107) on the common_voice_13_0 dataset. It achieves the following results on the evaluation set: - Loss: 7.8135 - Wer: 0.9930 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 10.9747 | 2.63 | 100 | 7.8812 | 0.9990 | | 5.9431 | 5.26 | 200 | 8.2212 | 0.9960 | | 5.7372 | 7.89 | 300 | 8.1054 | 0.9930 | | 5.2582 | 10.53 | 400 | 8.2347 | 0.9940 | | 3.8725 | 13.16 | 500 | 7.7536 | 0.9940 | | 3.4454 | 15.79 | 600 | 7.7220 | 0.9930 | | 2.5989 | 18.42 | 700 | 7.8135 | 0.9930 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
crcdng/q-Taxi-v3
crcdng
2023-07-15T22:35:04Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-07-15T19:49:55Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="crcdng/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s108_v3_manual
KingKazma
2023-07-15T22:16:42Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-15T22:16:41Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
Bebezenta/Danni2d
Bebezenta
2023-07-15T21:50:12Z
0
0
null
[ "license:other", "region:us" ]
null
2023-07-15T21:48:15Z
--- license: other --- TAGS: danniashe 1GIRL SOLO LARGE BREASTS REALISTIC NIPPLES LARGE AREOLAE
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s55555_v3_manual
KingKazma
2023-07-15T21:43:50Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-15T21:43:49Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
merthacioglu/roberta-finetuned-subjqa-movies_2
merthacioglu
2023-07-15T21:39:57Z
114
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2023-07-15T14:30:17Z
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: roberta-finetuned-subjqa-movies_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-finetuned-subjqa-movies_2 This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
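A brief extractive question-answering sketch for the roberta-finetuned-subjqa-movies_2 card above; the question/context pair is illustrative, loosely matching the SubjQA movies domain the repo name points to.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="merthacioglu/roberta-finetuned-subjqa-movies_2")

# Illustrative review-style context and subjective question.
answer = qa(
    question="How was the acting?",
    context="The plot dragged in places, but the acting was superb throughout the film.",
)
print(answer["answer"], round(answer["score"], 3))
```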
0sunfire0/a2c-AntBulletEnv-v0
0sunfire0
2023-07-15T21:39:40Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-15T21:38:34Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 2109.96 +/- 104.30 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
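One hedged way to fill in the TODO from the card above: the checkpoint filename is an assumption based on the usual huggingface_sb3 naming convention, and the rollout loop assumes the classic gym API plus pybullet_envs to register AntBulletEnv-v0.

```python
import gym
import pybullet_envs  # assumed dependency; registers AntBulletEnv-v0
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Filename is an assumption (typical huggingface_sb3 convention), not confirmed by the card.
checkpoint = load_from_hub(repo_id="0sunfire0/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```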
lovelyxs/ppo-LunarLander-v2-2
lovelyxs
2023-07-15T21:23:37Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-07-15T20:27:07Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 133.96 +/- 135.43 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 2000000 'learning_rate': 0.0003 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.25 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'lovelyxs/ppo-LunarLander-v2-2' 'batch_size': 512 'minibatch_size': 128} ```
AnupamShankar/anupamshankar
AnupamShankar
2023-07-15T21:07:27Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
text-classification
2023-07-15T20:56:30Z
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # AnupamShankar/anupamshankar This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("AnupamShankar/anupamshankar") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
0sunfire0/Pixelcopter_train_01
0sunfire0
2023-07-15T21:01:20Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-07-15T21:01:01Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Pixelcopter_train_01 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 38.00 +/- 26.76 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
nolanaatama/kylbrflvsksthprkrvcv2300pchrhys
nolanaatama
2023-07-15T20:57:09Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-07-15T20:54:39Z
--- license: creativeml-openrail-m ---
NasimB/guten-mod-rarity-all-end-est-19k
NasimB
2023-07-15T20:51:36Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:generator", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-07-15T18:49:26Z
--- license: mit tags: - generated_from_trainer datasets: - generator model-index: - name: guten-mod-rarity-all-end-est-19k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # guten-mod-rarity-all-end-est-19k This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 4.3119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 6.6905 | 0.29 | 500 | 5.6474 | | 5.341 | 0.59 | 1000 | 5.2080 | | 4.9929 | 0.88 | 1500 | 4.9578 | | 4.716 | 1.17 | 2000 | 4.8093 | | 4.5529 | 1.47 | 2500 | 4.6791 | | 4.4478 | 1.76 | 3000 | 4.5686 | | 4.32 | 2.05 | 3500 | 4.4927 | | 4.133 | 2.35 | 4000 | 4.4466 | | 4.1021 | 2.64 | 4500 | 4.3862 | | 4.0551 | 2.93 | 5000 | 4.3333 | | 3.8497 | 3.23 | 5500 | 4.3300 | | 3.8038 | 3.52 | 6000 | 4.2997 | | 3.7766 | 3.81 | 6500 | 4.2648 | | 3.6682 | 4.11 | 7000 | 4.2638 | | 3.5163 | 4.4 | 7500 | 4.2577 | | 3.5129 | 4.69 | 8000 | 4.2423 | | 3.502 | 4.99 | 8500 | 4.2289 | | 3.3286 | 5.28 | 9000 | 4.2431 | | 3.3215 | 5.58 | 9500 | 4.2421 | | 3.3231 | 5.87 | 10000 | 4.2414 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.11.0+cu113 - Datasets 2.13.0 - Tokenizers 0.13.3
schutzp/lunarLander-PPO-trained-2e7
schutzp
2023-07-15T20:40:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-07-15T20:40:04Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 272.67 +/- 19.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
KingKazma/xsum_t5-small_p_tuning_500_10_3000_16_e-1_s108_v3_manual
KingKazma
2023-07-15T20:36:36Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-15T20:36:35Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
KingKazma/xsum_t5-small_p_tuning_500_10_3000_8_e-1_s6789_v3_manual
KingKazma
2023-07-15T20:06:13Z
0
0
peft
[ "peft", "region:us" ]
null
2023-07-15T20:06:12Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
peft-internal-testing/opt-350m-lora
peft-internal-testing
2023-07-15T19:57:59Z
5
0
peft
[ "peft", "safetensors", "region:us" ]
null
2023-07-15T19:57:58Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
akar49/mri_classifier
akar49
2023-07-15T19:42:47Z
63
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-07-15T17:47:10Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: akar49/mri_classifier results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # akar49/mri_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1032 - Validation Loss: 0.1556 - Train Accuracy: 0.9367 - Epoch: 14 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'SGD', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'momentum': 0.0, 'nesterov': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.6447 | 0.6133 | 0.7004 | 0 | | 0.5405 | 0.5010 | 0.8256 | 1 | | 0.4181 | 0.3917 | 0.8650 | 2 | | 0.3122 | 0.3189 | 0.9058 | 3 | | 0.2474 | 0.3069 | 0.8875 | 4 | | 0.2021 | 0.2733 | 0.9044 | 5 | | 0.1745 | 0.2455 | 0.9100 | 6 | | 0.1591 | 0.2203 | 0.9212 | 7 | | 0.1450 | 0.2350 | 0.9142 | 8 | | 0.1397 | 0.2122 | 0.9198 | 9 | | 0.1227 | 0.2098 | 0.9212 | 10 | | 0.1169 | 0.1754 | 0.9325 | 11 | | 0.1080 | 0.1782 | 0.9339 | 12 | | 0.0971 | 0.1705 | 0.9353 | 13 | | 0.1032 | 0.1556 | 0.9367 | 14 | ### Framework versions - Transformers 4.30.2 - TensorFlow 2.12.0 - Datasets 2.13.1 - Tokenizers 0.13.3
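A hedged inference sketch for the mri_classifier card above; it assumes the repo ships TensorFlow weights (the card describes a Keras fine-tune, hence framework="tf") and that an MRI slice is supplied as an ordinary image file.

```python
from transformers import pipeline

# framework="tf" because the card indicates a Keras/TensorFlow fine-tune of ViT.
classifier = pipeline("image-classification", model="akar49/mri_classifier", framework="tf")

# "mri_slice.png" is a placeholder path to an MRI image.
for prediction in classifier("mri_slice.png"):
    print(prediction["label"], round(prediction["score"], 3))
```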